Aug 19 00:08:19.808695 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 19 00:08:19.808715 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025
Aug 19 00:08:19.808724 kernel: KASLR enabled
Aug 19 00:08:19.808730 kernel: efi: EFI v2.7 by EDK II
Aug 19 00:08:19.808735 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Aug 19 00:08:19.808741 kernel: random: crng init done
Aug 19 00:08:19.808747 kernel: secureboot: Secure boot disabled
Aug 19 00:08:19.808753 kernel: ACPI: Early table checksum verification disabled
Aug 19 00:08:19.808759 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Aug 19 00:08:19.808766 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 19 00:08:19.808772 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808777 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808783 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808789 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808796 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808803 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808858 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808867 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808882 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:08:19.808889 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 19 00:08:19.808895 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 19 00:08:19.808901 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:08:19.808908 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Aug 19 00:08:19.808914 kernel: Zone ranges:
Aug 19 00:08:19.808920 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:08:19.808929 kernel: DMA32 empty
Aug 19 00:08:19.808935 kernel: Normal empty
Aug 19 00:08:19.808941 kernel: Device empty
Aug 19 00:08:19.808947 kernel: Movable zone start for each node
Aug 19 00:08:19.808953 kernel: Early memory node ranges
Aug 19 00:08:19.808959 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Aug 19 00:08:19.808965 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Aug 19 00:08:19.808971 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Aug 19 00:08:19.808977 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Aug 19 00:08:19.808983 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Aug 19 00:08:19.808989 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Aug 19 00:08:19.808995 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Aug 19 00:08:19.809002 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Aug 19 00:08:19.809008 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Aug 19 00:08:19.809014 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 19 00:08:19.809023 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 19 00:08:19.809029 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 19 00:08:19.809036 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 19 00:08:19.809043 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:08:19.809049 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 19 00:08:19.809056 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Aug 19 00:08:19.809062 kernel: psci: probing for conduit method from ACPI.
Aug 19 00:08:19.809068 kernel: psci: PSCIv1.1 detected in firmware.
Aug 19 00:08:19.809075 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 19 00:08:19.809081 kernel: psci: Trusted OS migration not required
Aug 19 00:08:19.809087 kernel: psci: SMC Calling Convention v1.1
Aug 19 00:08:19.809094 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 19 00:08:19.809100 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 19 00:08:19.809108 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 19 00:08:19.809114 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 19 00:08:19.809121 kernel: Detected PIPT I-cache on CPU0
Aug 19 00:08:19.809127 kernel: CPU features: detected: GIC system register CPU interface
Aug 19 00:08:19.809133 kernel: CPU features: detected: Spectre-v4
Aug 19 00:08:19.809140 kernel: CPU features: detected: Spectre-BHB
Aug 19 00:08:19.809146 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 19 00:08:19.809152 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 19 00:08:19.809159 kernel: CPU features: detected: ARM erratum 1418040
Aug 19 00:08:19.809165 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 19 00:08:19.809171 kernel: alternatives: applying boot alternatives
Aug 19 00:08:19.809179 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:08:19.809187 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 00:08:19.809194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 19 00:08:19.809200 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 19 00:08:19.809206 kernel: Fallback order for Node 0: 0
Aug 19 00:08:19.809213 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 19 00:08:19.809219 kernel: Policy zone: DMA
Aug 19 00:08:19.809225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 00:08:19.809232 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 19 00:08:19.809238 kernel: software IO TLB: area num 4.
Aug 19 00:08:19.809244 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 19 00:08:19.809251 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Aug 19 00:08:19.809259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 19 00:08:19.809265 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 00:08:19.809272 kernel: rcu: RCU event tracing is enabled.
Aug 19 00:08:19.809279 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 19 00:08:19.809285 kernel: Trampoline variant of Tasks RCU enabled.
Aug 19 00:08:19.809292 kernel: Tracing variant of Tasks RCU enabled.
Aug 19 00:08:19.809298 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 00:08:19.809305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 19 00:08:19.809311 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:08:19.809318 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:08:19.809324 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 19 00:08:19.809337 kernel: GICv3: 256 SPIs implemented
Aug 19 00:08:19.809345 kernel: GICv3: 0 Extended SPIs implemented
Aug 19 00:08:19.809351 kernel: Root IRQ handler: gic_handle_irq
Aug 19 00:08:19.809358 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 19 00:08:19.809364 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 19 00:08:19.809370 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 19 00:08:19.809377 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 19 00:08:19.809383 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 19 00:08:19.809390 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 19 00:08:19.809396 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 19 00:08:19.809403 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 19 00:08:19.809409 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 00:08:19.809418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:08:19.809424 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 19 00:08:19.809431 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 19 00:08:19.809437 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 19 00:08:19.809444 kernel: arm-pv: using stolen time PV
Aug 19 00:08:19.809450 kernel: Console: colour dummy device 80x25
Aug 19 00:08:19.809457 kernel: ACPI: Core revision 20240827
Aug 19 00:08:19.809464 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 19 00:08:19.809470 kernel: pid_max: default: 32768 minimum: 301
Aug 19 00:08:19.809477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 00:08:19.809485 kernel: landlock: Up and running.
Aug 19 00:08:19.809491 kernel: SELinux: Initializing.
Aug 19 00:08:19.809498 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:08:19.809505 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:08:19.809511 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 00:08:19.809518 kernel: rcu: Max phase no-delay instances is 400.
Aug 19 00:08:19.809525 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 00:08:19.809532 kernel: Remapping and enabling EFI services.
Aug 19 00:08:19.809538 kernel: smp: Bringing up secondary CPUs ...
Aug 19 00:08:19.809550 kernel: Detected PIPT I-cache on CPU1
Aug 19 00:08:19.809557 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 19 00:08:19.809564 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 19 00:08:19.809572 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:08:19.809579 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 19 00:08:19.809586 kernel: Detected PIPT I-cache on CPU2
Aug 19 00:08:19.809593 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 19 00:08:19.809600 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 19 00:08:19.809608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:08:19.809615 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 19 00:08:19.809622 kernel: Detected PIPT I-cache on CPU3
Aug 19 00:08:19.809629 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 19 00:08:19.809636 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 19 00:08:19.809651 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:08:19.809658 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 19 00:08:19.809665 kernel: smp: Brought up 1 node, 4 CPUs
Aug 19 00:08:19.809672 kernel: SMP: Total of 4 processors activated.
Aug 19 00:08:19.809681 kernel: CPU: All CPU(s) started at EL1
Aug 19 00:08:19.809688 kernel: CPU features: detected: 32-bit EL0 Support
Aug 19 00:08:19.809694 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 19 00:08:19.809701 kernel: CPU features: detected: Common not Private translations
Aug 19 00:08:19.809708 kernel: CPU features: detected: CRC32 instructions
Aug 19 00:08:19.809715 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 19 00:08:19.809722 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 19 00:08:19.809729 kernel: CPU features: detected: LSE atomic instructions
Aug 19 00:08:19.809736 kernel: CPU features: detected: Privileged Access Never
Aug 19 00:08:19.809744 kernel: CPU features: detected: RAS Extension Support
Aug 19 00:08:19.809751 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 19 00:08:19.809758 kernel: alternatives: applying system-wide alternatives
Aug 19 00:08:19.809765 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 19 00:08:19.809773 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Aug 19 00:08:19.809780 kernel: devtmpfs: initialized
Aug 19 00:08:19.809787 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 00:08:19.809794 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 19 00:08:19.809801 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 19 00:08:19.809815 kernel: 0 pages in range for non-PLT usage
Aug 19 00:08:19.809823 kernel: 508576 pages in range for PLT usage
Aug 19 00:08:19.809836 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 00:08:19.809843 kernel: SMBIOS 3.0.0 present.
Aug 19 00:08:19.809850 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 19 00:08:19.809857 kernel: DMI: Memory slots populated: 1/1
Aug 19 00:08:19.809864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 00:08:19.809871 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 19 00:08:19.809878 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 19 00:08:19.809887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 19 00:08:19.809895 kernel: audit: initializing netlink subsys (disabled)
Aug 19 00:08:19.809902 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Aug 19 00:08:19.809909 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 00:08:19.809916 kernel: cpuidle: using governor menu
Aug 19 00:08:19.809923 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 19 00:08:19.809930 kernel: ASID allocator initialised with 32768 entries
Aug 19 00:08:19.809938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 00:08:19.809945 kernel: Serial: AMBA PL011 UART driver
Aug 19 00:08:19.809953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 19 00:08:19.809961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 19 00:08:19.809968 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 19 00:08:19.809975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 19 00:08:19.809982 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 00:08:19.809989 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 00:08:19.809996 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 19 00:08:19.810004 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 19 00:08:19.810010 kernel: ACPI: Added _OSI(Module Device)
Aug 19 00:08:19.810019 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 00:08:19.810026 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 00:08:19.810033 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 00:08:19.810040 kernel: ACPI: Interpreter enabled
Aug 19 00:08:19.810047 kernel: ACPI: Using GIC for interrupt routing
Aug 19 00:08:19.810053 kernel: ACPI: MCFG table detected, 1 entries
Aug 19 00:08:19.810060 kernel: ACPI: CPU0 has been hot-added
Aug 19 00:08:19.810067 kernel: ACPI: CPU1 has been hot-added
Aug 19 00:08:19.810074 kernel: ACPI: CPU2 has been hot-added
Aug 19 00:08:19.810081 kernel: ACPI: CPU3 has been hot-added
Aug 19 00:08:19.810089 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 19 00:08:19.810096 kernel: printk: legacy console [ttyAMA0] enabled
Aug 19 00:08:19.810103 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 19 00:08:19.810241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 19 00:08:19.810305 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 19 00:08:19.810373 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 19 00:08:19.810431 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 19 00:08:19.810501 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 19 00:08:19.810510 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 19 00:08:19.810517 kernel: PCI host bridge to bus 0000:00
Aug 19 00:08:19.810582 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 19 00:08:19.810636 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 19 00:08:19.810688 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 19 00:08:19.810740 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 19 00:08:19.810871 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 19 00:08:19.810950 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 19 00:08:19.811012 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 19 00:08:19.811072 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 19 00:08:19.811129 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 19 00:08:19.811187 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 19 00:08:19.811247 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 19 00:08:19.811310 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 19 00:08:19.811375 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 19 00:08:19.811429 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 19 00:08:19.811482 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 19 00:08:19.811491 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 19 00:08:19.811498 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 19 00:08:19.811505 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 19 00:08:19.811515 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 19 00:08:19.811522 kernel: iommu: Default domain type: Translated
Aug 19 00:08:19.811528 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 19 00:08:19.811535 kernel: efivars: Registered efivars operations
Aug 19 00:08:19.811542 kernel: vgaarb: loaded
Aug 19 00:08:19.811549 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 19 00:08:19.811556 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 00:08:19.811563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 00:08:19.811570 kernel: pnp: PnP ACPI init
Aug 19 00:08:19.811635 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 19 00:08:19.811645 kernel: pnp: PnP ACPI: found 1 devices
Aug 19 00:08:19.811652 kernel: NET: Registered PF_INET protocol family
Aug 19 00:08:19.811659 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 19 00:08:19.811666 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 19 00:08:19.811673 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 00:08:19.811680 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 19 00:08:19.811687 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 19 00:08:19.811695 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 19 00:08:19.811702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:08:19.811710 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:08:19.811717 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 00:08:19.811723 kernel: PCI: CLS 0 bytes, default 64
Aug 19 00:08:19.811731 kernel: kvm [1]: HYP mode not available
Aug 19 00:08:19.811738 kernel: Initialise system trusted keyrings
Aug 19 00:08:19.811745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 19 00:08:19.811751 kernel: Key type asymmetric registered
Aug 19 00:08:19.811759 kernel: Asymmetric key parser 'x509' registered
Aug 19 00:08:19.811767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 19 00:08:19.811774 kernel: io scheduler mq-deadline registered
Aug 19 00:08:19.811780 kernel: io scheduler kyber registered
Aug 19 00:08:19.811787 kernel: io scheduler bfq registered
Aug 19 00:08:19.811794 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 19 00:08:19.811801 kernel: ACPI: button: Power Button [PWRB]
Aug 19 00:08:19.811818 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 19 00:08:19.811889 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 19 00:08:19.811901 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 00:08:19.811908 kernel: thunder_xcv, ver 1.0
Aug 19 00:08:19.811915 kernel: thunder_bgx, ver 1.0
Aug 19 00:08:19.811922 kernel: nicpf, ver 1.0
Aug 19 00:08:19.811929 kernel: nicvf, ver 1.0
Aug 19 00:08:19.812003 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 19 00:08:19.812060 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:08:19 UTC (1755562099)
Aug 19 00:08:19.812070 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 19 00:08:19.812079 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 19 00:08:19.812086 kernel: watchdog: NMI not fully supported
Aug 19 00:08:19.812093 kernel: watchdog: Hard watchdog permanently disabled
Aug 19 00:08:19.812100 kernel: NET: Registered PF_INET6 protocol family
Aug 19 00:08:19.812108 kernel: Segment Routing with IPv6
Aug 19 00:08:19.812115 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 00:08:19.812122 kernel: NET: Registered PF_PACKET protocol family
Aug 19 00:08:19.812128 kernel: Key type dns_resolver registered
Aug 19 00:08:19.812136 kernel: registered taskstats version 1
Aug 19 00:08:19.812143 kernel: Loading compiled-in X.509 certificates
Aug 19 00:08:19.812151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8'
Aug 19 00:08:19.812159 kernel: Demotion targets for Node 0: null
Aug 19 00:08:19.812166 kernel: Key type .fscrypt registered
Aug 19 00:08:19.812173 kernel: Key type fscrypt-provisioning registered
Aug 19 00:08:19.812180 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 00:08:19.812187 kernel: ima: Allocated hash algorithm: sha1
Aug 19 00:08:19.812194 kernel: ima: No architecture policies found
Aug 19 00:08:19.812201 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 19 00:08:19.812210 kernel: clk: Disabling unused clocks
Aug 19 00:08:19.812217 kernel: PM: genpd: Disabling unused power domains
Aug 19 00:08:19.812224 kernel: Warning: unable to open an initial console.
Aug 19 00:08:19.812231 kernel: Freeing unused kernel memory: 38912K
Aug 19 00:08:19.812238 kernel: Run /init as init process
Aug 19 00:08:19.812245 kernel: with arguments:
Aug 19 00:08:19.812253 kernel: /init
Aug 19 00:08:19.812259 kernel: with environment:
Aug 19 00:08:19.812279 kernel: HOME=/
Aug 19 00:08:19.812288 kernel: TERM=linux
Aug 19 00:08:19.812295 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 00:08:19.812304 systemd[1]: Successfully made /usr/ read-only.
Aug 19 00:08:19.812314 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:08:19.812322 systemd[1]: Detected virtualization kvm.
Aug 19 00:08:19.812337 systemd[1]: Detected architecture arm64.
Aug 19 00:08:19.812345 systemd[1]: Running in initrd.
Aug 19 00:08:19.812353 systemd[1]: No hostname configured, using default hostname.
Aug 19 00:08:19.812362 systemd[1]: Hostname set to .
Aug 19 00:08:19.812370 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:08:19.812378 systemd[1]: Queued start job for default target initrd.target.
Aug 19 00:08:19.812386 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:08:19.812393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:08:19.812402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 00:08:19.812410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:08:19.812418 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 00:08:19.812428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 00:08:19.812436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 00:08:19.812444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 00:08:19.812452 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:08:19.812459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:08:19.812467 systemd[1]: Reached target paths.target - Path Units.
Aug 19 00:08:19.812475 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:08:19.812483 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:08:19.812490 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 00:08:19.812498 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 00:08:19.812506 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 00:08:19.812513 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 00:08:19.812521 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 00:08:19.812529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:08:19.812536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:08:19.812545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:08:19.812552 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 00:08:19.812560 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 00:08:19.812567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:08:19.812575 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 00:08:19.812583 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 00:08:19.812591 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 00:08:19.812598 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:08:19.812605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:08:19.812615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:08:19.812622 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 00:08:19.812630 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:08:19.812638 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 00:08:19.812647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:08:19.812655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:08:19.812678 systemd-journald[243]: Collecting audit messages is disabled.
Aug 19 00:08:19.812697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 00:08:19.812707 systemd-journald[243]: Journal started
Aug 19 00:08:19.812725 systemd-journald[243]: Runtime Journal (/run/log/journal/b0272e9c9d44442db9ed143de3b8ae6e) is 6M, max 48.5M, 42.4M free.
Aug 19 00:08:19.802028 systemd-modules-load[244]: Inserted module 'overlay'
Aug 19 00:08:19.814549 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:08:19.821836 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 00:08:19.822005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:08:19.825079 systemd-modules-load[244]: Inserted module 'br_netfilter'
Aug 19 00:08:19.825985 kernel: Bridge firewalling registered
Aug 19 00:08:19.825302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:08:19.836263 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:08:19.837944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:08:19.839239 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:08:19.841541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 19 00:08:19.849475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:08:19.851874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:08:19.854741 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 00:08:19.857556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:08:19.862763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:08:19.865295 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:08:19.867276 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:08:19.902035 systemd-resolved[301]: Positive Trust Anchors:
Aug 19 00:08:19.902056 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:08:19.902090 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:08:19.907013 systemd-resolved[301]: Defaulting to hostname 'linux'.
Aug 19 00:08:19.907945 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:08:19.911733 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:08:19.942851 kernel: SCSI subsystem initialized
Aug 19 00:08:19.947832 kernel: Loading iSCSI transport class v2.0-870.
Aug 19 00:08:19.955850 kernel: iscsi: registered transport (tcp)
Aug 19 00:08:19.968079 kernel: iscsi: registered transport (qla4xxx)
Aug 19 00:08:19.968112 kernel: QLogic iSCSI HBA Driver
Aug 19 00:08:19.985966 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:08:20.010545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:08:20.012153 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:08:20.063639 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:08:20.066149 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 00:08:20.123863 kernel: raid6: neonx8 gen() 15673 MB/s
Aug 19 00:08:20.140841 kernel: raid6: neonx4 gen() 15802 MB/s
Aug 19 00:08:20.157837 kernel: raid6: neonx2 gen() 13176 MB/s
Aug 19 00:08:20.174843 kernel: raid6: neonx1 gen() 10422 MB/s
Aug 19 00:08:20.191835 kernel: raid6: int64x8 gen() 6889 MB/s
Aug 19 00:08:20.208842 kernel: raid6: int64x4 gen() 7309 MB/s
Aug 19 00:08:20.225837 kernel: raid6: int64x2 gen() 6089 MB/s
Aug 19 00:08:20.242954 kernel: raid6: int64x1 gen() 5041 MB/s
Aug 19 00:08:20.242974 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s
Aug 19 00:08:20.260949 kernel: raid6: .... xor() 12313 MB/s, rmw enabled
Aug 19 00:08:20.260968 kernel: raid6: using neon recovery algorithm
Aug 19 00:08:20.267260 kernel: xor: measuring software checksum speed
Aug 19 00:08:20.267296 kernel: 8regs : 21579 MB/sec
Aug 19 00:08:20.267306 kernel: 32regs : 21653 MB/sec
Aug 19 00:08:20.268002 kernel: arm64_neon : 26501 MB/sec
Aug 19 00:08:20.268016 kernel: xor: using function: arm64_neon (26501 MB/sec)
Aug 19 00:08:20.324850 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 19 00:08:20.331358 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 00:08:20.335930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:08:20.363031 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Aug 19 00:08:20.367164 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:08:20.369728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 19 00:08:20.397087 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Aug 19 00:08:20.423860 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 19 00:08:20.426470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 00:08:20.486470 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:08:20.490946 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 19 00:08:20.541834 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 19 00:08:20.543227 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 19 00:08:20.545603 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 19 00:08:20.546118 kernel: GPT:9289727 != 19775487
Aug 19 00:08:20.547082 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 19 00:08:20.548207 kernel: GPT:9289727 != 19775487
Aug 19 00:08:20.548237 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 19 00:08:20.548856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:08:20.553949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 00:08:20.554069 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:08:20.563058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:08:20.565928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:08:20.594858 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 19 00:08:20.596376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:08:20.605203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 19 00:08:20.607838 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 19 00:08:20.619138 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 19 00:08:20.620421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 19 00:08:20.629737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 00:08:20.631104 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 19 00:08:20.633217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:08:20.635331 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 00:08:20.638076 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 19 00:08:20.639830 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 19 00:08:20.658654 disk-uuid[592]: Primary Header is updated.
Aug 19 00:08:20.658654 disk-uuid[592]: Secondary Entries is updated.
Aug 19 00:08:20.658654 disk-uuid[592]: Secondary Header is updated.
Aug 19 00:08:20.664882 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 19 00:08:20.667962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:08:21.683337 disk-uuid[597]: The operation has completed successfully.
Aug 19 00:08:21.684708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:08:21.718040 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 19 00:08:21.718172 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 19 00:08:21.742758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 19 00:08:21.760608 sh[613]: Success
Aug 19 00:08:21.779732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 19 00:08:21.779797 kernel: device-mapper: uevent: version 1.0.3
Aug 19 00:08:21.781472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 19 00:08:21.792925 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Aug 19 00:08:21.819795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 19 00:08:21.822662 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 19 00:08:21.836158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 19 00:08:21.840867 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (626)
Aug 19 00:08:21.843203 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625
Aug 19 00:08:21.843233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:08:21.843244 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 19 00:08:21.847698 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 19 00:08:21.849184 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 19 00:08:21.850570 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 19 00:08:21.851394 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 19 00:08:21.853085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 19 00:08:21.876850 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (658)
Aug 19 00:08:21.879290 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:08:21.879347 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:08:21.879358 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:08:21.887831 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:08:21.888642 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 19 00:08:21.891629 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 19 00:08:21.973405 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 19 00:08:21.976644 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 00:08:22.015214 systemd-networkd[800]: lo: Link UP
Aug 19 00:08:22.015223 systemd-networkd[800]: lo: Gained carrier
Aug 19 00:08:22.016039 systemd-networkd[800]: Enumeration completed
Aug 19 00:08:22.016167 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 00:08:22.017396 systemd[1]: Reached target network.target - Network.
Aug 19 00:08:22.027572 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:08:22.027577 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 00:08:22.028573 systemd-networkd[800]: eth0: Link UP
Aug 19 00:08:22.028954 systemd-networkd[800]: eth0: Gained carrier
Aug 19 00:08:22.028966 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:08:22.059901 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 19 00:08:22.117000 ignition[706]: Ignition 2.21.0
Aug 19 00:08:22.117016 ignition[706]: Stage: fetch-offline
Aug 19 00:08:22.117059 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:08:22.117069 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:08:22.117282 ignition[706]: parsed url from cmdline: ""
Aug 19 00:08:22.117286 ignition[706]: no config URL provided
Aug 19 00:08:22.117291 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Aug 19 00:08:22.117298 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Aug 19 00:08:22.117329 ignition[706]: op(1): [started] loading QEMU firmware config module
Aug 19 00:08:22.117335 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 19 00:08:22.131779 ignition[706]: op(1): [finished] loading QEMU firmware config module
Aug 19 00:08:22.171406 ignition[706]: parsing config with SHA512: 244d4fc923fe619fbde13c29d839a6c822b17512137e17ae14c9fde081b9f2cf803a0a999f750b65a4a3cd7be0cb99ed79cc650b383789c7dcc37021f6170702
Aug 19 00:08:22.175493 unknown[706]: fetched base config from "system"
Aug 19 00:08:22.175504 unknown[706]: fetched user config from "qemu"
Aug 19 00:08:22.175859 ignition[706]: fetch-offline: fetch-offline passed
Aug 19 00:08:22.175912 ignition[706]: Ignition finished successfully
Aug 19 00:08:22.180219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 19 00:08:22.182512 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 19 00:08:22.183469 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 19 00:08:22.209622 ignition[814]: Ignition 2.21.0
Aug 19 00:08:22.209639 ignition[814]: Stage: kargs
Aug 19 00:08:22.209799 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:08:22.209808 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:08:22.212025 ignition[814]: kargs: kargs passed
Aug 19 00:08:22.214885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 19 00:08:22.212104 ignition[814]: Ignition finished successfully
Aug 19 00:08:22.219181 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 19 00:08:22.255589 ignition[822]: Ignition 2.21.0
Aug 19 00:08:22.255611 ignition[822]: Stage: disks
Aug 19 00:08:22.255760 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:08:22.255770 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:08:22.258764 ignition[822]: disks: disks passed
Aug 19 00:08:22.258849 ignition[822]: Ignition finished successfully
Aug 19 00:08:22.261253 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 19 00:08:22.262409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 19 00:08:22.264030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 19 00:08:22.266436 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 00:08:22.267632 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 19 00:08:22.269645 systemd[1]: Reached target basic.target - Basic System.
Aug 19 00:08:22.273059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 19 00:08:22.314209 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 19 00:08:22.319600 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 19 00:08:22.322405 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 19 00:08:22.389832 kernel: EXT4-fs (vda9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none.
Aug 19 00:08:22.390371 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 19 00:08:22.391628 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 19 00:08:22.394448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 19 00:08:22.396251 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 19 00:08:22.397250 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 19 00:08:22.397301 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 19 00:08:22.397338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 19 00:08:22.408984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 19 00:08:22.412446 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 19 00:08:22.418238 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (841)
Aug 19 00:08:22.418262 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:08:22.418272 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:08:22.418281 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:08:22.419787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 19 00:08:22.468200 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Aug 19 00:08:22.475556 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Aug 19 00:08:22.479136 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Aug 19 00:08:22.482908 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 19 00:08:22.569582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 19 00:08:22.571808 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 19 00:08:22.573576 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 19 00:08:22.588848 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:08:22.612196 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 19 00:08:22.626446 ignition[954]: INFO : Ignition 2.21.0
Aug 19 00:08:22.626446 ignition[954]: INFO : Stage: mount
Aug 19 00:08:22.628198 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 19 00:08:22.628198 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:08:22.628198 ignition[954]: INFO : mount: mount passed
Aug 19 00:08:22.628198 ignition[954]: INFO : Ignition finished successfully
Aug 19 00:08:22.629418 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 19 00:08:22.634969 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 19 00:08:22.840540 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 19 00:08:22.842168 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 19 00:08:22.867087 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (967)
Aug 19 00:08:22.867140 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:08:22.867158 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:08:22.868277 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:08:22.872027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 19 00:08:22.904002 ignition[984]: INFO : Ignition 2.21.0
Aug 19 00:08:22.904002 ignition[984]: INFO : Stage: files
Aug 19 00:08:22.906507 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 19 00:08:22.906507 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:08:22.908763 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Aug 19 00:08:22.912724 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 19 00:08:22.912724 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 19 00:08:22.916410 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 19 00:08:22.918215 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 19 00:08:22.918215 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 19 00:08:22.917000 unknown[984]: wrote ssh authorized keys file for user: core
Aug 19 00:08:22.923125 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 19 00:08:22.923125 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 19 00:08:22.973461 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 19 00:08:23.294978 systemd-networkd[800]: eth0: Gained IPv6LL
Aug 19 00:08:23.369807 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 19 00:08:23.371740 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 19 00:08:23.386034 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 19 00:08:23.718035 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 19 00:08:24.178672 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 19 00:08:24.178672 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 19 00:08:24.182818 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 19 00:08:24.184947 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 19 00:08:24.184947 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 19 00:08:24.184947 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 19 00:08:24.189976 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 19 00:08:24.189976 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 19 00:08:24.189976 ignition[984]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 19 00:08:24.189976 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 19 00:08:24.210891 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 19 00:08:24.214434 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 19 00:08:24.216054 ignition[984]: INFO : files: files passed
Aug 19 00:08:24.216054 ignition[984]: INFO : Ignition finished successfully
Aug 19 00:08:24.218095 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 19 00:08:24.221237 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 19 00:08:24.223979 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 19 00:08:24.240766 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 19 00:08:24.241614 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 19 00:08:24.246066 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 19 00:08:24.248001 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:08:24.248001 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:08:24.254257 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:08:24.248452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 19 00:08:24.251385 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 19 00:08:24.253325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 19 00:08:24.302442 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 19 00:08:24.302551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 19 00:08:24.304841 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 19 00:08:24.306719 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 19 00:08:24.308558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 19 00:08:24.309448 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 19 00:08:24.344319 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 19 00:08:24.346967 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 19 00:08:24.374113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:08:24.375430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:08:24.376669 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 00:08:24.378473 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 00:08:24.378605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:08:24.381297 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 00:08:24.383143 systemd[1]: Stopped target basic.target - Basic System. Aug 19 00:08:24.384985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 00:08:24.386881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:08:24.388697 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 00:08:24.390661 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:08:24.392654 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 00:08:24.394533 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:08:24.396651 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 00:08:24.398493 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 00:08:24.400464 systemd[1]: Stopped target swap.target - Swaps. Aug 19 00:08:24.402052 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 00:08:24.402188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:08:24.404614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:08:24.405783 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:08:24.407730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 00:08:24.407840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:08:24.409777 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 00:08:24.409926 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 00:08:24.412746 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 00:08:24.412893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:08:24.414917 systemd[1]: Stopped target paths.target - Path Units. Aug 19 00:08:24.416797 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 00:08:24.417890 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:08:24.419914 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 00:08:24.421646 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 00:08:24.423473 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 00:08:24.423567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:08:24.425088 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 00:08:24.425168 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:08:24.426894 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 00:08:24.427019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:08:24.429303 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 00:08:24.429420 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Aug 19 00:08:24.431792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 00:08:24.433326 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 00:08:24.433459 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:08:24.436132 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 00:08:24.437685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 00:08:24.437835 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:08:24.439874 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 00:08:24.439982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:08:24.445558 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 00:08:24.446023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 00:08:24.454926 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 00:08:24.460581 ignition[1039]: INFO : Ignition 2.21.0 Aug 19 00:08:24.460581 ignition[1039]: INFO : Stage: umount Aug 19 00:08:24.463126 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:08:24.463126 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:08:24.463126 ignition[1039]: INFO : umount: umount passed Aug 19 00:08:24.463126 ignition[1039]: INFO : Ignition finished successfully Aug 19 00:08:24.464173 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 00:08:24.464300 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 00:08:24.466389 systemd[1]: Stopped target network.target - Network. Aug 19 00:08:24.467722 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 00:08:24.467796 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 00:08:24.469482 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 00:08:24.469533 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 00:08:24.471288 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 00:08:24.471356 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 00:08:24.473163 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 00:08:24.473207 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 00:08:24.475321 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 00:08:24.476996 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 00:08:24.483691 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 00:08:24.483807 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 00:08:24.487245 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 00:08:24.487465 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 00:08:24.487551 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 00:08:24.490964 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 00:08:24.491656 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 00:08:24.493588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 00:08:24.493630 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Aug 19 00:08:24.496406 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 00:08:24.497296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 00:08:24.497368 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:08:24.501561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:08:24.501618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:08:24.504658 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 00:08:24.504715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 00:08:24.506554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 00:08:24.506604 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:08:24.513564 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:08:24.517704 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 00:08:24.517770 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:08:24.518657 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 00:08:24.518747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 00:08:24.520642 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 00:08:24.520703 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 00:08:24.529198 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 00:08:24.529315 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 00:08:24.531405 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 00:08:24.531549 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:08:24.534198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 00:08:24.534262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 00:08:24.535657 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 00:08:24.535691 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:08:24.537591 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 00:08:24.537646 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:08:24.540284 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 00:08:24.540343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 00:08:24.543425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 00:08:24.543481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 00:08:24.547539 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 00:08:24.548640 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 00:08:24.548716 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:08:24.551653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 00:08:24.551696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:08:24.554652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 19 00:08:24.554698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:08:24.559356 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 19 00:08:24.559414 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 00:08:24.559447 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:08:24.566066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 00:08:24.566158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 00:08:24.569045 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 00:08:24.571095 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 00:08:24.587997 systemd[1]: Switching root. Aug 19 00:08:24.628350 systemd-journald[243]: Journal stopped Aug 19 00:08:25.421036 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Aug 19 00:08:25.421093 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 00:08:25.421104 kernel: SELinux: policy capability open_perms=1 Aug 19 00:08:25.421114 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 00:08:25.421126 kernel: SELinux: policy capability always_check_network=0 Aug 19 00:08:25.421135 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 00:08:25.421147 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 00:08:25.421160 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 00:08:25.421169 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 00:08:25.421177 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 00:08:25.421186 kernel: audit: type=1403 audit(1755562104.805:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 00:08:25.421199 systemd[1]: Successfully loaded SELinux policy in 57.555ms. Aug 19 00:08:25.421217 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.851ms. Aug 19 00:08:25.421228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 00:08:25.421238 systemd[1]: Detected virtualization kvm. Aug 19 00:08:25.421249 systemd[1]: Detected architecture arm64. Aug 19 00:08:25.421258 systemd[1]: Detected first boot. Aug 19 00:08:25.421268 systemd[1]: Initializing machine ID from VM UUID. Aug 19 00:08:25.421278 zram_generator::config[1086]: No configuration found. Aug 19 00:08:25.421288 kernel: NET: Registered PF_VSOCK protocol family Aug 19 00:08:25.421297 systemd[1]: Populated /etc with preset unit settings. Aug 19 00:08:25.421319 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 00:08:25.421331 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 00:08:25.421397 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 00:08:25.421411 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 00:08:25.421421 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 00:08:25.421431 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
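The journal above records the SELinux policy loading in about 58 ms before systemd continues in the real root. A small sketch for checking the resulting mode at runtime, assuming selinuxfs is mounted at its conventional location /sys/fs/selinux (the log only shows that a policy loaded):

    #!/usr/bin/env python3
    """Report the SELinux mode via the kernel's selinuxfs interface.

    Assumes selinuxfs is mounted at /sys/fs/selinux, the conventional
    mount point; the log only shows that a policy was loaded.
    """
    from pathlib import Path

    ENFORCE = Path("/sys/fs/selinux/enforce")

    def main():
        if not ENFORCE.exists():
            print("selinuxfs not mounted; SELinux unavailable or disabled")
            return
        mode = ENFORCE.read_text().strip()
        print("SELinux is", "enforcing" if mode == "1" else "permissive")

    if __name__ == "__main__":
        main()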
Aug 19 00:08:25.421444 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 00:08:25.421454 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 00:08:25.421464 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 00:08:25.421479 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 00:08:25.421490 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 00:08:25.421501 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 00:08:25.421511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:08:25.421522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:08:25.421532 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 00:08:25.421541 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 00:08:25.421551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 00:08:25.421561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 00:08:25.421571 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 19 00:08:25.421583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:08:25.421595 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:08:25.421605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 00:08:25.421615 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 00:08:25.421624 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 00:08:25.421678 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 00:08:25.421692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:08:25.421702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:08:25.421714 systemd[1]: Reached target slices.target - Slice Units. Aug 19 00:08:25.421725 systemd[1]: Reached target swap.target - Swaps. Aug 19 00:08:25.421735 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 00:08:25.421745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 00:08:25.421759 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 00:08:25.421770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:08:25.421780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 00:08:25.421790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:08:25.421799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 00:08:25.421821 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 00:08:25.421906 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 00:08:25.421922 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 00:08:25.421931 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Aug 19 00:08:25.421941 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 00:08:25.421952 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 00:08:25.421962 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 00:08:25.421972 systemd[1]: Reached target machines.target - Containers. Aug 19 00:08:25.421982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 00:08:25.421995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:08:25.422005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 00:08:25.422015 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 00:08:25.422025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:08:25.422035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:08:25.422045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:08:25.422055 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 00:08:25.422065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:08:25.422075 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 00:08:25.422087 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 00:08:25.422097 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 00:08:25.422107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 00:08:25.422116 kernel: fuse: init (API version 7.41) Aug 19 00:08:25.422126 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 00:08:25.422138 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:08:25.422148 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 00:08:25.422158 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 00:08:25.422170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 00:08:25.422179 kernel: ACPI: bus type drm_connector registered Aug 19 00:08:25.422188 kernel: loop: module loaded Aug 19 00:08:25.422198 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 00:08:25.422208 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 00:08:25.422219 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:08:25.422230 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 00:08:25.422241 systemd[1]: Stopped verity-setup.service. Aug 19 00:08:25.422251 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 00:08:25.422290 systemd-journald[1161]: Collecting audit messages is disabled. Aug 19 00:08:25.422320 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Aug 19 00:08:25.422333 systemd-journald[1161]: Journal started Aug 19 00:08:25.422356 systemd-journald[1161]: Runtime Journal (/run/log/journal/b0272e9c9d44442db9ed143de3b8ae6e) is 6M, max 48.5M, 42.4M free. Aug 19 00:08:25.426876 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 00:08:25.426912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 00:08:25.426925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 00:08:25.181582 systemd[1]: Queued start job for default target multi-user.target. Aug 19 00:08:25.203929 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 00:08:25.204347 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 00:08:25.430651 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 00:08:25.431377 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 00:08:25.433854 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 00:08:25.435335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:08:25.436900 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 00:08:25.437075 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 00:08:25.438544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:08:25.438712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:08:25.441227 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:08:25.441469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 00:08:25.442855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:08:25.444855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:08:25.446481 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 00:08:25.446667 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 00:08:25.448178 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:08:25.448373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:08:25.449872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 00:08:25.452872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:08:25.454488 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 00:08:25.456116 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 00:08:25.469476 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 00:08:25.472030 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 00:08:25.474317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 00:08:25.475627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 00:08:25.475670 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:08:25.477748 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 00:08:25.489721 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 19 00:08:25.491028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:08:25.492253 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 00:08:25.494474 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 00:08:25.495732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:08:25.497390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 00:08:25.498588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:08:25.499670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:08:25.506435 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 00:08:25.508755 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 00:08:25.511663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:08:25.513180 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 00:08:25.513571 systemd-journald[1161]: Time spent on flushing to /var/log/journal/b0272e9c9d44442db9ed143de3b8ae6e is 14.570ms for 886 entries. Aug 19 00:08:25.513571 systemd-journald[1161]: System Journal (/var/log/journal/b0272e9c9d44442db9ed143de3b8ae6e) is 8M, max 195.6M, 187.6M free. Aug 19 00:08:25.567905 systemd-journald[1161]: Received client request to flush runtime journal. Aug 19 00:08:25.567961 kernel: loop0: detected capacity change from 0 to 100608 Aug 19 00:08:25.515429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 00:08:25.517183 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 00:08:25.523358 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 00:08:25.529002 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 00:08:25.544869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:08:25.570573 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 00:08:25.573880 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 00:08:25.575052 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 00:08:25.575890 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 00:08:25.580513 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 00:08:25.585799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 00:08:25.605093 kernel: loop1: detected capacity change from 0 to 203944 Aug 19 00:08:25.616492 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 19 00:08:25.616509 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 19 00:08:25.620579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
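journald reports the runtime journal at 6M of a 48.5M cap and, after flushing to /var/log/journal, a persistent system journal at 8M of a 195.6M cap. A sketch that asks journald for its current on-disk usage with journalctl's standard --disk-usage flag:

    #!/usr/bin/env python3
    """Ask journald for its current on-disk journal usage.

    `journalctl --disk-usage` reports the combined size of active and
    archived journal files (runtime and persistent).
    """
    import subprocess

    def main():
        out = subprocess.run(["journalctl", "--disk-usage"],
                             capture_output=True, text=True, check=True)
        print(out.stdout.strip())

    if __name__ == "__main__":
        main()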
Aug 19 00:08:25.630842 kernel: loop2: detected capacity change from 0 to 119320 Aug 19 00:08:25.655852 kernel: loop3: detected capacity change from 0 to 100608 Aug 19 00:08:25.661837 kernel: loop4: detected capacity change from 0 to 203944 Aug 19 00:08:25.669854 kernel: loop5: detected capacity change from 0 to 119320 Aug 19 00:08:25.674149 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 19 00:08:25.674550 (sd-merge)[1226]: Merged extensions into '/usr'. Aug 19 00:08:25.678460 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 00:08:25.678480 systemd[1]: Reloading... Aug 19 00:08:25.729114 zram_generator::config[1248]: No configuration found. Aug 19 00:08:25.838343 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 00:08:25.895391 systemd[1]: Reloading finished in 216 ms. Aug 19 00:08:25.926747 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 00:08:25.929856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 00:08:25.952230 systemd[1]: Starting ensure-sysext.service... Aug 19 00:08:25.958780 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 00:08:25.969491 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Aug 19 00:08:25.969512 systemd[1]: Reloading... Aug 19 00:08:25.987145 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 00:08:25.987495 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 00:08:25.987792 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 00:08:25.988009 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 00:08:25.988655 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 00:08:25.988893 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Aug 19 00:08:25.988944 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Aug 19 00:08:25.991909 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:08:25.991922 systemd-tmpfiles[1287]: Skipping /boot Aug 19 00:08:25.997724 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:08:25.997742 systemd-tmpfiles[1287]: Skipping /boot Aug 19 00:08:26.030963 zram_generator::config[1317]: No configuration found. Aug 19 00:08:26.173643 systemd[1]: Reloading finished in 203 ms. Aug 19 00:08:26.184756 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 00:08:26.191031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:08:26.201924 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:08:26.204672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 00:08:26.207224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 00:08:26.210190 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 19 00:08:26.212995 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:08:26.219867 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 00:08:26.239325 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 00:08:26.242582 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 00:08:26.248458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:08:26.255220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:08:26.258223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:08:26.262976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:08:26.264576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:08:26.264762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:08:26.267848 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Aug 19 00:08:26.274959 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 00:08:26.278247 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 00:08:26.281083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:08:26.281237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:08:26.284159 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:08:26.284431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:08:26.288593 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 00:08:26.290881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:08:26.291093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:08:26.295656 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 00:08:26.300285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:08:26.301754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:08:26.306126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:08:26.311723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:08:26.313387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:08:26.313530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:08:26.313626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 00:08:26.314489 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Aug 19 00:08:26.317090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:08:26.320958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:08:26.321345 augenrules[1408]: No rules Aug 19 00:08:26.321897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:08:26.326572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:08:26.326769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:08:26.330081 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:08:26.330329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:08:26.345672 systemd[1]: Finished ensure-sysext.service. Aug 19 00:08:26.354213 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:08:26.354404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:08:26.361175 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:08:26.362476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:08:26.365748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:08:26.370316 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:08:26.373183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:08:26.376019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:08:26.376081 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:08:26.378926 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:08:26.383142 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 19 00:08:26.385550 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 00:08:26.386035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:08:26.386234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:08:26.388369 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:08:26.388554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 00:08:26.392567 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 19 00:08:26.402429 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:08:26.422324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:08:26.423273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:08:26.425746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:08:26.434004 augenrules[1432]: /sbin/augenrules: No change Aug 19 00:08:26.443415 augenrules[1465]: No rules Aug 19 00:08:26.446040 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 19 00:08:26.446341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:08:26.480707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 00:08:26.482521 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 00:08:26.484312 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 00:08:26.487988 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 00:08:26.507381 systemd-resolved[1353]: Positive Trust Anchors: Aug 19 00:08:26.512158 systemd-networkd[1441]: lo: Link UP Aug 19 00:08:26.512163 systemd-networkd[1441]: lo: Gained carrier Aug 19 00:08:26.513135 systemd-networkd[1441]: Enumeration completed Aug 19 00:08:26.513239 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:08:26.513578 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:08:26.513588 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:08:26.514415 systemd-networkd[1441]: eth0: Link UP Aug 19 00:08:26.514537 systemd-networkd[1441]: eth0: Gained carrier Aug 19 00:08:26.514558 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:08:26.515866 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:08:26.516011 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:08:26.517332 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 00:08:26.519914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 00:08:26.526624 systemd-resolved[1353]: Defaulting to hostname 'linux'. Aug 19 00:08:26.532844 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:08:26.534786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 00:08:26.535873 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 00:08:26.536487 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Aug 19 00:08:26.538089 systemd-timesyncd[1443]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 00:08:26.538149 systemd-timesyncd[1443]: Initial clock synchronization to Tue 2025-08-19 00:08:26.346260 UTC. Aug 19 00:08:26.538583 systemd[1]: Reached target network.target - Network. Aug 19 00:08:26.539653 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:08:26.541239 systemd[1]: Reached target sysinit.target - System Initialization. 
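Above, systemd-networkd acquires 10.0.0.31/16 via DHCPv4 on eth0 and systemd-timesyncd synchronizes against 10.0.0.1:123. A sketch for re-checking both on the booted system with the stock networkctl and timedatectl tools; the interface name eth0 is taken from the log:

    #!/usr/bin/env python3
    """Re-check link state and NTP sync after boot.

    networkctl and timedatectl are standard systemd tools; the interface
    name "eth0" comes from the log above.
    """
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def main():
        print(run(["networkctl", "status", "eth0"]))
        # NTPSynchronized flips to "yes" once timesyncd has a valid sample.
        for line in run(["timedatectl", "show"]).splitlines():
            if line.startswith("NTPSynchronized="):
                print(line)

    if __name__ == "__main__":
        main()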
Aug 19 00:08:26.543065 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 00:08:26.544507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 00:08:26.547156 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 00:08:26.548535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 00:08:26.550038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 00:08:26.551346 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 00:08:26.551395 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:08:26.552365 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:08:26.554601 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 00:08:26.558051 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 00:08:26.562046 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 00:08:26.563761 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 00:08:26.565672 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 00:08:26.571869 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 00:08:26.573944 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 00:08:26.579413 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 00:08:26.581416 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 00:08:26.590697 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:08:26.591765 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:08:26.592753 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:08:26.592786 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:08:26.593978 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 00:08:26.596226 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 00:08:26.598287 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 00:08:26.611806 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 00:08:26.614075 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 00:08:26.615173 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 00:08:26.616345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 00:08:26.620854 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 00:08:26.622598 jq[1498]: false Aug 19 00:08:26.625497 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 00:08:26.629734 extend-filesystems[1499]: Found /dev/vda6 Aug 19 00:08:26.632590 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Aug 19 00:08:26.641436 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 00:08:26.645056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:08:26.645543 extend-filesystems[1499]: Found /dev/vda9 Aug 19 00:08:26.647184 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 00:08:26.647721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 00:08:26.649009 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 00:08:26.650496 extend-filesystems[1499]: Checking size of /dev/vda9 Aug 19 00:08:26.652932 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 00:08:26.657289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 00:08:26.659085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 00:08:26.659336 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 00:08:26.659599 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 00:08:26.659760 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 00:08:26.677491 extend-filesystems[1499]: Resized partition /dev/vda9 Aug 19 00:08:26.678431 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 00:08:26.678891 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 00:08:26.681483 jq[1521]: true Aug 19 00:08:26.686638 extend-filesystems[1529]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 00:08:26.689347 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 00:08:26.700854 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 00:08:26.715120 jq[1531]: true Aug 19 00:08:26.735858 tar[1524]: linux-arm64/helm Aug 19 00:08:26.754020 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Aug 19 00:08:26.754747 systemd-logind[1514]: New seat seat0. Aug 19 00:08:26.755897 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 00:08:26.762015 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 00:08:26.784430 update_engine[1518]: I20250819 00:08:26.776527 1518 main.cc:92] Flatcar Update Engine starting Aug 19 00:08:26.782965 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 00:08:26.782701 dbus-daemon[1496]: [system] SELinux support is enabled Aug 19 00:08:26.785670 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 00:08:26.785670 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 00:08:26.785670 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 00:08:26.795308 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Aug 19 00:08:26.786575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 00:08:26.786603 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
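The resize above grows /dev/vda9 online from 553,472 to 1,864,699 blocks; at the 4 KiB block size reported by ext4 that is roughly 2.1 GiB growing to about 7.1 GiB. A sketch that redoes the arithmetic and compares it with what statvfs reports for the mounted root (treating "/" as the resized filesystem is an assumption):

    #!/usr/bin/env python3
    """Sanity-check the root filesystem size against the resize2fs output.

    Block counts (553472 -> 1864699) and the 4 KiB block size come from the
    kernel messages above; treating "/" as the resized filesystem is an
    assumption.
    """
    import os

    BLOCK_SIZE = 4096
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 1_864_699

    def gib(blocks):
        return blocks * BLOCK_SIZE / 2**30

    def main():
        print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
        print(f"after resize:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB
        st = os.statvfs("/")
        print(f"statvfs('/'):  {st.f_blocks * st.f_frsize / 2**30:.2f} GiB")

    if __name__ == "__main__":
        main()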
Aug 19 00:08:26.793757 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 00:08:26.793776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 00:08:26.796422 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 00:08:26.796648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 00:08:26.804434 update_engine[1518]: I20250819 00:08:26.804261 1518 update_check_scheduler.cc:74] Next update check in 5m12s Aug 19 00:08:26.808475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:08:26.811660 systemd[1]: Started update-engine.service - Update Engine. Aug 19 00:08:26.819243 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 00:08:26.825349 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:08:26.829182 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 00:08:26.831365 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 19 00:08:26.892124 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 00:08:26.985967 containerd[1530]: time="2025-08-19T00:08:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 00:08:26.986643 containerd[1530]: time="2025-08-19T00:08:26.986600600Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 00:08:26.996982 containerd[1530]: time="2025-08-19T00:08:26.996938920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.56µs" Aug 19 00:08:26.996982 containerd[1530]: time="2025-08-19T00:08:26.996980960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 00:08:26.997065 containerd[1530]: time="2025-08-19T00:08:26.997001120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 00:08:26.997186 containerd[1530]: time="2025-08-19T00:08:26.997166280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 00:08:26.997221 containerd[1530]: time="2025-08-19T00:08:26.997187280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 00:08:26.997221 containerd[1530]: time="2025-08-19T00:08:26.997213880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997261880Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997275800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997525040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the 
btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997542560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997553720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997561200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997838 containerd[1530]: time="2025-08-19T00:08:26.997628920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997969 containerd[1530]: time="2025-08-19T00:08:26.997868000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997969 containerd[1530]: time="2025-08-19T00:08:26.997900760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:08:26.997969 containerd[1530]: time="2025-08-19T00:08:26.997914520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 00:08:26.997969 containerd[1530]: time="2025-08-19T00:08:26.997947560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 00:08:26.998566 containerd[1530]: time="2025-08-19T00:08:26.998537120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 00:08:26.998677 containerd[1530]: time="2025-08-19T00:08:26.998653880Z" level=info msg="metadata content store policy set" policy=shared Aug 19 00:08:27.009012 containerd[1530]: time="2025-08-19T00:08:27.008963386Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 00:08:27.009064 containerd[1530]: time="2025-08-19T00:08:27.009052010Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 00:08:27.009084 containerd[1530]: time="2025-08-19T00:08:27.009069734Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 00:08:27.009107 containerd[1530]: time="2025-08-19T00:08:27.009082618Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 00:08:27.009107 containerd[1530]: time="2025-08-19T00:08:27.009103076Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 00:08:27.009147 containerd[1530]: time="2025-08-19T00:08:27.009116350Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 00:08:27.009147 containerd[1530]: time="2025-08-19T00:08:27.009137510Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 00:08:27.009182 containerd[1530]: time="2025-08-19T00:08:27.009149886Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 00:08:27.009182 containerd[1530]: 
time="2025-08-19T00:08:27.009162457Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 00:08:27.009182 containerd[1530]: time="2025-08-19T00:08:27.009173350Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 00:08:27.009226 containerd[1530]: time="2025-08-19T00:08:27.009183813Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 00:08:27.009226 containerd[1530]: time="2025-08-19T00:08:27.009197868Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 00:08:27.009364 containerd[1530]: time="2025-08-19T00:08:27.009344194Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 00:08:27.009393 containerd[1530]: time="2025-08-19T00:08:27.009372655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 00:08:27.009393 containerd[1530]: time="2025-08-19T00:08:27.009390419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 00:08:27.009425 containerd[1530]: time="2025-08-19T00:08:27.009400882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 00:08:27.009425 containerd[1530]: time="2025-08-19T00:08:27.009411541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 00:08:27.009425 containerd[1530]: time="2025-08-19T00:08:27.009421145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 00:08:27.009474 containerd[1530]: time="2025-08-19T00:08:27.009434106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 00:08:27.009474 containerd[1530]: time="2025-08-19T00:08:27.009444999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 00:08:27.009474 containerd[1530]: time="2025-08-19T00:08:27.009467135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 00:08:27.009524 containerd[1530]: time="2025-08-19T00:08:27.009482596Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 00:08:27.009524 containerd[1530]: time="2025-08-19T00:08:27.009499227Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 00:08:27.009828 containerd[1530]: time="2025-08-19T00:08:27.009683424Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 00:08:27.009828 containerd[1530]: time="2025-08-19T00:08:27.009707903Z" level=info msg="Start snapshots syncer" Aug 19 00:08:27.009828 containerd[1530]: time="2025-08-19T00:08:27.009732811Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 00:08:27.010007 containerd[1530]: time="2025-08-19T00:08:27.009969089Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 00:08:27.010123 containerd[1530]: time="2025-08-19T00:08:27.010030383Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 00:08:27.010123 containerd[1530]: time="2025-08-19T00:08:27.010105460Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010217001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010247336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010259516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010270799Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010283019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010293873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010303945Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010328737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: 
time="2025-08-19T00:08:27.010340215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010351927Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010389836Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010405804Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:08:27.010730 containerd[1530]: time="2025-08-19T00:08:27.010414510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010423177Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010430634Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010440277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010450233Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010532883Z" level=info msg="runtime interface created" Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010540379Z" level=info msg="created NRI interface" Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010550686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010563179Z" level=info msg="Connect containerd service" Aug 19 00:08:27.011180 containerd[1530]: time="2025-08-19T00:08:27.010601323Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 00:08:27.011387 containerd[1530]: time="2025-08-19T00:08:27.011347986Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:08:27.029344 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 00:08:27.046751 tar[1524]: linux-arm64/LICENSE Aug 19 00:08:27.046868 tar[1524]: linux-arm64/README.md Aug 19 00:08:27.052945 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 00:08:27.060070 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 00:08:27.061514 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 00:08:27.085832 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 00:08:27.086081 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 00:08:27.090150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Aug 19 00:08:27.111659 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 00:08:27.114323 containerd[1530]: time="2025-08-19T00:08:27.114264580Z" level=info msg="Start subscribing containerd event" Aug 19 00:08:27.114382 containerd[1530]: time="2025-08-19T00:08:27.114349221Z" level=info msg="Start recovering state" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114443935Z" level=info msg="Start event monitor" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114463221Z" level=info msg="Start cni network conf syncer for default" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114482937Z" level=info msg="Start streaming server" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114491839Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114499139Z" level=info msg="runtime interface starting up..." Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114512960Z" level=info msg="starting plugins..." Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114530919Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114600451Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114658193Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 00:08:27.114842 containerd[1530]: time="2025-08-19T00:08:27.114709103Z" level=info msg="containerd successfully booted in 0.129268s" Aug 19 00:08:27.116148 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 00:08:27.119689 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 19 00:08:27.121244 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 00:08:27.122622 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 00:08:28.542975 systemd-networkd[1441]: eth0: Gained IPv6LL Aug 19 00:08:28.545317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 00:08:28.547138 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 00:08:28.549720 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 00:08:28.552396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:28.567846 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 00:08:28.588016 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 00:08:28.588298 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 00:08:28.590411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 00:08:28.599550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 00:08:29.222623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:29.224519 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 00:08:29.226572 systemd[1]: Startup finished in 2.068s (kernel) + 5.173s (initrd) + 4.482s (userspace) = 11.723s. 
Aug 19 00:08:29.227215 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:08:29.737358 kubelet[1636]: E0819 00:08:29.737302 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:08:29.739864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:08:29.740009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:08:29.740345 systemd[1]: kubelet.service: Consumed 879ms CPU time, 256.5M memory peak. Aug 19 00:08:32.695680 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 00:08:32.696699 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:45376.service - OpenSSH per-connection server daemon (10.0.0.1:45376). Aug 19 00:08:32.803164 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 45376 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:32.807033 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:32.823497 systemd-logind[1514]: New session 1 of user core. Aug 19 00:08:32.824505 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 00:08:32.825876 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 00:08:32.854877 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 00:08:32.859564 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 00:08:32.884322 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 00:08:32.886625 systemd-logind[1514]: New session c1 of user core. Aug 19 00:08:33.014130 systemd[1655]: Queued start job for default target default.target. Aug 19 00:08:33.024889 systemd[1655]: Created slice app.slice - User Application Slice. Aug 19 00:08:33.024917 systemd[1655]: Reached target paths.target - Paths. Aug 19 00:08:33.024955 systemd[1655]: Reached target timers.target - Timers. Aug 19 00:08:33.026199 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 00:08:33.044289 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 00:08:33.044406 systemd[1655]: Reached target sockets.target - Sockets. Aug 19 00:08:33.044450 systemd[1655]: Reached target basic.target - Basic System. Aug 19 00:08:33.044477 systemd[1655]: Reached target default.target - Main User Target. Aug 19 00:08:33.044506 systemd[1655]: Startup finished in 147ms. Aug 19 00:08:33.044893 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 00:08:33.047615 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 00:08:33.114616 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:45384.service - OpenSSH per-connection server daemon (10.0.0.1:45384). Aug 19 00:08:33.182209 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:33.184168 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:33.187862 systemd-logind[1514]: New session 2 of user core. 
Aug 19 00:08:33.197986 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 00:08:33.249471 sshd[1669]: Connection closed by 10.0.0.1 port 45384 Aug 19 00:08:33.249825 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Aug 19 00:08:33.263101 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:45384.service: Deactivated successfully. Aug 19 00:08:33.264667 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 00:08:33.265430 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Aug 19 00:08:33.268080 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:45398.service - OpenSSH per-connection server daemon (10.0.0.1:45398). Aug 19 00:08:33.268894 systemd-logind[1514]: Removed session 2. Aug 19 00:08:33.347046 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 45398 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:33.348457 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:33.352902 systemd-logind[1514]: New session 3 of user core. Aug 19 00:08:33.370026 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 00:08:33.417910 sshd[1678]: Connection closed by 10.0.0.1 port 45398 Aug 19 00:08:33.418836 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Aug 19 00:08:33.428556 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:45398.service: Deactivated successfully. Aug 19 00:08:33.431634 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 00:08:33.432437 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Aug 19 00:08:33.435332 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414). Aug 19 00:08:33.435794 systemd-logind[1514]: Removed session 3. Aug 19 00:08:33.494846 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:33.496139 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:33.499646 systemd-logind[1514]: New session 4 of user core. Aug 19 00:08:33.515082 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 00:08:33.566878 sshd[1687]: Connection closed by 10.0.0.1 port 45414 Aug 19 00:08:33.566919 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Aug 19 00:08:33.581161 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:45414.service: Deactivated successfully. Aug 19 00:08:33.584202 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 00:08:33.584853 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Aug 19 00:08:33.587107 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:45420.service - OpenSSH per-connection server daemon (10.0.0.1:45420). Aug 19 00:08:33.588219 systemd-logind[1514]: Removed session 4. Aug 19 00:08:33.642634 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 45420 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:33.643989 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:33.647855 systemd-logind[1514]: New session 5 of user core. Aug 19 00:08:33.659021 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 19 00:08:33.729579 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 00:08:33.729897 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:08:33.740760 sudo[1698]: pam_unix(sudo:session): session closed for user root Aug 19 00:08:33.745013 sshd[1697]: Connection closed by 10.0.0.1 port 45420 Aug 19 00:08:33.745643 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Aug 19 00:08:33.764572 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:45420.service: Deactivated successfully. Aug 19 00:08:33.767310 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 00:08:33.768270 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Aug 19 00:08:33.770097 systemd-logind[1514]: Removed session 5. Aug 19 00:08:33.771968 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:45428.service - OpenSSH per-connection server daemon (10.0.0.1:45428). Aug 19 00:08:33.833902 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 45428 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:33.834466 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:33.838873 systemd-logind[1514]: New session 6 of user core. Aug 19 00:08:33.855026 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 00:08:33.908526 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 00:08:33.909186 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:08:33.981347 sudo[1709]: pam_unix(sudo:session): session closed for user root Aug 19 00:08:33.988162 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 00:08:33.988421 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:08:33.998607 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:08:34.049754 augenrules[1731]: No rules Aug 19 00:08:34.050920 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:08:34.051133 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:08:34.052038 sudo[1708]: pam_unix(sudo:session): session closed for user root Aug 19 00:08:34.053177 sshd[1707]: Connection closed by 10.0.0.1 port 45428 Aug 19 00:08:34.053529 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Aug 19 00:08:34.065768 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:45428.service: Deactivated successfully. Aug 19 00:08:34.067356 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 00:08:34.068204 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Aug 19 00:08:34.070654 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Aug 19 00:08:34.071391 systemd-logind[1514]: Removed session 6. Aug 19 00:08:34.110733 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:08:34.112033 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:08:34.115886 systemd-logind[1514]: New session 7 of user core. Aug 19 00:08:34.124989 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 19 00:08:34.174705 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 00:08:34.174992 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:08:34.588024 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 00:08:34.611194 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 00:08:34.919146 dockerd[1765]: time="2025-08-19T00:08:34.919008143Z" level=info msg="Starting up" Aug 19 00:08:34.920234 dockerd[1765]: time="2025-08-19T00:08:34.920202356Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 00:08:34.931058 dockerd[1765]: time="2025-08-19T00:08:34.931014262Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 00:08:34.945471 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport769198352-merged.mount: Deactivated successfully. Aug 19 00:08:34.977353 dockerd[1765]: time="2025-08-19T00:08:34.977298435Z" level=info msg="Loading containers: start." Aug 19 00:08:34.988846 kernel: Initializing XFRM netlink socket Aug 19 00:08:35.227449 systemd-networkd[1441]: docker0: Link UP Aug 19 00:08:35.233771 dockerd[1765]: time="2025-08-19T00:08:35.233714407Z" level=info msg="Loading containers: done." Aug 19 00:08:35.249085 dockerd[1765]: time="2025-08-19T00:08:35.249018160Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 00:08:35.249254 dockerd[1765]: time="2025-08-19T00:08:35.249128722Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 00:08:35.249254 dockerd[1765]: time="2025-08-19T00:08:35.249230398Z" level=info msg="Initializing buildkit" Aug 19 00:08:35.277861 dockerd[1765]: time="2025-08-19T00:08:35.277785666Z" level=info msg="Completed buildkit initialization" Aug 19 00:08:35.286619 dockerd[1765]: time="2025-08-19T00:08:35.286493871Z" level=info msg="Daemon has completed initialization" Aug 19 00:08:35.286766 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 00:08:35.287706 dockerd[1765]: time="2025-08-19T00:08:35.287299263Z" level=info msg="API listen on /run/docker.sock" Aug 19 00:08:36.009350 containerd[1530]: time="2025-08-19T00:08:36.009267584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Aug 19 00:08:36.668471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343905884.mount: Deactivated successfully. 
Aug 19 00:08:38.197502 containerd[1530]: time="2025-08-19T00:08:38.197441518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:38.197953 containerd[1530]: time="2025-08-19T00:08:38.197925871Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Aug 19 00:08:38.198956 containerd[1530]: time="2025-08-19T00:08:38.198896406Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:38.204404 containerd[1530]: time="2025-08-19T00:08:38.204318803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:38.205266 containerd[1530]: time="2025-08-19T00:08:38.205119560Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 2.195780235s" Aug 19 00:08:38.205266 containerd[1530]: time="2025-08-19T00:08:38.205164709Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Aug 19 00:08:38.209482 containerd[1530]: time="2025-08-19T00:08:38.209437961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Aug 19 00:08:39.525766 containerd[1530]: time="2025-08-19T00:08:39.525686071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:39.609724 containerd[1530]: time="2025-08-19T00:08:39.609666111Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Aug 19 00:08:39.610741 containerd[1530]: time="2025-08-19T00:08:39.610694283Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:39.613825 containerd[1530]: time="2025-08-19T00:08:39.613629245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:39.616333 containerd[1530]: time="2025-08-19T00:08:39.616289022Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.406805862s" Aug 19 00:08:39.616523 containerd[1530]: time="2025-08-19T00:08:39.616460988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Aug 19 
00:08:39.616961 containerd[1530]: time="2025-08-19T00:08:39.616943328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Aug 19 00:08:39.990367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 00:08:39.991775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:40.132513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:40.136743 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:08:40.186067 kubelet[2049]: E0819 00:08:40.186006 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:08:40.192919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:08:40.193062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:08:40.193367 systemd[1]: kubelet.service: Consumed 163ms CPU time, 108.7M memory peak. Aug 19 00:08:41.156167 containerd[1530]: time="2025-08-19T00:08:41.155976711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:41.157366 containerd[1530]: time="2025-08-19T00:08:41.157329532Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Aug 19 00:08:41.158833 containerd[1530]: time="2025-08-19T00:08:41.158401436Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:41.161447 containerd[1530]: time="2025-08-19T00:08:41.161407324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:41.162143 containerd[1530]: time="2025-08-19T00:08:41.162106968Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.545050738s" Aug 19 00:08:41.162143 containerd[1530]: time="2025-08-19T00:08:41.162141998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Aug 19 00:08:41.162885 containerd[1530]: time="2025-08-19T00:08:41.162792943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Aug 19 00:08:42.306014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993901963.mount: Deactivated successfully. 
Aug 19 00:08:42.521382 containerd[1530]: time="2025-08-19T00:08:42.521317694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:42.522352 containerd[1530]: time="2025-08-19T00:08:42.522320640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Aug 19 00:08:42.523392 containerd[1530]: time="2025-08-19T00:08:42.523334470Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:42.525836 containerd[1530]: time="2025-08-19T00:08:42.525469702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:42.525985 containerd[1530]: time="2025-08-19T00:08:42.525958715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.363064424s" Aug 19 00:08:42.526058 containerd[1530]: time="2025-08-19T00:08:42.526043321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Aug 19 00:08:42.526746 containerd[1530]: time="2025-08-19T00:08:42.526532494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 00:08:43.097162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558702612.mount: Deactivated successfully. 
Aug 19 00:08:43.769944 containerd[1530]: time="2025-08-19T00:08:43.769882157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:43.771961 containerd[1530]: time="2025-08-19T00:08:43.771915746Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 19 00:08:43.781360 containerd[1530]: time="2025-08-19T00:08:43.781263019Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:43.785866 containerd[1530]: time="2025-08-19T00:08:43.785781197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:43.786871 containerd[1530]: time="2025-08-19T00:08:43.786831536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.260249991s" Aug 19 00:08:43.786871 containerd[1530]: time="2025-08-19T00:08:43.786872619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 19 00:08:43.787664 containerd[1530]: time="2025-08-19T00:08:43.787440567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 00:08:44.216444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715554121.mount: Deactivated successfully. 
Aug 19 00:08:44.220794 containerd[1530]: time="2025-08-19T00:08:44.220733478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:08:44.221394 containerd[1530]: time="2025-08-19T00:08:44.221360920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 19 00:08:44.222442 containerd[1530]: time="2025-08-19T00:08:44.222400819Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:08:44.224945 containerd[1530]: time="2025-08-19T00:08:44.224870927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:08:44.226218 containerd[1530]: time="2025-08-19T00:08:44.226084435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 438.610878ms" Aug 19 00:08:44.226218 containerd[1530]: time="2025-08-19T00:08:44.226137623Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 19 00:08:44.227053 containerd[1530]: time="2025-08-19T00:08:44.226977937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 19 00:08:44.737474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237751242.mount: Deactivated successfully. 
Aug 19 00:08:46.725751 containerd[1530]: time="2025-08-19T00:08:46.725695071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:46.726912 containerd[1530]: time="2025-08-19T00:08:46.726874351Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Aug 19 00:08:46.728391 containerd[1530]: time="2025-08-19T00:08:46.728346116Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:46.732355 containerd[1530]: time="2025-08-19T00:08:46.732309188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:08:46.733475 containerd[1530]: time="2025-08-19T00:08:46.733429740Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.506373991s" Aug 19 00:08:46.733475 containerd[1530]: time="2025-08-19T00:08:46.733467428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 19 00:08:50.443478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 00:08:50.445097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:50.690178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:50.707655 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:08:50.748956 kubelet[2211]: E0819 00:08:50.748888 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:08:50.751461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:08:50.751596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:08:50.751938 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak. Aug 19 00:08:52.640185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:52.640330 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak. Aug 19 00:08:52.642405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:52.665391 systemd[1]: Reload requested from client PID 2226 ('systemctl') (unit session-7.scope)... Aug 19 00:08:52.665410 systemd[1]: Reloading... Aug 19 00:08:52.737841 zram_generator::config[2271]: No configuration found. Aug 19 00:08:52.965113 systemd[1]: Reloading finished in 299 ms. Aug 19 00:08:53.038590 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 00:08:53.038684 systemd[1]: kubelet.service: Failed with result 'signal'. 
Aug 19 00:08:53.039017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:53.039072 systemd[1]: kubelet.service: Consumed 100ms CPU time, 95M memory peak. Aug 19 00:08:53.040808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:53.167391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:53.173374 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:08:53.223537 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:08:53.223537 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 00:08:53.223537 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:08:53.223537 kubelet[2313]: I0819 00:08:53.223249 2313 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:08:54.209849 kubelet[2313]: I0819 00:08:54.209297 2313 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 00:08:54.209849 kubelet[2313]: I0819 00:08:54.209332 2313 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:08:54.209849 kubelet[2313]: I0819 00:08:54.209609 2313 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 00:08:54.254276 kubelet[2313]: E0819 00:08:54.253126 2313 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:54.257874 kubelet[2313]: I0819 00:08:54.257846 2313 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:08:54.265120 kubelet[2313]: I0819 00:08:54.265086 2313 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:08:54.269122 kubelet[2313]: I0819 00:08:54.269086 2313 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:08:54.270021 kubelet[2313]: I0819 00:08:54.269986 2313 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 00:08:54.270204 kubelet[2313]: I0819 00:08:54.270158 2313 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:08:54.270467 kubelet[2313]: I0819 00:08:54.270199 2313 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:08:54.270613 kubelet[2313]: I0819 00:08:54.270529 2313 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:08:54.270613 kubelet[2313]: I0819 00:08:54.270540 2313 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 00:08:54.271061 kubelet[2313]: I0819 00:08:54.270982 2313 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:08:54.274169 kubelet[2313]: I0819 00:08:54.274131 2313 kubelet.go:408] "Attempting to sync node with API server" Aug 19 00:08:54.274214 kubelet[2313]: I0819 00:08:54.274177 2313 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:08:54.274214 kubelet[2313]: I0819 00:08:54.274205 2313 kubelet.go:314] "Adding apiserver pod source" Aug 19 00:08:54.274580 kubelet[2313]: I0819 00:08:54.274318 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:08:54.275921 kubelet[2313]: W0819 00:08:54.275482 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Aug 19 00:08:54.276001 kubelet[2313]: E0819 00:08:54.275935 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:54.278422 kubelet[2313]: W0819 00:08:54.278362 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Aug 19 00:08:54.278485 kubelet[2313]: E0819 00:08:54.278425 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:54.280445 kubelet[2313]: I0819 00:08:54.280415 2313 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:08:54.283202 kubelet[2313]: I0819 00:08:54.283171 2313 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:08:54.285055 kubelet[2313]: W0819 00:08:54.285022 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 00:08:54.286435 kubelet[2313]: I0819 00:08:54.286256 2313 server.go:1274] "Started kubelet" Aug 19 00:08:54.286924 kubelet[2313]: I0819 00:08:54.286870 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:08:54.291104 kubelet[2313]: I0819 00:08:54.290759 2313 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:08:54.291242 kubelet[2313]: I0819 00:08:54.291211 2313 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:08:54.291973 kubelet[2313]: I0819 00:08:54.291945 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:08:54.293452 kubelet[2313]: I0819 00:08:54.293418 2313 server.go:449] "Adding debug handlers to kubelet server" Aug 19 00:08:54.293865 kubelet[2313]: E0819 00:08:54.292787 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d0275b341bad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 00:08:54.286219993 +0000 UTC m=+1.108862040,LastTimestamp:2025-08-19 00:08:54.286219993 +0000 UTC m=+1.108862040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 00:08:54.294621 kubelet[2313]: I0819 00:08:54.294594 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:08:54.295505 kubelet[2313]: I0819 00:08:54.295430 2313 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 00:08:54.295565 kubelet[2313]: E0819 00:08:54.295547 2313 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:08:54.295589 kubelet[2313]: I0819 00:08:54.295583 2313 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 00:08:54.295660 kubelet[2313]: I0819 00:08:54.295638 2313 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:08:54.296388 kubelet[2313]: W0819 00:08:54.296321 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Aug 19 00:08:54.296491 kubelet[2313]: E0819 00:08:54.296460 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:54.296918 kubelet[2313]: E0819 00:08:54.296892 2313 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:08:54.297080 kubelet[2313]: I0819 00:08:54.297055 2313 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:08:54.297080 kubelet[2313]: E0819 00:08:54.297062 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Aug 19 00:08:54.297169 kubelet[2313]: I0819 00:08:54.297150 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:08:54.298556 kubelet[2313]: I0819 00:08:54.298520 2313 factory.go:221] Registration of the containerd container factory successfully Aug 19 00:08:54.309417 kubelet[2313]: I0819 00:08:54.309332 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:08:54.310598 kubelet[2313]: I0819 00:08:54.310555 2313 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 00:08:54.310598 kubelet[2313]: I0819 00:08:54.310590 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 00:08:54.310703 kubelet[2313]: I0819 00:08:54.310614 2313 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 00:08:54.310703 kubelet[2313]: E0819 00:08:54.310665 2313 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:08:54.315342 kubelet[2313]: W0819 00:08:54.315170 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Aug 19 00:08:54.315342 kubelet[2313]: E0819 00:08:54.315231 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:54.316300 kubelet[2313]: I0819 00:08:54.316248 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 00:08:54.316300 kubelet[2313]: I0819 00:08:54.316267 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 00:08:54.316300 kubelet[2313]: I0819 00:08:54.316287 2313 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:08:54.396009 kubelet[2313]: E0819 00:08:54.395958 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:08:54.403336 kubelet[2313]: I0819 00:08:54.403279 2313 policy_none.go:49] "None policy: Start" Aug 19 00:08:54.404158 kubelet[2313]: I0819 00:08:54.404130 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 00:08:54.404158 kubelet[2313]: I0819 00:08:54.404157 2313 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:08:54.410317 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 00:08:54.410847 kubelet[2313]: E0819 00:08:54.410804 2313 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 00:08:54.422220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 00:08:54.425579 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 00:08:54.433991 kubelet[2313]: I0819 00:08:54.433797 2313 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:08:54.434428 kubelet[2313]: I0819 00:08:54.434042 2313 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:08:54.434428 kubelet[2313]: I0819 00:08:54.434054 2313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:08:54.434428 kubelet[2313]: I0819 00:08:54.434305 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:08:54.438794 kubelet[2313]: E0819 00:08:54.438755 2313 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 00:08:54.498192 kubelet[2313]: E0819 00:08:54.498064 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Aug 19 00:08:54.535298 kubelet[2313]: I0819 00:08:54.535268 2313 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 00:08:54.535759 kubelet[2313]: E0819 00:08:54.535734 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Aug 19 00:08:54.622613 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Aug 19 00:08:54.639509 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Aug 19 00:08:54.643274 systemd[1]: Created slice kubepods-burstable-pod1a742e571b2d4fb3e19afeabaa8ef3d7.slice - libcontainer container kubepods-burstable-pod1a742e571b2d4fb3e19afeabaa8ef3d7.slice. 
Aug 19 00:08:54.697260 kubelet[2313]: I0819 00:08:54.697218 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 00:08:54.697441 kubelet[2313]: I0819 00:08:54.697424 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:08:54.697518 kubelet[2313]: I0819 00:08:54.697506 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:08:54.697605 kubelet[2313]: I0819 00:08:54.697581 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:08:54.697681 kubelet[2313]: I0819 00:08:54.697668 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:08:54.697747 kubelet[2313]: I0819 00:08:54.697736 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:08:54.697883 kubelet[2313]: I0819 00:08:54.697798 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:08:54.697883 kubelet[2313]: I0819 00:08:54.697839 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:08:54.697883 kubelet[2313]: I0819 00:08:54.697857 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 19 00:08:54.737786 kubelet[2313]: I0819 00:08:54.737743 2313 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 00:08:54.738192 kubelet[2313]: E0819 00:08:54.738155 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Aug 19 00:08:54.898796 kubelet[2313]: E0819 00:08:54.898646 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Aug 19 00:08:54.937592 containerd[1530]: time="2025-08-19T00:08:54.937549584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Aug 19 00:08:54.942257 containerd[1530]: time="2025-08-19T00:08:54.942198114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Aug 19 00:08:54.946124 containerd[1530]: time="2025-08-19T00:08:54.946027458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a742e571b2d4fb3e19afeabaa8ef3d7,Namespace:kube-system,Attempt:0,}" Aug 19 00:08:54.963161 containerd[1530]: time="2025-08-19T00:08:54.962979008Z" level=info msg="connecting to shim 4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47" address="unix:///run/containerd/s/ec311c999e8c778f20deb85ecf306cf42e654a766403a8020861142c0bb68473" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:08:54.977202 containerd[1530]: time="2025-08-19T00:08:54.977022853Z" level=info msg="connecting to shim c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468" address="unix:///run/containerd/s/2593007842d9ad3e849c0267c379c0935ef1ea6e552d1fa886b81d8f9c5d1acd" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:08:54.994109 systemd[1]: Started cri-containerd-4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47.scope - libcontainer container 4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47. Aug 19 00:08:54.997204 containerd[1530]: time="2025-08-19T00:08:54.997149854Z" level=info msg="connecting to shim fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33" address="unix:///run/containerd/s/158211dfffa8fddedc477a90a155e0b4ed0117cfe35e62076cf9d0267a06228d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:08:55.013019 systemd[1]: Started cri-containerd-c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468.scope - libcontainer container c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468. Aug 19 00:08:55.018005 systemd[1]: Started cri-containerd-fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33.scope - libcontainer container fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33. 
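At this stage the sandboxes for the control-plane static pods are being created while the API server at 10.0.0.31:6443 is still refusing connections (the lease and node-registration failures above), which is the expected chicken-and-egg window of a static-pod bootstrap. A hedged sketch of how that window could be observed from the node, assuming crictl is pointed at containerd's default CRI socket (socket path is an assumption, not taken from this log):

  # Illustrative only.
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods --namespace kube-system
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
  curl -ks https://10.0.0.31:6443/healthz    # expected to fail with "connection refused" until kube-apiserver is serving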
Aug 19 00:08:55.059314 containerd[1530]: time="2025-08-19T00:08:55.059200886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47\"" Aug 19 00:08:55.064485 containerd[1530]: time="2025-08-19T00:08:55.064445855Z" level=info msg="CreateContainer within sandbox \"4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 00:08:55.088575 containerd[1530]: time="2025-08-19T00:08:55.088524722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a742e571b2d4fb3e19afeabaa8ef3d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33\"" Aug 19 00:08:55.091395 containerd[1530]: time="2025-08-19T00:08:55.091327004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468\"" Aug 19 00:08:55.092158 containerd[1530]: time="2025-08-19T00:08:55.091545479Z" level=info msg="CreateContainer within sandbox \"fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 00:08:55.094398 containerd[1530]: time="2025-08-19T00:08:55.094347241Z" level=info msg="CreateContainer within sandbox \"c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 00:08:55.106372 containerd[1530]: time="2025-08-19T00:08:55.106220670Z" level=info msg="Container 8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:08:55.116831 containerd[1530]: time="2025-08-19T00:08:55.116668871Z" level=info msg="Container 43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:08:55.119553 containerd[1530]: time="2025-08-19T00:08:55.119065344Z" level=info msg="Container e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:08:55.120387 containerd[1530]: time="2025-08-19T00:08:55.120307356Z" level=info msg="CreateContainer within sandbox \"4ff15896127ea4399418c04eef6ce3125d29126fce975bb007fda6232a176f47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c\"" Aug 19 00:08:55.121033 containerd[1530]: time="2025-08-19T00:08:55.121002239Z" level=info msg="StartContainer for \"8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c\"" Aug 19 00:08:55.122105 containerd[1530]: time="2025-08-19T00:08:55.122078586Z" level=info msg="connecting to shim 8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c" address="unix:///run/containerd/s/ec311c999e8c778f20deb85ecf306cf42e654a766403a8020861142c0bb68473" protocol=ttrpc version=3 Aug 19 00:08:55.140154 kubelet[2313]: I0819 00:08:55.140114 2313 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 00:08:55.140498 kubelet[2313]: E0819 00:08:55.140471 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Aug 19 00:08:55.143062 systemd[1]: Started cri-containerd-8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c.scope - libcontainer container 8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c. Aug 19 00:08:55.150769 containerd[1530]: time="2025-08-19T00:08:55.150561141Z" level=info msg="CreateContainer within sandbox \"c64097848babf23563fbd5a251c2d2688792121062235c3609327593fe20d468\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a\"" Aug 19 00:08:55.151228 containerd[1530]: time="2025-08-19T00:08:55.151191942Z" level=info msg="StartContainer for \"43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a\"" Aug 19 00:08:55.152730 containerd[1530]: time="2025-08-19T00:08:55.152682212Z" level=info msg="connecting to shim 43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a" address="unix:///run/containerd/s/2593007842d9ad3e849c0267c379c0935ef1ea6e552d1fa886b81d8f9c5d1acd" protocol=ttrpc version=3 Aug 19 00:08:55.162609 containerd[1530]: time="2025-08-19T00:08:55.162535152Z" level=info msg="CreateContainer within sandbox \"fa0916b3d5f0c7e7ccb388f6b7e8f6c53197d2083ab9c27d46622636db606e33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f\"" Aug 19 00:08:55.163718 containerd[1530]: time="2025-08-19T00:08:55.163673503Z" level=info msg="StartContainer for \"e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f\"" Aug 19 00:08:55.168645 containerd[1530]: time="2025-08-19T00:08:55.168508186Z" level=info msg="connecting to shim e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f" address="unix:///run/containerd/s/158211dfffa8fddedc477a90a155e0b4ed0117cfe35e62076cf9d0267a06228d" protocol=ttrpc version=3 Aug 19 00:08:55.176093 systemd[1]: Started cri-containerd-43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a.scope - libcontainer container 43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a. Aug 19 00:08:55.189936 containerd[1530]: time="2025-08-19T00:08:55.189644251Z" level=info msg="StartContainer for \"8f5f6e15f62cb9c16d623c8a4fa8fe35fc4c97cbaa23607e270ce5c6313da90c\" returns successfully" Aug 19 00:08:55.217023 systemd[1]: Started cri-containerd-e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f.scope - libcontainer container e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f. 
Aug 19 00:08:55.245702 containerd[1530]: time="2025-08-19T00:08:55.245657226Z" level=info msg="StartContainer for \"43b9211b33184bd77d5297d19083945701058d2c2341867073564d8ad5177f5a\" returns successfully" Aug 19 00:08:55.284222 kubelet[2313]: W0819 00:08:55.284155 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Aug 19 00:08:55.284582 kubelet[2313]: E0819 00:08:55.284239 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:08:55.313946 containerd[1530]: time="2025-08-19T00:08:55.312606883Z" level=info msg="StartContainer for \"e04ceed52c3ad843f93b0abe11bc8bcbb3df5de422c3e24b23d34a68b373934f\" returns successfully" Aug 19 00:08:55.942642 kubelet[2313]: I0819 00:08:55.942305 2313 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 00:08:57.175947 kubelet[2313]: E0819 00:08:57.175899 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 00:08:57.234324 kubelet[2313]: E0819 00:08:57.234054 2313 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185d0275b341bad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 00:08:54.286219993 +0000 UTC m=+1.108862040,LastTimestamp:2025-08-19 00:08:54.286219993 +0000 UTC m=+1.108862040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 00:08:57.275837 kubelet[2313]: I0819 00:08:57.275754 2313 apiserver.go:52] "Watching apiserver" Aug 19 00:08:57.296271 kubelet[2313]: I0819 00:08:57.295799 2313 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 00:08:57.305177 kubelet[2313]: I0819 00:08:57.305119 2313 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 00:08:59.660839 systemd[1]: Reload requested from client PID 2589 ('systemctl') (unit session-7.scope)... Aug 19 00:08:59.660855 systemd[1]: Reloading... Aug 19 00:08:59.725856 zram_generator::config[2632]: No configuration found. Aug 19 00:08:59.906948 systemd[1]: Reloading finished in 245 ms. Aug 19 00:08:59.943209 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:08:59.963081 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 00:08:59.963370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:08:59.963462 systemd[1]: kubelet.service: Consumed 1.564s CPU time, 127.5M memory peak. Aug 19 00:08:59.965987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:09:00.111401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
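The block above ends with a systemd reload followed by kubelet.service being stopped and started again; the kubelet PID changes from 2313 to 2674 in the entries that follow. A small sketch, assuming a root shell on the host, of the usual way such a restart is requested and verified:

  # Illustrative; the log only records the result of a reload requested through systemctl.
  systemctl daemon-reload
  systemctl restart kubelet.service
  systemctl status kubelet.service --no-pager
  journalctl -u kubelet.service -b --no-pager | tail -n 20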
Aug 19 00:09:00.117059 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:09:00.182993 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:09:00.182993 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 00:09:00.182993 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:09:00.183775 kubelet[2674]: I0819 00:09:00.183726 2674 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:09:00.201205 kubelet[2674]: I0819 00:09:00.201079 2674 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 00:09:00.201205 kubelet[2674]: I0819 00:09:00.201117 2674 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:09:00.202111 kubelet[2674]: I0819 00:09:00.202080 2674 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 00:09:00.204005 kubelet[2674]: I0819 00:09:00.203975 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 00:09:00.207290 kubelet[2674]: I0819 00:09:00.207123 2674 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:09:00.221220 kubelet[2674]: I0819 00:09:00.221190 2674 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:09:00.224186 kubelet[2674]: I0819 00:09:00.224147 2674 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:09:00.224313 kubelet[2674]: I0819 00:09:00.224288 2674 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 00:09:00.224485 kubelet[2674]: I0819 00:09:00.224432 2674 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:09:00.224689 kubelet[2674]: I0819 00:09:00.224467 2674 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:09:00.224689 kubelet[2674]: I0819 00:09:00.224686 2674 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:09:00.224836 kubelet[2674]: I0819 00:09:00.224700 2674 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 00:09:00.224836 kubelet[2674]: I0819 00:09:00.224739 2674 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:09:00.224895 kubelet[2674]: I0819 00:09:00.224879 2674 kubelet.go:408] "Attempting to sync node with API server" Aug 19 00:09:00.224895 kubelet[2674]: I0819 00:09:00.224894 2674 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:09:00.224938 kubelet[2674]: I0819 00:09:00.224914 2674 kubelet.go:314] "Adding apiserver pod source" Aug 19 00:09:00.224938 kubelet[2674]: I0819 00:09:00.224927 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:09:00.227019 kubelet[2674]: I0819 00:09:00.226989 2674 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:09:00.227914 kubelet[2674]: I0819 00:09:00.227880 2674 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:09:00.228553 kubelet[2674]: I0819 00:09:00.228528 2674 server.go:1274] "Started kubelet" Aug 19 00:09:00.229140 kubelet[2674]: I0819 00:09:00.229099 2674 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:09:00.229615 kubelet[2674]: I0819 
00:09:00.229557 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:09:00.229804 kubelet[2674]: I0819 00:09:00.229777 2674 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:09:00.231550 kubelet[2674]: I0819 00:09:00.231520 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:09:00.231732 kubelet[2674]: I0819 00:09:00.231699 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:09:00.238593 kubelet[2674]: I0819 00:09:00.237606 2674 server.go:449] "Adding debug handlers to kubelet server" Aug 19 00:09:00.239449 kubelet[2674]: E0819 00:09:00.239119 2674 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:09:00.239449 kubelet[2674]: I0819 00:09:00.239167 2674 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 00:09:00.240367 kubelet[2674]: I0819 00:09:00.240345 2674 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 00:09:00.245139 kubelet[2674]: I0819 00:09:00.244343 2674 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:09:00.252554 kubelet[2674]: E0819 00:09:00.252371 2674 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:09:00.262516 kubelet[2674]: I0819 00:09:00.261985 2674 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:09:00.262516 kubelet[2674]: I0819 00:09:00.262104 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:09:00.264081 kubelet[2674]: I0819 00:09:00.263732 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:09:00.265910 kubelet[2674]: I0819 00:09:00.265872 2674 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 00:09:00.265910 kubelet[2674]: I0819 00:09:00.265909 2674 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 00:09:00.266057 kubelet[2674]: I0819 00:09:00.265931 2674 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 00:09:00.266057 kubelet[2674]: E0819 00:09:00.265982 2674 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:09:00.266577 kubelet[2674]: I0819 00:09:00.266264 2674 factory.go:221] Registration of the containerd container factory successfully Aug 19 00:09:00.306454 kubelet[2674]: I0819 00:09:00.306421 2674 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 00:09:00.306454 kubelet[2674]: I0819 00:09:00.306444 2674 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 00:09:00.306619 kubelet[2674]: I0819 00:09:00.306477 2674 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:09:00.306675 kubelet[2674]: I0819 00:09:00.306656 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 00:09:00.306709 kubelet[2674]: I0819 00:09:00.306672 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 00:09:00.306709 kubelet[2674]: I0819 00:09:00.306694 2674 policy_none.go:49] "None policy: Start" Aug 19 00:09:00.307726 kubelet[2674]: I0819 00:09:00.307702 2674 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 00:09:00.307804 kubelet[2674]: I0819 00:09:00.307738 2674 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:09:00.307945 kubelet[2674]: I0819 00:09:00.307929 2674 state_mem.go:75] "Updated machine memory state" Aug 19 00:09:00.312452 kubelet[2674]: I0819 00:09:00.312403 2674 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:09:00.313302 kubelet[2674]: I0819 00:09:00.313285 2674 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:09:00.313382 kubelet[2674]: I0819 00:09:00.313301 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:09:00.313984 kubelet[2674]: I0819 00:09:00.313563 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:09:00.416702 kubelet[2674]: I0819 00:09:00.416178 2674 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 00:09:00.427882 kubelet[2674]: I0819 00:09:00.427839 2674 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 19 00:09:00.428054 kubelet[2674]: I0819 00:09:00.427940 2674 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 00:09:00.445889 kubelet[2674]: I0819 00:09:00.445843 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:09:00.445889 kubelet[2674]: I0819 00:09:00.445885 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 
00:09:00.446060 kubelet[2674]: I0819 00:09:00.445927 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 00:09:00.446060 kubelet[2674]: I0819 00:09:00.445944 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:09:00.446060 kubelet[2674]: I0819 00:09:00.445963 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:09:00.446060 kubelet[2674]: I0819 00:09:00.445979 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:09:00.446060 kubelet[2674]: I0819 00:09:00.446018 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a742e571b2d4fb3e19afeabaa8ef3d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a742e571b2d4fb3e19afeabaa8ef3d7\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:09:00.446213 kubelet[2674]: I0819 00:09:00.446033 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:09:00.446213 kubelet[2674]: I0819 00:09:00.446049 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:09:01.225886 kubelet[2674]: I0819 00:09:01.225527 2674 apiserver.go:52] "Watching apiserver" Aug 19 00:09:01.246058 kubelet[2674]: I0819 00:09:01.246015 2674 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 00:09:01.324599 kubelet[2674]: I0819 00:09:01.324473 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.324442756 podStartE2EDuration="1.324442756s" podCreationTimestamp="2025-08-19 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:01.314755309 +0000 UTC m=+1.194144167" 
watchObservedRunningTime="2025-08-19 00:09:01.324442756 +0000 UTC m=+1.203831574" Aug 19 00:09:01.334993 kubelet[2674]: I0819 00:09:01.334892 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.334871518 podStartE2EDuration="1.334871518s" podCreationTimestamp="2025-08-19 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:01.325080272 +0000 UTC m=+1.204469130" watchObservedRunningTime="2025-08-19 00:09:01.334871518 +0000 UTC m=+1.214260336" Aug 19 00:09:01.347373 kubelet[2674]: I0819 00:09:01.347309 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.347286625 podStartE2EDuration="1.347286625s" podCreationTimestamp="2025-08-19 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:01.335384154 +0000 UTC m=+1.214773012" watchObservedRunningTime="2025-08-19 00:09:01.347286625 +0000 UTC m=+1.226675443" Aug 19 00:09:05.070544 kubelet[2674]: I0819 00:09:05.070440 2674 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 00:09:05.071762 kubelet[2674]: I0819 00:09:05.071390 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 00:09:05.072081 containerd[1530]: time="2025-08-19T00:09:05.071139687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 00:09:05.877378 systemd[1]: Created slice kubepods-besteffort-pod0c2d0763_ecb3_47b8_b907_672b398e3d29.slice - libcontainer container kubepods-besteffort-pod0c2d0763_ecb3_47b8_b907_672b398e3d29.slice. 
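The entries above show the kubelet receiving a pod CIDR of 192.168.0.0/24 once the node object exists, forwarding it to the runtime, and the first workload pod (kube-proxy-z7b85) being set up. The same state could be confirmed from kubectl, assuming admin credentials for this cluster:

  # Illustrative; names are taken from the log entries above.
  kubectl get node localhost -o jsonpath='{.spec.podCIDR}{"\n"}'    # expect 192.168.0.0/24
  kubectl -n kube-system get pods -o wide | grep kube-proxy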
Aug 19 00:09:05.878532 kubelet[2674]: I0819 00:09:05.878167 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c2d0763-ecb3-47b8-b907-672b398e3d29-kube-proxy\") pod \"kube-proxy-z7b85\" (UID: \"0c2d0763-ecb3-47b8-b907-672b398e3d29\") " pod="kube-system/kube-proxy-z7b85" Aug 19 00:09:05.878532 kubelet[2674]: I0819 00:09:05.878211 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c2d0763-ecb3-47b8-b907-672b398e3d29-xtables-lock\") pod \"kube-proxy-z7b85\" (UID: \"0c2d0763-ecb3-47b8-b907-672b398e3d29\") " pod="kube-system/kube-proxy-z7b85" Aug 19 00:09:05.878532 kubelet[2674]: I0819 00:09:05.878230 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c2d0763-ecb3-47b8-b907-672b398e3d29-lib-modules\") pod \"kube-proxy-z7b85\" (UID: \"0c2d0763-ecb3-47b8-b907-672b398e3d29\") " pod="kube-system/kube-proxy-z7b85" Aug 19 00:09:05.878532 kubelet[2674]: I0819 00:09:05.878250 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7lv\" (UniqueName: \"kubernetes.io/projected/0c2d0763-ecb3-47b8-b907-672b398e3d29-kube-api-access-7g7lv\") pod \"kube-proxy-z7b85\" (UID: \"0c2d0763-ecb3-47b8-b907-672b398e3d29\") " pod="kube-system/kube-proxy-z7b85" Aug 19 00:09:06.041596 systemd[1]: Created slice kubepods-besteffort-pod9df48282_f048_42f3_b5a7_3a1bbed60315.slice - libcontainer container kubepods-besteffort-pod9df48282_f048_42f3_b5a7_3a1bbed60315.slice. Aug 19 00:09:06.079705 kubelet[2674]: I0819 00:09:06.079660 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57nz\" (UniqueName: \"kubernetes.io/projected/9df48282-f048-42f3-b5a7-3a1bbed60315-kube-api-access-s57nz\") pod \"tigera-operator-5bf8dfcb4-96k4w\" (UID: \"9df48282-f048-42f3-b5a7-3a1bbed60315\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-96k4w" Aug 19 00:09:06.080236 kubelet[2674]: I0819 00:09:06.080187 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9df48282-f048-42f3-b5a7-3a1bbed60315-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-96k4w\" (UID: \"9df48282-f048-42f3-b5a7-3a1bbed60315\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-96k4w" Aug 19 00:09:06.189738 containerd[1530]: time="2025-08-19T00:09:06.189575141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7b85,Uid:0c2d0763-ecb3-47b8-b907-672b398e3d29,Namespace:kube-system,Attempt:0,}" Aug 19 00:09:06.240334 containerd[1530]: time="2025-08-19T00:09:06.240270252Z" level=info msg="connecting to shim dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656" address="unix:///run/containerd/s/7b6d1d26fe034f4f7f81ffff3a5fbf429c4a020c7b7d3b41aa1835df5734f38a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:06.270561 systemd[1]: Started cri-containerd-dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656.scope - libcontainer container dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656. 
Aug 19 00:09:06.294209 containerd[1530]: time="2025-08-19T00:09:06.294152505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7b85,Uid:0c2d0763-ecb3-47b8-b907-672b398e3d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656\"" Aug 19 00:09:06.299360 containerd[1530]: time="2025-08-19T00:09:06.299317435Z" level=info msg="CreateContainer within sandbox \"dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 00:09:06.318112 containerd[1530]: time="2025-08-19T00:09:06.318053849Z" level=info msg="Container 06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:06.329625 containerd[1530]: time="2025-08-19T00:09:06.329487224Z" level=info msg="CreateContainer within sandbox \"dc66553f8f02e02e0a18b8366bb27dd34c488eb4884973ae76917230c5096656\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d\"" Aug 19 00:09:06.330246 containerd[1530]: time="2025-08-19T00:09:06.330218619Z" level=info msg="StartContainer for \"06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d\"" Aug 19 00:09:06.332845 containerd[1530]: time="2025-08-19T00:09:06.332664966Z" level=info msg="connecting to shim 06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d" address="unix:///run/containerd/s/7b6d1d26fe034f4f7f81ffff3a5fbf429c4a020c7b7d3b41aa1835df5734f38a" protocol=ttrpc version=3 Aug 19 00:09:06.346077 containerd[1530]: time="2025-08-19T00:09:06.345930130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-96k4w,Uid:9df48282-f048-42f3-b5a7-3a1bbed60315,Namespace:tigera-operator,Attempt:0,}" Aug 19 00:09:06.356074 systemd[1]: Started cri-containerd-06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d.scope - libcontainer container 06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d. Aug 19 00:09:06.379928 containerd[1530]: time="2025-08-19T00:09:06.379870697Z" level=info msg="connecting to shim 48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732" address="unix:///run/containerd/s/d16fca1bb339cb4078cb9a0933a8d0afbf00a2634a385bdd36a9d48db888ee98" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:06.413106 systemd[1]: Started cri-containerd-48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732.scope - libcontainer container 48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732. 
Aug 19 00:09:06.432646 containerd[1530]: time="2025-08-19T00:09:06.432589436Z" level=info msg="StartContainer for \"06ed36a6b56acb8c46e75647645f32deb85fd62a7fbb688f95897a3c9f1be05d\" returns successfully" Aug 19 00:09:06.470272 containerd[1530]: time="2025-08-19T00:09:06.469848624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-96k4w,Uid:9df48282-f048-42f3-b5a7-3a1bbed60315,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732\"" Aug 19 00:09:06.471451 containerd[1530]: time="2025-08-19T00:09:06.471418375Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 00:09:07.318364 kubelet[2674]: I0819 00:09:07.318180 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z7b85" podStartSLOduration=2.318154006 podStartE2EDuration="2.318154006s" podCreationTimestamp="2025-08-19 00:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:07.318041567 +0000 UTC m=+7.197430425" watchObservedRunningTime="2025-08-19 00:09:07.318154006 +0000 UTC m=+7.197542864" Aug 19 00:09:07.467796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405120031.mount: Deactivated successfully. Aug 19 00:09:07.856789 containerd[1530]: time="2025-08-19T00:09:07.856716459Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:07.857359 containerd[1530]: time="2025-08-19T00:09:07.857324615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 19 00:09:07.858265 containerd[1530]: time="2025-08-19T00:09:07.858206211Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:07.860859 containerd[1530]: time="2025-08-19T00:09:07.860570558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:07.861790 containerd[1530]: time="2025-08-19T00:09:07.861339314Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.389883619s" Aug 19 00:09:07.861790 containerd[1530]: time="2025-08-19T00:09:07.861378594Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 19 00:09:07.866034 containerd[1530]: time="2025-08-19T00:09:07.865995449Z" level=info msg="CreateContainer within sandbox \"48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 00:09:07.876958 containerd[1530]: time="2025-08-19T00:09:07.876900270Z" level=info msg="Container 9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:07.884976 containerd[1530]: time="2025-08-19T00:09:07.884927226Z" level=info msg="CreateContainer 
within sandbox \"48ac9ef44262bcdf03b0e9286633c0d583430bb57a382ba22d4efc3aee737732\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd\"" Aug 19 00:09:07.885900 containerd[1530]: time="2025-08-19T00:09:07.885844181Z" level=info msg="StartContainer for \"9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd\"" Aug 19 00:09:07.887128 containerd[1530]: time="2025-08-19T00:09:07.887061655Z" level=info msg="connecting to shim 9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd" address="unix:///run/containerd/s/d16fca1bb339cb4078cb9a0933a8d0afbf00a2634a385bdd36a9d48db888ee98" protocol=ttrpc version=3 Aug 19 00:09:07.907066 systemd[1]: Started cri-containerd-9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd.scope - libcontainer container 9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd. Aug 19 00:09:07.945424 containerd[1530]: time="2025-08-19T00:09:07.945372420Z" level=info msg="StartContainer for \"9d2a052fb6b1049b39010070b04438269686b11a56a4efbd1bc930a7568143dd\" returns successfully" Aug 19 00:09:08.318282 kubelet[2674]: I0819 00:09:08.317471 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-96k4w" podStartSLOduration=0.9239947 podStartE2EDuration="2.31745366s" podCreationTimestamp="2025-08-19 00:09:06 +0000 UTC" firstStartedPulling="2025-08-19 00:09:06.471014097 +0000 UTC m=+6.350402955" lastFinishedPulling="2025-08-19 00:09:07.864473057 +0000 UTC m=+7.743861915" observedRunningTime="2025-08-19 00:09:08.31732722 +0000 UTC m=+8.196716038" watchObservedRunningTime="2025-08-19 00:09:08.31745366 +0000 UTC m=+8.196842518" Aug 19 00:09:12.507553 update_engine[1518]: I20250819 00:09:12.507441 1518 update_attempter.cc:509] Updating boot flags... Aug 19 00:09:13.685748 sudo[1744]: pam_unix(sudo:session): session closed for user root Aug 19 00:09:13.688870 sshd[1743]: Connection closed by 10.0.0.1 port 45434 Aug 19 00:09:13.688784 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 19 00:09:13.695743 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:45434.service: Deactivated successfully. Aug 19 00:09:13.696276 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Aug 19 00:09:13.699588 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 00:09:13.699837 systemd[1]: session-7.scope: Consumed 8.224s CPU time, 217.9M memory peak. Aug 19 00:09:13.703558 systemd-logind[1514]: Removed session 7. Aug 19 00:09:18.661312 systemd[1]: Created slice kubepods-besteffort-pod204c183b_5d6c_4c97_a70c_ae4f93d497a6.slice - libcontainer container kubepods-besteffort-pod204c183b_5d6c_4c97_a70c_ae4f93d497a6.slice. 
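By this point the tigera-operator image has been pulled, its container has started, and a calico-typha pod is being prepared next. A hedged set of follow-up checks for the operator and the Calico components it installs, using the namespaces that appear in this log:

  # Illustrative only.
  kubectl -n tigera-operator get pods
  kubectl -n calico-system get pods -o wide
  crictl images | grep -E 'tigera|calico'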
Aug 19 00:09:18.765279 kubelet[2674]: I0819 00:09:18.765229 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx957\" (UniqueName: \"kubernetes.io/projected/204c183b-5d6c-4c97-a70c-ae4f93d497a6-kube-api-access-xx957\") pod \"calico-typha-7c46d678f-h8svs\" (UID: \"204c183b-5d6c-4c97-a70c-ae4f93d497a6\") " pod="calico-system/calico-typha-7c46d678f-h8svs" Aug 19 00:09:18.765279 kubelet[2674]: I0819 00:09:18.765281 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204c183b-5d6c-4c97-a70c-ae4f93d497a6-tigera-ca-bundle\") pod \"calico-typha-7c46d678f-h8svs\" (UID: \"204c183b-5d6c-4c97-a70c-ae4f93d497a6\") " pod="calico-system/calico-typha-7c46d678f-h8svs" Aug 19 00:09:18.765685 kubelet[2674]: I0819 00:09:18.765317 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/204c183b-5d6c-4c97-a70c-ae4f93d497a6-typha-certs\") pod \"calico-typha-7c46d678f-h8svs\" (UID: \"204c183b-5d6c-4c97-a70c-ae4f93d497a6\") " pod="calico-system/calico-typha-7c46d678f-h8svs" Aug 19 00:09:18.964799 containerd[1530]: time="2025-08-19T00:09:18.964471873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c46d678f-h8svs,Uid:204c183b-5d6c-4c97-a70c-ae4f93d497a6,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:18.999935 systemd[1]: Created slice kubepods-besteffort-pod2ae50e05_8093_45c0_b07b_ff1b7f1f1a4d.slice - libcontainer container kubepods-besteffort-pod2ae50e05_8093_45c0_b07b_ff1b7f1f1a4d.slice. Aug 19 00:09:19.013959 containerd[1530]: time="2025-08-19T00:09:19.013900840Z" level=info msg="connecting to shim 6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48" address="unix:///run/containerd/s/656ed60a98d4fe7dcefd8633bb8af3354e47809e536a962eed0faaab4ba62f3e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:19.077133 systemd[1]: Started cri-containerd-6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48.scope - libcontainer container 6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48. 
Aug 19 00:09:19.111720 containerd[1530]: time="2025-08-19T00:09:19.111674587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c46d678f-h8svs,Uid:204c183b-5d6c-4c97-a70c-ae4f93d497a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48\"" Aug 19 00:09:19.113714 containerd[1530]: time="2025-08-19T00:09:19.113601301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 00:09:19.168761 kubelet[2674]: I0819 00:09:19.168617 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-cni-bin-dir\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.168761 kubelet[2674]: I0819 00:09:19.168678 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-cni-net-dir\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.168761 kubelet[2674]: I0819 00:09:19.168700 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-var-run-calico\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.168761 kubelet[2674]: I0819 00:09:19.168719 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-xtables-lock\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.169316 kubelet[2674]: I0819 00:09:19.169226 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c52q\" (UniqueName: \"kubernetes.io/projected/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-kube-api-access-7c52q\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.169679 kubelet[2674]: I0819 00:09:19.169485 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-flexvol-driver-host\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.169871 kubelet[2674]: I0819 00:09:19.169823 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-var-lib-calico\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.169999 kubelet[2674]: I0819 00:09:19.169943 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-cni-log-dir\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " 
pod="calico-system/calico-node-scd48" Aug 19 00:09:19.169999 kubelet[2674]: I0819 00:09:19.169972 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-lib-modules\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.170130 kubelet[2674]: I0819 00:09:19.170113 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-policysync\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.170348 kubelet[2674]: I0819 00:09:19.170331 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-node-certs\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.170521 kubelet[2674]: I0819 00:09:19.170470 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d-tigera-ca-bundle\") pod \"calico-node-scd48\" (UID: \"2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d\") " pod="calico-system/calico-node-scd48" Aug 19 00:09:19.191978 kubelet[2674]: E0819 00:09:19.191681 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" Aug 19 00:09:19.271658 kubelet[2674]: I0819 00:09:19.271619 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013-socket-dir\") pod \"csi-node-driver-7gm46\" (UID: \"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013\") " pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:19.272052 kubelet[2674]: I0819 00:09:19.272029 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzjc\" (UniqueName: \"kubernetes.io/projected/7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013-kube-api-access-qrzjc\") pod \"csi-node-driver-7gm46\" (UID: \"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013\") " pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:19.272099 kubelet[2674]: I0819 00:09:19.272086 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013-registration-dir\") pod \"csi-node-driver-7gm46\" (UID: \"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013\") " pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:19.272140 kubelet[2674]: I0819 00:09:19.272124 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013-varrun\") pod \"csi-node-driver-7gm46\" (UID: \"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013\") " pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:19.272169 
kubelet[2674]: I0819 00:09:19.272147 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013-kubelet-dir\") pod \"csi-node-driver-7gm46\" (UID: \"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013\") " pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:19.291696 kubelet[2674]: E0819 00:09:19.291249 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.291696 kubelet[2674]: W0819 00:09:19.291281 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.291696 kubelet[2674]: E0819 00:09:19.291323 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.292504 kubelet[2674]: E0819 00:09:19.292345 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.292751 kubelet[2674]: W0819 00:09:19.292590 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.292751 kubelet[2674]: E0819 00:09:19.292620 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.300278 kubelet[2674]: E0819 00:09:19.300229 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.300278 kubelet[2674]: W0819 00:09:19.300266 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.300440 kubelet[2674]: E0819 00:09:19.300299 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.303953 containerd[1530]: time="2025-08-19T00:09:19.303914372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-scd48,Uid:2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:19.327835 containerd[1530]: time="2025-08-19T00:09:19.327708461Z" level=info msg="connecting to shim f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917" address="unix:///run/containerd/s/726d082633161ad8d0a307ab29685dace84a85eaa1eb1d9939c32936f9e12dba" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:19.363051 systemd[1]: Started cri-containerd-f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917.scope - libcontainer container f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917. 
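The repeated FlexVolume driver-call failures around this point come from the kubelet probing its plugin directory and finding the nodeagent~uds entry without an executable uds binary. With Calico this is commonly transient: calico-node ships an init container that installs that driver into the host path mounted as flexvol-driver-host above. A quick check on the node, assuming shell access and the plugin directory shown in the messages:

  # Illustrative; path copied from the driver-call errors.
  ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/
  # A "uds" executable is expected here once calico-node's flexvol-driver init container has run (assumption).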
Aug 19 00:09:19.373714 kubelet[2674]: E0819 00:09:19.373660 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.373714 kubelet[2674]: W0819 00:09:19.373688 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.373714 kubelet[2674]: E0819 00:09:19.373709 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.374924 kubelet[2674]: E0819 00:09:19.374898 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.374924 kubelet[2674]: W0819 00:09:19.374917 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.375229 kubelet[2674]: E0819 00:09:19.374938 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.375867 kubelet[2674]: E0819 00:09:19.375780 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.375867 kubelet[2674]: W0819 00:09:19.375825 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.375867 kubelet[2674]: E0819 00:09:19.375849 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.376107 kubelet[2674]: E0819 00:09:19.376082 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.376107 kubelet[2674]: W0819 00:09:19.376106 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.376207 kubelet[2674]: E0819 00:09:19.376129 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.376347 kubelet[2674]: E0819 00:09:19.376332 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.376383 kubelet[2674]: W0819 00:09:19.376347 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.376464 kubelet[2674]: E0819 00:09:19.376412 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:19.377016 kubelet[2674]: E0819 00:09:19.376791 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.377016 kubelet[2674]: W0819 00:09:19.376808 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.377016 kubelet[2674]: E0819 00:09:19.376978 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.378435 kubelet[2674]: E0819 00:09:19.378411 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.378435 kubelet[2674]: W0819 00:09:19.378432 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.378522 kubelet[2674]: E0819 00:09:19.378456 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.378893 kubelet[2674]: E0819 00:09:19.378787 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.378893 kubelet[2674]: W0819 00:09:19.378802 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.379146 kubelet[2674]: E0819 00:09:19.379054 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.379146 kubelet[2674]: W0819 00:09:19.379072 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.379146 kubelet[2674]: E0819 00:09:19.379066 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.379146 kubelet[2674]: E0819 00:09:19.379107 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.379376 kubelet[2674]: E0819 00:09:19.379353 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.379376 kubelet[2674]: W0819 00:09:19.379368 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.379376 kubelet[2674]: E0819 00:09:19.379404 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:19.379756 kubelet[2674]: E0819 00:09:19.379526 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.379756 kubelet[2674]: W0819 00:09:19.379537 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.379756 kubelet[2674]: E0819 00:09:19.379563 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.379756 kubelet[2674]: E0819 00:09:19.379692 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.379756 kubelet[2674]: W0819 00:09:19.379711 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.379756 kubelet[2674]: E0819 00:09:19.379730 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.379908 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380544 kubelet[2674]: W0819 00:09:19.379918 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.379929 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.380071 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380544 kubelet[2674]: W0819 00:09:19.380080 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.380094 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.380296 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380544 kubelet[2674]: W0819 00:09:19.380306 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.380316 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:19.380544 kubelet[2674]: E0819 00:09:19.380458 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380994 kubelet[2674]: W0819 00:09:19.380467 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.380475 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.380661 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380994 kubelet[2674]: W0819 00:09:19.380669 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.380711 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.380834 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.380994 kubelet[2674]: W0819 00:09:19.380843 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.380933 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.380994 kubelet[2674]: E0819 00:09:19.381012 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.381461 kubelet[2674]: W0819 00:09:19.381020 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.381461 kubelet[2674]: E0819 00:09:19.381057 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.381951 kubelet[2674]: E0819 00:09:19.381928 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.381951 kubelet[2674]: W0819 00:09:19.381949 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.382165 kubelet[2674]: E0819 00:09:19.382040 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:19.382754 kubelet[2674]: E0819 00:09:19.382733 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.382754 kubelet[2674]: W0819 00:09:19.382751 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.382871 kubelet[2674]: E0819 00:09:19.382788 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.383008 kubelet[2674]: E0819 00:09:19.382963 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.383008 kubelet[2674]: W0819 00:09:19.382975 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.383136 kubelet[2674]: E0819 00:09:19.383122 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.383136 kubelet[2674]: W0819 00:09:19.383131 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.383188 kubelet[2674]: E0819 00:09:19.383141 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.383188 kubelet[2674]: E0819 00:09:19.383169 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.383902 kubelet[2674]: E0819 00:09:19.383875 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.383902 kubelet[2674]: W0819 00:09:19.383895 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.384104 kubelet[2674]: E0819 00:09:19.383917 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.385309 kubelet[2674]: E0819 00:09:19.385276 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.385309 kubelet[2674]: W0819 00:09:19.385306 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.385389 kubelet[2674]: E0819 00:09:19.385324 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:19.394566 kubelet[2674]: E0819 00:09:19.394471 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:19.394566 kubelet[2674]: W0819 00:09:19.394494 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:19.394566 kubelet[2674]: E0819 00:09:19.394525 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:19.411229 containerd[1530]: time="2025-08-19T00:09:19.411173771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-scd48,Uid:2ae50e05-8093-45c0-b07b-ff1b7f1f1a4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\"" Aug 19 00:09:20.609910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711097304.mount: Deactivated successfully. Aug 19 00:09:21.266795 kubelet[2674]: E0819 00:09:21.266730 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" Aug 19 00:09:21.597231 containerd[1530]: time="2025-08-19T00:09:21.597092466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:21.597884 containerd[1530]: time="2025-08-19T00:09:21.597781104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 19 00:09:21.598690 containerd[1530]: time="2025-08-19T00:09:21.598650702Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:21.600783 containerd[1530]: time="2025-08-19T00:09:21.600750976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:21.601729 containerd[1530]: time="2025-08-19T00:09:21.601696613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.488014352s" Aug 19 00:09:21.601788 containerd[1530]: time="2025-08-19T00:09:21.601735333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 19 00:09:21.602778 containerd[1530]: time="2025-08-19T00:09:21.602741091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 00:09:21.623345 containerd[1530]: time="2025-08-19T00:09:21.623285354Z" level=info msg="CreateContainer within sandbox \"6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 00:09:21.631774 containerd[1530]: time="2025-08-19T00:09:21.630965613Z" level=info msg="Container 026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:21.647660 containerd[1530]: time="2025-08-19T00:09:21.647614327Z" level=info msg="CreateContainer within sandbox \"6866c531d88268a9189ecefd176fa5a1932e271f17524fab6e20f369e7184e48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082\"" Aug 19 00:09:21.648194 containerd[1530]: time="2025-08-19T00:09:21.648140046Z" level=info msg="StartContainer for \"026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082\"" Aug 19 00:09:21.649375 containerd[1530]: time="2025-08-19T00:09:21.649282203Z" level=info msg="connecting to shim 026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082" address="unix:///run/containerd/s/656ed60a98d4fe7dcefd8633bb8af3354e47809e536a962eed0faaab4ba62f3e" protocol=ttrpc version=3 Aug 19 00:09:21.673016 systemd[1]: Started cri-containerd-026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082.scope - libcontainer container 026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082. Aug 19 00:09:21.723468 containerd[1530]: time="2025-08-19T00:09:21.723425159Z" level=info msg="StartContainer for \"026b45ebc0652a48491d9604d593058dc902d298f6ab288c41d36df28d9f8082\" returns successfully" Aug 19 00:09:22.357050 kubelet[2674]: I0819 00:09:22.356979 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c46d678f-h8svs" podStartSLOduration=1.8673173140000001 podStartE2EDuration="4.356962982s" podCreationTimestamp="2025-08-19 00:09:18 +0000 UTC" firstStartedPulling="2025-08-19 00:09:19.112969983 +0000 UTC m=+18.992358841" lastFinishedPulling="2025-08-19 00:09:21.602615651 +0000 UTC m=+21.482004509" observedRunningTime="2025-08-19 00:09:22.356742902 +0000 UTC m=+22.236131760" watchObservedRunningTime="2025-08-19 00:09:22.356962982 +0000 UTC m=+22.236351840" Aug 19 00:09:22.393523 kubelet[2674]: E0819 00:09:22.393479 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.393523 kubelet[2674]: W0819 00:09:22.393516 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.393688 kubelet[2674]: E0819 00:09:22.393541 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.393736 kubelet[2674]: E0819 00:09:22.393717 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.393736 kubelet[2674]: W0819 00:09:22.393733 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.393779 kubelet[2674]: E0819 00:09:22.393742 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.393932 kubelet[2674]: E0819 00:09:22.393918 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.393932 kubelet[2674]: W0819 00:09:22.393932 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394006 kubelet[2674]: E0819 00:09:22.393940 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.394114 kubelet[2674]: E0819 00:09:22.394102 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.394114 kubelet[2674]: W0819 00:09:22.394112 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394158 kubelet[2674]: E0819 00:09:22.394120 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.394294 kubelet[2674]: E0819 00:09:22.394282 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.394294 kubelet[2674]: W0819 00:09:22.394293 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394343 kubelet[2674]: E0819 00:09:22.394301 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.394448 kubelet[2674]: E0819 00:09:22.394437 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.394470 kubelet[2674]: W0819 00:09:22.394447 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394470 kubelet[2674]: E0819 00:09:22.394455 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.394650 kubelet[2674]: E0819 00:09:22.394638 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.394674 kubelet[2674]: W0819 00:09:22.394649 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394674 kubelet[2674]: E0819 00:09:22.394658 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.394833 kubelet[2674]: E0819 00:09:22.394804 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.394860 kubelet[2674]: W0819 00:09:22.394833 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.394860 kubelet[2674]: E0819 00:09:22.394843 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.395033 kubelet[2674]: E0819 00:09:22.395010 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395033 kubelet[2674]: W0819 00:09:22.395031 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395087 kubelet[2674]: E0819 00:09:22.395041 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.395227 kubelet[2674]: E0819 00:09:22.395214 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395227 kubelet[2674]: W0819 00:09:22.395224 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395280 kubelet[2674]: E0819 00:09:22.395232 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.395381 kubelet[2674]: E0819 00:09:22.395370 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395403 kubelet[2674]: W0819 00:09:22.395380 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395403 kubelet[2674]: E0819 00:09:22.395396 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.395522 kubelet[2674]: E0819 00:09:22.395512 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395549 kubelet[2674]: W0819 00:09:22.395521 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395549 kubelet[2674]: E0819 00:09:22.395531 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.395685 kubelet[2674]: E0819 00:09:22.395675 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395712 kubelet[2674]: W0819 00:09:22.395685 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395712 kubelet[2674]: E0819 00:09:22.395693 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.395853 kubelet[2674]: E0819 00:09:22.395842 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.395882 kubelet[2674]: W0819 00:09:22.395853 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.395882 kubelet[2674]: E0819 00:09:22.395866 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.396008 kubelet[2674]: E0819 00:09:22.395997 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.396008 kubelet[2674]: W0819 00:09:22.396007 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.396051 kubelet[2674]: E0819 00:09:22.396014 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.401457 kubelet[2674]: E0819 00:09:22.401421 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.401457 kubelet[2674]: W0819 00:09:22.401443 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.401457 kubelet[2674]: E0819 00:09:22.401458 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.401681 kubelet[2674]: E0819 00:09:22.401652 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.401681 kubelet[2674]: W0819 00:09:22.401666 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.401681 kubelet[2674]: E0819 00:09:22.401680 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.401894 kubelet[2674]: E0819 00:09:22.401866 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.401894 kubelet[2674]: W0819 00:09:22.401879 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.401894 kubelet[2674]: E0819 00:09:22.401893 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.402141 kubelet[2674]: E0819 00:09:22.402110 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.402141 kubelet[2674]: W0819 00:09:22.402126 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.402195 kubelet[2674]: E0819 00:09:22.402141 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.402384 kubelet[2674]: E0819 00:09:22.402356 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.402384 kubelet[2674]: W0819 00:09:22.402372 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.402439 kubelet[2674]: E0819 00:09:22.402387 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.402579 kubelet[2674]: E0819 00:09:22.402565 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.402602 kubelet[2674]: W0819 00:09:22.402577 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.402602 kubelet[2674]: E0819 00:09:22.402594 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.402864 kubelet[2674]: E0819 00:09:22.402850 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.402904 kubelet[2674]: W0819 00:09:22.402863 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.402928 kubelet[2674]: E0819 00:09:22.402894 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.403008 kubelet[2674]: E0819 00:09:22.402996 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.403008 kubelet[2674]: W0819 00:09:22.403007 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.403085 kubelet[2674]: E0819 00:09:22.403057 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.403218 kubelet[2674]: E0819 00:09:22.403204 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.403249 kubelet[2674]: W0819 00:09:22.403218 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.403249 kubelet[2674]: E0819 00:09:22.403233 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.403398 kubelet[2674]: E0819 00:09:22.403384 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.403398 kubelet[2674]: W0819 00:09:22.403395 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.403445 kubelet[2674]: E0819 00:09:22.403409 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.403541 kubelet[2674]: E0819 00:09:22.403530 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.403568 kubelet[2674]: W0819 00:09:22.403541 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.403568 kubelet[2674]: E0819 00:09:22.403553 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.403717 kubelet[2674]: E0819 00:09:22.403705 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.403739 kubelet[2674]: W0819 00:09:22.403717 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.403739 kubelet[2674]: E0819 00:09:22.403729 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.403992 kubelet[2674]: E0819 00:09:22.403977 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.404025 kubelet[2674]: W0819 00:09:22.403993 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.404025 kubelet[2674]: E0819 00:09:22.404011 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.404169 kubelet[2674]: E0819 00:09:22.404156 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.404192 kubelet[2674]: W0819 00:09:22.404169 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.404192 kubelet[2674]: E0819 00:09:22.404185 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.404340 kubelet[2674]: E0819 00:09:22.404326 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.404340 kubelet[2674]: W0819 00:09:22.404338 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.404386 kubelet[2674]: E0819 00:09:22.404353 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.404528 kubelet[2674]: E0819 00:09:22.404515 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.404554 kubelet[2674]: W0819 00:09:22.404527 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.404554 kubelet[2674]: E0819 00:09:22.404542 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.404786 kubelet[2674]: E0819 00:09:22.404772 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.404828 kubelet[2674]: W0819 00:09:22.404787 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.404828 kubelet[2674]: E0819 00:09:22.404799 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:09:22.405363 kubelet[2674]: E0819 00:09:22.405319 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:09:22.405363 kubelet[2674]: W0819 00:09:22.405347 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:09:22.405363 kubelet[2674]: E0819 00:09:22.405362 2674 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:09:22.690214 containerd[1530]: time="2025-08-19T00:09:22.690093145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:22.694627 containerd[1530]: time="2025-08-19T00:09:22.694587454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 19 00:09:22.695846 containerd[1530]: time="2025-08-19T00:09:22.695793690Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:22.701435 containerd[1530]: time="2025-08-19T00:09:22.701366076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:22.702116 containerd[1530]: time="2025-08-19T00:09:22.702074714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.099299943s" Aug 19 00:09:22.702116 containerd[1530]: time="2025-08-19T00:09:22.702115474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 19 00:09:22.705593 containerd[1530]: time="2025-08-19T00:09:22.705547905Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 00:09:22.716007 containerd[1530]: time="2025-08-19T00:09:22.715956317Z" level=info msg="Container c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:22.730946 containerd[1530]: time="2025-08-19T00:09:22.730904958Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\"" Aug 19 00:09:22.731451 containerd[1530]: time="2025-08-19T00:09:22.731425277Z" level=info msg="StartContainer for \"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\"" Aug 19 00:09:22.734505 containerd[1530]: time="2025-08-19T00:09:22.734471429Z" level=info msg="connecting to 
shim c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a" address="unix:///run/containerd/s/726d082633161ad8d0a307ab29685dace84a85eaa1eb1d9939c32936f9e12dba" protocol=ttrpc version=3 Aug 19 00:09:22.754019 systemd[1]: Started cri-containerd-c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a.scope - libcontainer container c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a. Aug 19 00:09:22.802477 containerd[1530]: time="2025-08-19T00:09:22.801956371Z" level=info msg="StartContainer for \"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\" returns successfully" Aug 19 00:09:22.864694 systemd[1]: cri-containerd-c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a.scope: Deactivated successfully. Aug 19 00:09:22.885325 containerd[1530]: time="2025-08-19T00:09:22.885197432Z" level=info msg="received exit event container_id:\"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\" id:\"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\" pid:3326 exited_at:{seconds:1755562162 nanos:878067171}" Aug 19 00:09:22.885453 containerd[1530]: time="2025-08-19T00:09:22.885277392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\" id:\"c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a\" pid:3326 exited_at:{seconds:1755562162 nanos:878067171}" Aug 19 00:09:22.929797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c325dd7c201ad4937522b62cc15857c777f634a150efcbf41226a49fc214ab5a-rootfs.mount: Deactivated successfully. Aug 19 00:09:23.266599 kubelet[2674]: E0819 00:09:23.266248 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" Aug 19 00:09:23.346636 kubelet[2674]: I0819 00:09:23.346436 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:23.349058 containerd[1530]: time="2025-08-19T00:09:23.348988449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 00:09:25.267257 kubelet[2674]: E0819 00:09:25.267151 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" Aug 19 00:09:26.925700 containerd[1530]: time="2025-08-19T00:09:26.925644731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:26.926933 containerd[1530]: time="2025-08-19T00:09:26.926906488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 19 00:09:26.927741 containerd[1530]: time="2025-08-19T00:09:26.927710566Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:26.930159 containerd[1530]: time="2025-08-19T00:09:26.930121521Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:26.930725 containerd[1530]: time="2025-08-19T00:09:26.930549400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.581517671s" Aug 19 00:09:26.930725 containerd[1530]: time="2025-08-19T00:09:26.930590359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 19 00:09:26.933011 containerd[1530]: time="2025-08-19T00:09:26.932974034Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 00:09:26.940865 containerd[1530]: time="2025-08-19T00:09:26.940416777Z" level=info msg="Container 41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:26.958916 containerd[1530]: time="2025-08-19T00:09:26.958856576Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\"" Aug 19 00:09:26.959426 containerd[1530]: time="2025-08-19T00:09:26.959404495Z" level=info msg="StartContainer for \"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\"" Aug 19 00:09:26.960789 containerd[1530]: time="2025-08-19T00:09:26.960753692Z" level=info msg="connecting to shim 41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215" address="unix:///run/containerd/s/726d082633161ad8d0a307ab29685dace84a85eaa1eb1d9939c32936f9e12dba" protocol=ttrpc version=3 Aug 19 00:09:26.981999 systemd[1]: Started cri-containerd-41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215.scope - libcontainer container 41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215. Aug 19 00:09:27.018584 containerd[1530]: time="2025-08-19T00:09:27.018532924Z" level=info msg="StartContainer for \"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\" returns successfully" Aug 19 00:09:27.266622 kubelet[2674]: E0819 00:09:27.266519 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" Aug 19 00:09:27.751471 systemd[1]: cri-containerd-41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215.scope: Deactivated successfully. Aug 19 00:09:27.751766 systemd[1]: cri-containerd-41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215.scope: Consumed 479ms CPU time, 182.5M memory peak, 2.2M read from disk, 165.8M written to disk. 
Aug 19 00:09:27.753193 containerd[1530]: time="2025-08-19T00:09:27.753152015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\" id:\"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\" pid:3386 exited_at:{seconds:1755562167 nanos:752801415}" Aug 19 00:09:27.753324 containerd[1530]: time="2025-08-19T00:09:27.753264454Z" level=info msg="received exit event container_id:\"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\" id:\"41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215\" pid:3386 exited_at:{seconds:1755562167 nanos:752801415}" Aug 19 00:09:27.779299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41dcf08869224a01a83f3f9942222db7a19f114f64806975c6d0ad6ab5437215-rootfs.mount: Deactivated successfully. Aug 19 00:09:27.825878 kubelet[2674]: I0819 00:09:27.825561 2674 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 19 00:09:27.889743 systemd[1]: Created slice kubepods-burstable-pod881746fb_5e8f_498c_b589_664afd8f39e4.slice - libcontainer container kubepods-burstable-pod881746fb_5e8f_498c_b589_664afd8f39e4.slice. Aug 19 00:09:27.900332 systemd[1]: Created slice kubepods-besteffort-pod1a02df7e_9cb9_459a_aede_2386a78b244a.slice - libcontainer container kubepods-besteffort-pod1a02df7e_9cb9_459a_aede_2386a78b244a.slice. Aug 19 00:09:27.908722 systemd[1]: Created slice kubepods-besteffort-pod20fc4aab_a0f0_47e6_8649_b56f7b092a7c.slice - libcontainer container kubepods-besteffort-pod20fc4aab_a0f0_47e6_8649_b56f7b092a7c.slice. Aug 19 00:09:27.916874 systemd[1]: Created slice kubepods-besteffort-pod9e9944e5_5c31_47b8_bc20_b9bae23763b0.slice - libcontainer container kubepods-besteffort-pod9e9944e5_5c31_47b8_bc20_b9bae23763b0.slice. Aug 19 00:09:27.924113 systemd[1]: Created slice kubepods-burstable-pod2968b672_0a30_46ea_b465_e350384939ca.slice - libcontainer container kubepods-burstable-pod2968b672_0a30_46ea_b465_e350384939ca.slice. Aug 19 00:09:27.930558 systemd[1]: Created slice kubepods-besteffort-podad4442bf_80b9_4ac4_a91e_7f33b5c10474.slice - libcontainer container kubepods-besteffort-podad4442bf_80b9_4ac4_a91e_7f33b5c10474.slice. Aug 19 00:09:27.937756 systemd[1]: Created slice kubepods-besteffort-pod00f65b9a_10f0_4e4c_9799_a76138297cdc.slice - libcontainer container kubepods-besteffort-pod00f65b9a_10f0_4e4c_9799_a76138297cdc.slice. 
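[editor's note] The systemd "Created slice" messages above encode each pod's QoS class and UID, with the UID's dashes replaced by underscores; that is why the same pod appears as 881746fb-5e8f-498c-b589-664afd8f39e4 in the kubelet volume messages below but as pod881746fb_5e8f_498c_b589_664afd8f39e4 in the slice names. A minimal sketch of that mapping, using UIDs taken from the log (hypothetical helper for illustration, not kubelet's cgroup code):

package main

import (
	"fmt"
	"strings"
)

// sliceName builds the kubepods slice name pattern seen in the log:
// kubepods-<qos>-pod<uid-with-underscores>.slice
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "881746fb-5e8f-498c-b589-664afd8f39e4"))
	fmt.Println(sliceName("besteffort", "00f65b9a-10f0-4e4c-9799-a76138297cdc"))
}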
Aug 19 00:09:28.040673 kubelet[2674]: I0819 00:09:28.040437 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98zl\" (UniqueName: \"kubernetes.io/projected/ad4442bf-80b9-4ac4-a91e-7f33b5c10474-kube-api-access-x98zl\") pod \"goldmane-58fd7646b9-qlkx4\" (UID: \"ad4442bf-80b9-4ac4-a91e-7f33b5c10474\") " pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.040673 kubelet[2674]: I0819 00:09:28.040498 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-ca-bundle\") pod \"whisker-785db985-4g5tw\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " pod="calico-system/whisker-785db985-4g5tw" Aug 19 00:09:28.040673 kubelet[2674]: I0819 00:09:28.040537 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a02df7e-9cb9-459a-aede-2386a78b244a-calico-apiserver-certs\") pod \"calico-apiserver-7566bc84c-fvsmj\" (UID: \"1a02df7e-9cb9-459a-aede-2386a78b244a\") " pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" Aug 19 00:09:28.040673 kubelet[2674]: I0819 00:09:28.040555 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-backend-key-pair\") pod \"whisker-785db985-4g5tw\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " pod="calico-system/whisker-785db985-4g5tw" Aug 19 00:09:28.040673 kubelet[2674]: I0819 00:09:28.040574 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8x2\" (UniqueName: \"kubernetes.io/projected/881746fb-5e8f-498c-b589-664afd8f39e4-kube-api-access-nz8x2\") pod \"coredns-7c65d6cfc9-m8hv5\" (UID: \"881746fb-5e8f-498c-b589-664afd8f39e4\") " pod="kube-system/coredns-7c65d6cfc9-m8hv5" Aug 19 00:09:28.040943 kubelet[2674]: I0819 00:09:28.040592 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts6dj\" (UniqueName: \"kubernetes.io/projected/00f65b9a-10f0-4e4c-9799-a76138297cdc-kube-api-access-ts6dj\") pod \"whisker-785db985-4g5tw\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " pod="calico-system/whisker-785db985-4g5tw" Aug 19 00:09:28.040943 kubelet[2674]: I0819 00:09:28.040611 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4442bf-80b9-4ac4-a91e-7f33b5c10474-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qlkx4\" (UID: \"ad4442bf-80b9-4ac4-a91e-7f33b5c10474\") " pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.040943 kubelet[2674]: I0819 00:09:28.040626 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ad4442bf-80b9-4ac4-a91e-7f33b5c10474-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qlkx4\" (UID: \"ad4442bf-80b9-4ac4-a91e-7f33b5c10474\") " pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.040943 kubelet[2674]: I0819 00:09:28.040642 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmzv\" (UniqueName: 
\"kubernetes.io/projected/9e9944e5-5c31-47b8-bc20-b9bae23763b0-kube-api-access-mrmzv\") pod \"calico-apiserver-7566bc84c-jzq68\" (UID: \"9e9944e5-5c31-47b8-bc20-b9bae23763b0\") " pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" Aug 19 00:09:28.040943 kubelet[2674]: I0819 00:09:28.040710 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4442bf-80b9-4ac4-a91e-7f33b5c10474-config\") pod \"goldmane-58fd7646b9-qlkx4\" (UID: \"ad4442bf-80b9-4ac4-a91e-7f33b5c10474\") " pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.041078 kubelet[2674]: I0819 00:09:28.040750 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881746fb-5e8f-498c-b589-664afd8f39e4-config-volume\") pod \"coredns-7c65d6cfc9-m8hv5\" (UID: \"881746fb-5e8f-498c-b589-664afd8f39e4\") " pod="kube-system/coredns-7c65d6cfc9-m8hv5" Aug 19 00:09:28.041078 kubelet[2674]: I0819 00:09:28.040777 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e9944e5-5c31-47b8-bc20-b9bae23763b0-calico-apiserver-certs\") pod \"calico-apiserver-7566bc84c-jzq68\" (UID: \"9e9944e5-5c31-47b8-bc20-b9bae23763b0\") " pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" Aug 19 00:09:28.041078 kubelet[2674]: I0819 00:09:28.040802 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlgt\" (UniqueName: \"kubernetes.io/projected/2968b672-0a30-46ea-b465-e350384939ca-kube-api-access-ddlgt\") pod \"coredns-7c65d6cfc9-q64vx\" (UID: \"2968b672-0a30-46ea-b465-e350384939ca\") " pod="kube-system/coredns-7c65d6cfc9-q64vx" Aug 19 00:09:28.041078 kubelet[2674]: I0819 00:09:28.040853 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20fc4aab-a0f0-47e6-8649-b56f7b092a7c-tigera-ca-bundle\") pod \"calico-kube-controllers-55b8dfc5c6-j5fk8\" (UID: \"20fc4aab-a0f0-47e6-8649-b56f7b092a7c\") " pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" Aug 19 00:09:28.041078 kubelet[2674]: I0819 00:09:28.040871 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbk6\" (UniqueName: \"kubernetes.io/projected/20fc4aab-a0f0-47e6-8649-b56f7b092a7c-kube-api-access-tmbk6\") pod \"calico-kube-controllers-55b8dfc5c6-j5fk8\" (UID: \"20fc4aab-a0f0-47e6-8649-b56f7b092a7c\") " pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" Aug 19 00:09:28.041192 kubelet[2674]: I0819 00:09:28.040890 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ghm5\" (UniqueName: \"kubernetes.io/projected/1a02df7e-9cb9-459a-aede-2386a78b244a-kube-api-access-9ghm5\") pod \"calico-apiserver-7566bc84c-fvsmj\" (UID: \"1a02df7e-9cb9-459a-aede-2386a78b244a\") " pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" Aug 19 00:09:28.041192 kubelet[2674]: I0819 00:09:28.040905 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2968b672-0a30-46ea-b465-e350384939ca-config-volume\") pod \"coredns-7c65d6cfc9-q64vx\" (UID: \"2968b672-0a30-46ea-b465-e350384939ca\") " 
pod="kube-system/coredns-7c65d6cfc9-q64vx" Aug 19 00:09:28.197527 containerd[1530]: time="2025-08-19T00:09:28.197475509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8hv5,Uid:881746fb-5e8f-498c-b589-664afd8f39e4,Namespace:kube-system,Attempt:0,}" Aug 19 00:09:28.207022 containerd[1530]: time="2025-08-19T00:09:28.206965809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-fvsmj,Uid:1a02df7e-9cb9-459a-aede-2386a78b244a,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:09:28.213097 containerd[1530]: time="2025-08-19T00:09:28.213060956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8dfc5c6-j5fk8,Uid:20fc4aab-a0f0-47e6-8649-b56f7b092a7c,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:28.233686 containerd[1530]: time="2025-08-19T00:09:28.233604033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q64vx,Uid:2968b672-0a30-46ea-b465-e350384939ca,Namespace:kube-system,Attempt:0,}" Aug 19 00:09:28.233862 containerd[1530]: time="2025-08-19T00:09:28.233771393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-jzq68,Uid:9e9944e5-5c31-47b8-bc20-b9bae23763b0,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:09:28.234120 containerd[1530]: time="2025-08-19T00:09:28.234090992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qlkx4,Uid:ad4442bf-80b9-4ac4-a91e-7f33b5c10474,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:28.262097 containerd[1530]: time="2025-08-19T00:09:28.261732414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785db985-4g5tw,Uid:00f65b9a-10f0-4e4c-9799-a76138297cdc,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:28.393046 containerd[1530]: time="2025-08-19T00:09:28.392886581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 00:09:28.719007 containerd[1530]: time="2025-08-19T00:09:28.718849621Z" level=error msg="Failed to destroy network for sandbox \"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.728933 containerd[1530]: time="2025-08-19T00:09:28.728877360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qlkx4,Uid:ad4442bf-80b9-4ac4-a91e-7f33b5c10474,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.732145 kubelet[2674]: E0819 00:09:28.732067 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.733163 kubelet[2674]: E0819 00:09:28.732164 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.733163 kubelet[2674]: E0819 00:09:28.732186 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qlkx4" Aug 19 00:09:28.733163 kubelet[2674]: E0819 00:09:28.732241 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qlkx4_calico-system(ad4442bf-80b9-4ac4-a91e-7f33b5c10474)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qlkx4_calico-system(ad4442bf-80b9-4ac4-a91e-7f33b5c10474)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b365667c8ccc2b4cd31417d6af0c194f7d073a2e1eedf57679031e9aac5c282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qlkx4" podUID="ad4442bf-80b9-4ac4-a91e-7f33b5c10474" Aug 19 00:09:28.734354 containerd[1530]: time="2025-08-19T00:09:28.734253509Z" level=error msg="Failed to destroy network for sandbox \"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.736063 containerd[1530]: time="2025-08-19T00:09:28.736020545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q64vx,Uid:2968b672-0a30-46ea-b465-e350384939ca,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.736785 kubelet[2674]: E0819 00:09:28.736654 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.736785 kubelet[2674]: E0819 00:09:28.736758 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q64vx" Aug 19 00:09:28.736991 kubelet[2674]: E0819 00:09:28.736773 
2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q64vx" Aug 19 00:09:28.737095 kubelet[2674]: E0819 00:09:28.737072 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q64vx_kube-system(2968b672-0a30-46ea-b465-e350384939ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q64vx_kube-system(2968b672-0a30-46ea-b465-e350384939ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d4ea24d232544ef4b74a88917a700f306ffe216ee44477f157d02ff3df07824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q64vx" podUID="2968b672-0a30-46ea-b465-e350384939ca" Aug 19 00:09:28.741653 containerd[1530]: time="2025-08-19T00:09:28.741585933Z" level=error msg="Failed to destroy network for sandbox \"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.744729 containerd[1530]: time="2025-08-19T00:09:28.744102528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8dfc5c6-j5fk8,Uid:20fc4aab-a0f0-47e6-8649-b56f7b092a7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.744932 containerd[1530]: time="2025-08-19T00:09:28.744597127Z" level=error msg="Failed to destroy network for sandbox \"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.745036 kubelet[2674]: E0819 00:09:28.744991 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.745097 kubelet[2674]: E0819 00:09:28.745061 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" Aug 19 00:09:28.745097 kubelet[2674]: E0819 00:09:28.745082 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" Aug 19 00:09:28.745483 kubelet[2674]: E0819 00:09:28.745123 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b8dfc5c6-j5fk8_calico-system(20fc4aab-a0f0-47e6-8649-b56f7b092a7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b8dfc5c6-j5fk8_calico-system(20fc4aab-a0f0-47e6-8649-b56f7b092a7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cd13ab4d1f056662c205e13e3c46dffc2291cd121b23c2a0f7cff2a3b3bf805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" podUID="20fc4aab-a0f0-47e6-8649-b56f7b092a7c" Aug 19 00:09:28.746674 containerd[1530]: time="2025-08-19T00:09:28.746599243Z" level=error msg="Failed to destroy network for sandbox \"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.748644 containerd[1530]: time="2025-08-19T00:09:28.748566519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-fvsmj,Uid:1a02df7e-9cb9-459a-aede-2386a78b244a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.749188 kubelet[2674]: E0819 00:09:28.749127 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.749394 kubelet[2674]: E0819 00:09:28.749311 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" Aug 19 00:09:28.749394 kubelet[2674]: E0819 00:09:28.749340 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" Aug 19 00:09:28.749552 kubelet[2674]: E0819 00:09:28.749492 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7566bc84c-fvsmj_calico-apiserver(1a02df7e-9cb9-459a-aede-2386a78b244a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7566bc84c-fvsmj_calico-apiserver(1a02df7e-9cb9-459a-aede-2386a78b244a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce655df6987a434497274eeb4403604f3a6f93497ffc4173148ad6fe13d7e2e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" podUID="1a02df7e-9cb9-459a-aede-2386a78b244a" Aug 19 00:09:28.751313 containerd[1530]: time="2025-08-19T00:09:28.751262873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8hv5,Uid:881746fb-5e8f-498c-b589-664afd8f39e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.752308 kubelet[2674]: E0819 00:09:28.752268 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.752383 kubelet[2674]: E0819 00:09:28.752328 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m8hv5" Aug 19 00:09:28.752383 kubelet[2674]: E0819 00:09:28.752348 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m8hv5" Aug 19 00:09:28.752431 kubelet[2674]: E0819 00:09:28.752385 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m8hv5_kube-system(881746fb-5e8f-498c-b589-664afd8f39e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m8hv5_kube-system(881746fb-5e8f-498c-b589-664afd8f39e4)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"bbb314efdde8a2a6a9f1c24b14bcb24bb9bad5ccf71c282e872acfbaf600cfd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m8hv5" podUID="881746fb-5e8f-498c-b589-664afd8f39e4" Aug 19 00:09:28.756517 containerd[1530]: time="2025-08-19T00:09:28.756450182Z" level=error msg="Failed to destroy network for sandbox \"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.759654 containerd[1530]: time="2025-08-19T00:09:28.759568056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-jzq68,Uid:9e9944e5-5c31-47b8-bc20-b9bae23763b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.759891 kubelet[2674]: E0819 00:09:28.759849 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.759939 kubelet[2674]: E0819 00:09:28.759907 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" Aug 19 00:09:28.759939 kubelet[2674]: E0819 00:09:28.759925 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" Aug 19 00:09:28.760045 kubelet[2674]: E0819 00:09:28.759971 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7566bc84c-jzq68_calico-apiserver(9e9944e5-5c31-47b8-bc20-b9bae23763b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7566bc84c-jzq68_calico-apiserver(9e9944e5-5c31-47b8-bc20-b9bae23763b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"745462caf5a41185352fff8539b9cfe163bc4ac367059e492ac090aca7c3e4fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" podUID="9e9944e5-5c31-47b8-bc20-b9bae23763b0" Aug 19 00:09:28.761019 containerd[1530]: time="2025-08-19T00:09:28.760979813Z" level=error msg="Failed to destroy network for sandbox \"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.762373 containerd[1530]: time="2025-08-19T00:09:28.762321850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785db985-4g5tw,Uid:00f65b9a-10f0-4e4c-9799-a76138297cdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.762600 kubelet[2674]: E0819 00:09:28.762551 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:28.762672 kubelet[2674]: E0819 00:09:28.762610 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785db985-4g5tw" Aug 19 00:09:28.762672 kubelet[2674]: E0819 00:09:28.762629 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785db985-4g5tw" Aug 19 00:09:28.762753 kubelet[2674]: E0819 00:09:28.762670 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-785db985-4g5tw_calico-system(00f65b9a-10f0-4e4c-9799-a76138297cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-785db985-4g5tw_calico-system(00f65b9a-10f0-4e4c-9799-a76138297cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"052f8c3b8d1ce2ba02a50710080bcd727f4d2cfcc7269a9c26e6cef1271c8db1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-785db985-4g5tw" podUID="00f65b9a-10f0-4e4c-9799-a76138297cdc" Aug 19 00:09:29.273045 systemd[1]: Created slice kubepods-besteffort-pod7dfdfb8d_69a9_402c_a1bf_fd66f5c4d013.slice - libcontainer container kubepods-besteffort-pod7dfdfb8d_69a9_402c_a1bf_fd66f5c4d013.slice. 
Aug 19 00:09:29.284372 containerd[1530]: time="2025-08-19T00:09:29.284096342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gm46,Uid:7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:29.351207 containerd[1530]: time="2025-08-19T00:09:29.350697928Z" level=error msg="Failed to destroy network for sandbox \"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:29.353555 systemd[1]: run-netns-cni\x2dce9b2d48\x2dd319\x2da4f9\x2d514b\x2da0c1ebdf0697.mount: Deactivated successfully. Aug 19 00:09:29.354311 containerd[1530]: time="2025-08-19T00:09:29.354004041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gm46,Uid:7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:29.354549 kubelet[2674]: E0819 00:09:29.354214 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:09:29.354549 kubelet[2674]: E0819 00:09:29.354286 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:29.354549 kubelet[2674]: E0819 00:09:29.354330 2674 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gm46" Aug 19 00:09:29.354654 kubelet[2674]: E0819 00:09:29.354378 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gm46_calico-system(7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gm46_calico-system(7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19f23ee4ffc6c9032304e7bac0649cc19ff96a43735250534c54017fd384c6ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gm46" podUID="7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013" 
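Every sandbox attempt above (goldmane, both coredns pods, calico-kube-controllers, both calico-apiserver pods, whisker, csi-node-driver) fails on the same missing file, /var/lib/calico/nodename, which the CNI plugin needs and which only appears once calico-node is running with /var/lib/calico mounted, exactly as the error text suggests. A minimal Go sketch of that kind of readiness check; the helper name and error wording are illustrative, not Calico's actual source:

    package main

    import (
        "fmt"
        "os"
    )

    // nodenamePath is the file the logged error points at; it exists once
    // calico-node is up and /var/lib/calico is visible to the CNI plugin.
    const nodenamePath = "/var/lib/calico/nodename"

    // checkCalicoReady reproduces the failure mode in the log: a failed stat
    // here means pod networking cannot be set up yet.
    func checkCalicoReady() error {
        if _, err := os.Stat(nodenamePath); err != nil {
            return fmt.Errorf("stat %s failed: %w (is calico-node running with /var/lib/calico mounted?)", nodenamePath, err)
        }
        return nil
    }

    func main() {
        if err := checkCalicoReady(); err != nil {
            fmt.Println("sandbox setup would fail:", err)
            return
        }
        fmt.Println("nodename present; CNI ADD calls can proceed")
    }

Once the calico-node container starts below (00:09:31), sandbox setup begins to succeed, as seen for the replacement whisker pod at 00:09:33.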
Aug 19 00:09:31.599931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386438439.mount: Deactivated successfully. Aug 19 00:09:31.683621 containerd[1530]: time="2025-08-19T00:09:31.663939943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 19 00:09:31.683621 containerd[1530]: time="2025-08-19T00:09:31.667614936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.274670795s" Aug 19 00:09:31.684113 containerd[1530]: time="2025-08-19T00:09:31.683651466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 19 00:09:31.684113 containerd[1530]: time="2025-08-19T00:09:31.676923758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:31.684431 containerd[1530]: time="2025-08-19T00:09:31.684367904Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:31.684877 containerd[1530]: time="2025-08-19T00:09:31.684844543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:31.692176 containerd[1530]: time="2025-08-19T00:09:31.692131690Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 00:09:31.707540 containerd[1530]: time="2025-08-19T00:09:31.707110741Z" level=info msg="Container bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:31.727765 containerd[1530]: time="2025-08-19T00:09:31.727702543Z" level=info msg="CreateContainer within sandbox \"f6f8154ff8c22ed22d8944f0c4de1db47e32e7ae0ba47f022bd1b5372777c917\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\"" Aug 19 00:09:31.728522 containerd[1530]: time="2025-08-19T00:09:31.728473741Z" level=info msg="StartContainer for \"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\"" Aug 19 00:09:31.730070 containerd[1530]: time="2025-08-19T00:09:31.730037618Z" level=info msg="connecting to shim bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e" address="unix:///run/containerd/s/726d082633161ad8d0a307ab29685dace84a85eaa1eb1d9939c32936f9e12dba" protocol=ttrpc version=3 Aug 19 00:09:31.755030 systemd[1]: Started cri-containerd-bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e.scope - libcontainer container bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e. Aug 19 00:09:31.805997 containerd[1530]: time="2025-08-19T00:09:31.805536836Z" level=info msg="StartContainer for \"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\" returns successfully" Aug 19 00:09:32.014570 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Aug 19 00:09:32.014709 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 19 00:09:32.385746 kubelet[2674]: I0819 00:09:32.385600 2674 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-backend-key-pair\") pod \"00f65b9a-10f0-4e4c-9799-a76138297cdc\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " Aug 19 00:09:32.385746 kubelet[2674]: I0819 00:09:32.385664 2674 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-ca-bundle\") pod \"00f65b9a-10f0-4e4c-9799-a76138297cdc\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " Aug 19 00:09:32.385746 kubelet[2674]: I0819 00:09:32.385693 2674 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts6dj\" (UniqueName: \"kubernetes.io/projected/00f65b9a-10f0-4e4c-9799-a76138297cdc-kube-api-access-ts6dj\") pod \"00f65b9a-10f0-4e4c-9799-a76138297cdc\" (UID: \"00f65b9a-10f0-4e4c-9799-a76138297cdc\") " Aug 19 00:09:32.416896 kubelet[2674]: I0819 00:09:32.416603 2674 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "00f65b9a-10f0-4e4c-9799-a76138297cdc" (UID: "00f65b9a-10f0-4e4c-9799-a76138297cdc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 00:09:32.418994 kubelet[2674]: I0819 00:09:32.418953 2674 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f65b9a-10f0-4e4c-9799-a76138297cdc-kube-api-access-ts6dj" (OuterVolumeSpecName: "kube-api-access-ts6dj") pod "00f65b9a-10f0-4e4c-9799-a76138297cdc" (UID: "00f65b9a-10f0-4e4c-9799-a76138297cdc"). InnerVolumeSpecName "kube-api-access-ts6dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 00:09:32.419386 kubelet[2674]: I0819 00:09:32.419048 2674 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "00f65b9a-10f0-4e4c-9799-a76138297cdc" (UID: "00f65b9a-10f0-4e4c-9799-a76138297cdc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 19 00:09:32.440888 systemd[1]: Removed slice kubepods-besteffort-pod00f65b9a_10f0_4e4c_9799_a76138297cdc.slice - libcontainer container kubepods-besteffort-pod00f65b9a_10f0_4e4c_9799_a76138297cdc.slice. 
Aug 19 00:09:32.449219 kubelet[2674]: I0819 00:09:32.449017 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-scd48" podStartSLOduration=2.176231796 podStartE2EDuration="14.448997811s" podCreationTimestamp="2025-08-19 00:09:18 +0000 UTC" firstStartedPulling="2025-08-19 00:09:19.412271728 +0000 UTC m=+19.291660586" lastFinishedPulling="2025-08-19 00:09:31.685037703 +0000 UTC m=+31.564426601" observedRunningTime="2025-08-19 00:09:32.445506057 +0000 UTC m=+32.324894955" watchObservedRunningTime="2025-08-19 00:09:32.448997811 +0000 UTC m=+32.328386629" Aug 19 00:09:32.486681 kubelet[2674]: I0819 00:09:32.486260 2674 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 19 00:09:32.486681 kubelet[2674]: I0819 00:09:32.486308 2674 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts6dj\" (UniqueName: \"kubernetes.io/projected/00f65b9a-10f0-4e4c-9799-a76138297cdc-kube-api-access-ts6dj\") on node \"localhost\" DevicePath \"\"" Aug 19 00:09:32.486681 kubelet[2674]: I0819 00:09:32.486327 2674 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00f65b9a-10f0-4e4c-9799-a76138297cdc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 19 00:09:32.498173 systemd[1]: Created slice kubepods-besteffort-pod305b190d_d8cf_48b0_a6cb_37b213d3c560.slice - libcontainer container kubepods-besteffort-pod305b190d_d8cf_48b0_a6cb_37b213d3c560.slice. Aug 19 00:09:32.600929 systemd[1]: var-lib-kubelet-pods-00f65b9a\x2d10f0\x2d4e4c\x2d9799\x2da76138297cdc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts6dj.mount: Deactivated successfully. Aug 19 00:09:32.601035 systemd[1]: var-lib-kubelet-pods-00f65b9a\x2d10f0\x2d4e4c\x2d9799\x2da76138297cdc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
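The pod_startup_latency_tracker figures above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure minus the image pull time taken from the monotonic (m=+) offsets. A short Go check using the values from the calico-node-scd48 line; it only restates the log's own numbers and is not kubelet code:

    package main

    import "fmt"

    func main() {
        // Values copied from the kubelet log line above, in seconds.
        e2e := 14.448997811       // podStartE2EDuration: 00:09:32.448997811 minus creation at 00:09:18
        pullStart := 19.291660586 // firstStartedPulling, monotonic m=+ offset
        pullEnd := 31.564426601   // lastFinishedPulling, monotonic m=+ offset

        slo := e2e - (pullEnd - pullStart)
        fmt.Printf("podStartSLOduration ~ %.9f\n", slo) // 2.176231796, matching the log
    }

The later whisker-644c68f7bc-k5tft line (00:09:36) checks out the same way: 4.456709791 minus (36.071308639 - 33.493793937) gives 1.879195089.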
Aug 19 00:09:32.687579 kubelet[2674]: I0819 00:09:32.687449 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/305b190d-d8cf-48b0-a6cb-37b213d3c560-whisker-backend-key-pair\") pod \"whisker-644c68f7bc-k5tft\" (UID: \"305b190d-d8cf-48b0-a6cb-37b213d3c560\") " pod="calico-system/whisker-644c68f7bc-k5tft" Aug 19 00:09:32.687579 kubelet[2674]: I0819 00:09:32.687505 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/305b190d-d8cf-48b0-a6cb-37b213d3c560-whisker-ca-bundle\") pod \"whisker-644c68f7bc-k5tft\" (UID: \"305b190d-d8cf-48b0-a6cb-37b213d3c560\") " pod="calico-system/whisker-644c68f7bc-k5tft" Aug 19 00:09:32.687579 kubelet[2674]: I0819 00:09:32.687529 2674 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfx2\" (UniqueName: \"kubernetes.io/projected/305b190d-d8cf-48b0-a6cb-37b213d3c560-kube-api-access-sqfx2\") pod \"whisker-644c68f7bc-k5tft\" (UID: \"305b190d-d8cf-48b0-a6cb-37b213d3c560\") " pod="calico-system/whisker-644c68f7bc-k5tft" Aug 19 00:09:33.103176 containerd[1530]: time="2025-08-19T00:09:33.103117943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-644c68f7bc-k5tft,Uid:305b190d-d8cf-48b0-a6cb-37b213d3c560,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:33.382472 systemd-networkd[1441]: calife1852edb5b: Link UP Aug 19 00:09:33.383006 systemd-networkd[1441]: calife1852edb5b: Gained carrier Aug 19 00:09:33.397498 containerd[1530]: 2025-08-19 00:09:33.170 [INFO][3759] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 00:09:33.397498 containerd[1530]: 2025-08-19 00:09:33.230 [INFO][3759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--644c68f7bc--k5tft-eth0 whisker-644c68f7bc- calico-system 305b190d-d8cf-48b0-a6cb-37b213d3c560 884 0 2025-08-19 00:09:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:644c68f7bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-644c68f7bc-k5tft eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife1852edb5b [] [] }} ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-" Aug 19 00:09:33.397498 containerd[1530]: 2025-08-19 00:09:33.231 [INFO][3759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.397498 containerd[1530]: 2025-08-19 00:09:33.332 [INFO][3774] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" HandleID="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Workload="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.332 [INFO][3774] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" HandleID="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Workload="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-644c68f7bc-k5tft", "timestamp":"2025-08-19 00:09:33.332521057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.332 [INFO][3774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.332 [INFO][3774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.333 [INFO][3774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.343 [INFO][3774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" host="localhost" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.350 [INFO][3774] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.355 [INFO][3774] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.357 [INFO][3774] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.359 [INFO][3774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:33.397699 containerd[1530]: 2025-08-19 00:09:33.359 [INFO][3774] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" host="localhost" Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.361 [INFO][3774] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29 Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.364 [INFO][3774] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" host="localhost" Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.369 [INFO][3774] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" host="localhost" Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.369 [INFO][3774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" host="localhost" Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.369 [INFO][3774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:33.397933 containerd[1530]: 2025-08-19 00:09:33.369 [INFO][3774] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" HandleID="k8s-pod-network.6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Workload="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.398037 containerd[1530]: 2025-08-19 00:09:33.372 [INFO][3759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--644c68f7bc--k5tft-eth0", GenerateName:"whisker-644c68f7bc-", Namespace:"calico-system", SelfLink:"", UID:"305b190d-d8cf-48b0-a6cb-37b213d3c560", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"644c68f7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-644c68f7bc-k5tft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife1852edb5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:33.398037 containerd[1530]: 2025-08-19 00:09:33.372 [INFO][3759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.398105 containerd[1530]: 2025-08-19 00:09:33.372 [INFO][3759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife1852edb5b ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.398105 containerd[1530]: 2025-08-19 00:09:33.383 [INFO][3759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.398146 containerd[1530]: 2025-08-19 00:09:33.383 [INFO][3759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--644c68f7bc--k5tft-eth0", GenerateName:"whisker-644c68f7bc-", Namespace:"calico-system", SelfLink:"", UID:"305b190d-d8cf-48b0-a6cb-37b213d3c560", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"644c68f7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29", Pod:"whisker-644c68f7bc-k5tft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife1852edb5b", MAC:"9a:98:9d:5d:66:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:33.398188 containerd[1530]: 2025-08-19 00:09:33.394 [INFO][3759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" Namespace="calico-system" Pod="whisker-644c68f7bc-k5tft" WorkloadEndpoint="localhost-k8s-whisker--644c68f7bc--k5tft-eth0" Aug 19 00:09:33.429667 kubelet[2674]: I0819 00:09:33.429616 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:33.433598 containerd[1530]: time="2025-08-19T00:09:33.433296199Z" level=info msg="connecting to shim 6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29" address="unix:///run/containerd/s/4fab85b8cc7c2ad446a360a1002dac3280d7c645e95d49458cd6b3ff0fd78072" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:33.485111 systemd[1]: Started cri-containerd-6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29.scope - libcontainer container 6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29. 
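The IPAM trace above shows host "localhost" already holding an affine block, 192.168.88.128/26, and the first address claimed from it, 192.168.88.129, being written onto the workload endpoint as a /32 behind the calife1852edb5b interface. A small Go check of those values using the standard library; it only restates what the log already records:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address taken from the ipam/ipam.go lines above.
        block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses: .128 through .191
        addr := netip.MustParseAddr("192.168.88.129")

        fmt.Println("address lies in the affine block:", block.Contains(addr)) // true

        // The endpoint's IPNetworks entry is the same address as a single-host prefix.
        fmt.Println("endpoint IPNetworks entry:", netip.PrefixFrom(addr, 32)) // 192.168.88.129/32
    }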
Aug 19 00:09:33.530092 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:33.608495 containerd[1530]: time="2025-08-19T00:09:33.608446609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-644c68f7bc-k5tft,Uid:305b190d-d8cf-48b0-a6cb-37b213d3c560,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29\"" Aug 19 00:09:33.621833 containerd[1530]: time="2025-08-19T00:09:33.621781346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 00:09:34.269836 kubelet[2674]: I0819 00:09:34.269768 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f65b9a-10f0-4e4c-9799-a76138297cdc" path="/var/lib/kubelet/pods/00f65b9a-10f0-4e4c-9799-a76138297cdc/volumes" Aug 19 00:09:34.579298 kubelet[2674]: I0819 00:09:34.579166 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:34.587151 containerd[1530]: time="2025-08-19T00:09:34.587098509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:34.587784 containerd[1530]: time="2025-08-19T00:09:34.587742148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 19 00:09:34.588796 containerd[1530]: time="2025-08-19T00:09:34.588759546Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:34.593280 containerd[1530]: time="2025-08-19T00:09:34.593203098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:34.594231 containerd[1530]: time="2025-08-19T00:09:34.594065817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 972.225431ms" Aug 19 00:09:34.594231 containerd[1530]: time="2025-08-19T00:09:34.594104537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 19 00:09:34.603495 containerd[1530]: time="2025-08-19T00:09:34.603432641Z" level=info msg="CreateContainer within sandbox \"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 00:09:34.630337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount308417012.mount: Deactivated successfully. 
Aug 19 00:09:34.632092 containerd[1530]: time="2025-08-19T00:09:34.632015872Z" level=info msg="Container cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:34.640358 containerd[1530]: time="2025-08-19T00:09:34.640309137Z" level=info msg="CreateContainer within sandbox \"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da\"" Aug 19 00:09:34.641462 containerd[1530]: time="2025-08-19T00:09:34.641047736Z" level=info msg="StartContainer for \"cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da\"" Aug 19 00:09:34.642416 containerd[1530]: time="2025-08-19T00:09:34.642381894Z" level=info msg="connecting to shim cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da" address="unix:///run/containerd/s/4fab85b8cc7c2ad446a360a1002dac3280d7c645e95d49458cd6b3ff0fd78072" protocol=ttrpc version=3 Aug 19 00:09:34.671440 systemd[1]: Started cri-containerd-cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da.scope - libcontainer container cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da. Aug 19 00:09:34.758079 containerd[1530]: time="2025-08-19T00:09:34.757775176Z" level=info msg="StartContainer for \"cdb951e390a9f0febb277a19d614af91026808eaba1a417242978c82f69fd6da\" returns successfully" Aug 19 00:09:34.760598 containerd[1530]: time="2025-08-19T00:09:34.760547931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 00:09:34.847013 systemd-networkd[1441]: calife1852edb5b: Gained IPv6LL Aug 19 00:09:35.113132 systemd-networkd[1441]: vxlan.calico: Link UP Aug 19 00:09:35.113138 systemd-networkd[1441]: vxlan.calico: Gained carrier Aug 19 00:09:36.160305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193503017.mount: Deactivated successfully. 
Aug 19 00:09:36.186067 containerd[1530]: time="2025-08-19T00:09:36.186016511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:36.187063 containerd[1530]: time="2025-08-19T00:09:36.186915869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 19 00:09:36.187872 containerd[1530]: time="2025-08-19T00:09:36.187841508Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:36.190056 containerd[1530]: time="2025-08-19T00:09:36.190014024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:36.191047 containerd[1530]: time="2025-08-19T00:09:36.191017382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.430425811s" Aug 19 00:09:36.191113 containerd[1530]: time="2025-08-19T00:09:36.191048662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 19 00:09:36.193958 containerd[1530]: time="2025-08-19T00:09:36.193926338Z" level=info msg="CreateContainer within sandbox \"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 00:09:36.201850 containerd[1530]: time="2025-08-19T00:09:36.201043886Z" level=info msg="Container 7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:36.208602 containerd[1530]: time="2025-08-19T00:09:36.208552674Z" level=info msg="CreateContainer within sandbox \"6cf63aafba7821f7797763a42d1800145961955326c7277f27f8d3c6e711cd29\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f\"" Aug 19 00:09:36.209277 containerd[1530]: time="2025-08-19T00:09:36.209203073Z" level=info msg="StartContainer for \"7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f\"" Aug 19 00:09:36.210220 containerd[1530]: time="2025-08-19T00:09:36.210196311Z" level=info msg="connecting to shim 7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f" address="unix:///run/containerd/s/4fab85b8cc7c2ad446a360a1002dac3280d7c645e95d49458cd6b3ff0fd78072" protocol=ttrpc version=3 Aug 19 00:09:36.237024 systemd[1]: Started cri-containerd-7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f.scope - libcontainer container 7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f. 
Aug 19 00:09:36.283153 containerd[1530]: time="2025-08-19T00:09:36.283104553Z" level=info msg="StartContainer for \"7737c2d90f3ec1966c3c0e443aeb6e3d124b7050908c87d19e70ee2439911b8f\" returns successfully" Aug 19 00:09:36.457082 kubelet[2674]: I0819 00:09:36.456729 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-644c68f7bc-k5tft" podStartSLOduration=1.879195089 podStartE2EDuration="4.456709791s" podCreationTimestamp="2025-08-19 00:09:32 +0000 UTC" firstStartedPulling="2025-08-19 00:09:33.614405119 +0000 UTC m=+33.493793937" lastFinishedPulling="2025-08-19 00:09:36.191919781 +0000 UTC m=+36.071308639" observedRunningTime="2025-08-19 00:09:36.456379912 +0000 UTC m=+36.335768810" watchObservedRunningTime="2025-08-19 00:09:36.456709791 +0000 UTC m=+36.336098649" Aug 19 00:09:36.511989 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Aug 19 00:09:39.965683 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:40516.service - OpenSSH per-connection server daemon (10.0.0.1:40516). Aug 19 00:09:40.027044 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 40516 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:09:40.028647 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:09:40.035732 systemd-logind[1514]: New session 8 of user core. Aug 19 00:09:40.045031 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 00:09:40.260175 sshd[4163]: Connection closed by 10.0.0.1 port 40516 Aug 19 00:09:40.260451 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Aug 19 00:09:40.264970 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:40516.service: Deactivated successfully. Aug 19 00:09:40.268481 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 00:09:40.270020 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Aug 19 00:09:40.270452 containerd[1530]: time="2025-08-19T00:09:40.270413215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-fvsmj,Uid:1a02df7e-9cb9-459a-aede-2386a78b244a,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:09:40.271757 systemd-logind[1514]: Removed session 8. 
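The pod_startup_latency_tracker entry above is internally consistent: the E2E duration is the gap from podCreationTimestamp to the observed running time, and the SLO figure matches the E2E duration with the image-pull window (firstStartedPulling to lastFinishedPulling, taken from the monotonic m=+ offsets) removed. A short Python check using values copied from that entry:

    # Values copied from the kubelet pod_startup_latency_tracker entry above.
    pod_created        = 32.0            # podCreationTimestamp: 00:09:32 (whole second)
    observed_running   = 36.456709791    # watchObservedRunningTime: 00:09:36.456709791
    first_started_pull = 33.493793937    # firstStartedPulling, monotonic m=+ offset
    last_finished_pull = 36.071308639    # lastFinishedPulling, monotonic m=+ offset

    e2e  = observed_running - pod_created
    pull = last_finished_pull - first_started_pull
    print(f"podStartE2EDuration ~ {e2e:.9f}s")         # 4.456709791s, as logged
    print(f"podStartSLOduration ~ {e2e - pull:.9f}s")  # 1.879195089s, as logged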
Aug 19 00:09:40.273936 containerd[1530]: time="2025-08-19T00:09:40.273894610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-jzq68,Uid:9e9944e5-5c31-47b8-bc20-b9bae23763b0,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:09:40.418957 systemd-networkd[1441]: cali6ba6671b3e7: Link UP Aug 19 00:09:40.419190 systemd-networkd[1441]: cali6ba6671b3e7: Gained carrier Aug 19 00:09:40.438649 containerd[1530]: 2025-08-19 00:09:40.329 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0 calico-apiserver-7566bc84c- calico-apiserver 9e9944e5-5c31-47b8-bc20-b9bae23763b0 817 0 2025-08-19 00:09:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7566bc84c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7566bc84c-jzq68 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ba6671b3e7 [] [] }} ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-" Aug 19 00:09:40.438649 containerd[1530]: 2025-08-19 00:09:40.329 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.438649 containerd[1530]: 2025-08-19 00:09:40.373 [INFO][4219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" HandleID="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Workload="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.374 [INFO][4219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" HandleID="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Workload="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7566bc84c-jzq68", "timestamp":"2025-08-19 00:09:40.373839463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.374 [INFO][4219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.374 [INFO][4219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.374 [INFO][4219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.384 [INFO][4219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" host="localhost" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.389 [INFO][4219] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.396 [INFO][4219] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.398 [INFO][4219] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.400 [INFO][4219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:40.438957 containerd[1530]: 2025-08-19 00:09:40.400 [INFO][4219] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" host="localhost" Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.402 [INFO][4219] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.405 [INFO][4219] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" host="localhost" Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4219] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" host="localhost" Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" host="localhost" Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:40.439259 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" HandleID="k8s-pod-network.2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Workload="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.439369 containerd[1530]: 2025-08-19 00:09:40.415 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0", GenerateName:"calico-apiserver-7566bc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e9944e5-5c31-47b8-bc20-b9bae23763b0", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7566bc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7566bc84c-jzq68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ba6671b3e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:40.439421 containerd[1530]: 2025-08-19 00:09:40.415 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.439421 containerd[1530]: 2025-08-19 00:09:40.415 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ba6671b3e7 ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.439421 containerd[1530]: 2025-08-19 00:09:40.418 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.439757 containerd[1530]: 2025-08-19 00:09:40.419 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0", GenerateName:"calico-apiserver-7566bc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e9944e5-5c31-47b8-bc20-b9bae23763b0", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7566bc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa", Pod:"calico-apiserver-7566bc84c-jzq68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ba6671b3e7", MAC:"de:48:bc:d4:24:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:40.439867 containerd[1530]: 2025-08-19 00:09:40.430 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-jzq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--jzq68-eth0" Aug 19 00:09:40.510116 containerd[1530]: time="2025-08-19T00:09:40.510044904Z" level=info msg="connecting to shim 2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa" address="unix:///run/containerd/s/e1ddfdce6c01bf9081d0da8fa64b99cb76904f9f44fde595e39ea5d74392e11f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:40.546117 systemd-networkd[1441]: cali6278dc27795: Link UP Aug 19 00:09:40.546804 systemd-networkd[1441]: cali6278dc27795: Gained carrier Aug 19 00:09:40.557060 systemd[1]: Started cri-containerd-2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa.scope - libcontainer container 2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa. 
Aug 19 00:09:40.573887 containerd[1530]: 2025-08-19 00:09:40.317 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0 calico-apiserver-7566bc84c- calico-apiserver 1a02df7e-9cb9-459a-aede-2386a78b244a 823 0 2025-08-19 00:09:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7566bc84c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7566bc84c-fvsmj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6278dc27795 [] [] }} ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-" Aug 19 00:09:40.573887 containerd[1530]: 2025-08-19 00:09:40.317 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.573887 containerd[1530]: 2025-08-19 00:09:40.375 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" HandleID="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Workload="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.375 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" HandleID="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Workload="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400041fb20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7566bc84c-fvsmj", "timestamp":"2025-08-19 00:09:40.375416741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.375 [INFO][4213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.412 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.486 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" host="localhost" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.502 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.508 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.512 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.515 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:40.574105 containerd[1530]: 2025-08-19 00:09:40.515 [INFO][4213] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" host="localhost" Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.519 [INFO][4213] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.530 [INFO][4213] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" host="localhost" Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.538 [INFO][4213] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" host="localhost" Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.538 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" host="localhost" Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.538 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:40.575708 containerd[1530]: 2025-08-19 00:09:40.538 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" HandleID="k8s-pod-network.cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Workload="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.575886 containerd[1530]: 2025-08-19 00:09:40.542 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0", GenerateName:"calico-apiserver-7566bc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a02df7e-9cb9-459a-aede-2386a78b244a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7566bc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7566bc84c-fvsmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6278dc27795", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:40.575963 containerd[1530]: 2025-08-19 00:09:40.543 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.575963 containerd[1530]: 2025-08-19 00:09:40.543 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6278dc27795 ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.575963 containerd[1530]: 2025-08-19 00:09:40.547 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.576028 containerd[1530]: 2025-08-19 00:09:40.548 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0", GenerateName:"calico-apiserver-7566bc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a02df7e-9cb9-459a-aede-2386a78b244a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7566bc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a", Pod:"calico-apiserver-7566bc84c-fvsmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6278dc27795", MAC:"6a:49:3e:e3:9d:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:40.576131 containerd[1530]: 2025-08-19 00:09:40.568 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" Namespace="calico-apiserver" Pod="calico-apiserver-7566bc84c-fvsmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7566bc84c--fvsmj-eth0" Aug 19 00:09:40.592976 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:40.616613 containerd[1530]: time="2025-08-19T00:09:40.616260748Z" level=info msg="connecting to shim cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a" address="unix:///run/containerd/s/0de6e8e516ef3e9c86a801265531f03cb58a8ecd0b14bfcb0cb22e3f7bd38adc" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:40.637264 containerd[1530]: time="2025-08-19T00:09:40.637222277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-jzq68,Uid:9e9944e5-5c31-47b8-bc20-b9bae23763b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa\"" Aug 19 00:09:40.643372 containerd[1530]: time="2025-08-19T00:09:40.643339268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:09:40.669044 systemd[1]: Started cri-containerd-cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a.scope - libcontainer container cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a. 
Aug 19 00:09:40.680930 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:40.706491 containerd[1530]: time="2025-08-19T00:09:40.706450216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7566bc84c-fvsmj,Uid:1a02df7e-9cb9-459a-aede-2386a78b244a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a\"" Aug 19 00:09:41.267492 containerd[1530]: time="2025-08-19T00:09:41.267437043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8hv5,Uid:881746fb-5e8f-498c-b589-664afd8f39e4,Namespace:kube-system,Attempt:0,}" Aug 19 00:09:41.399663 systemd-networkd[1441]: cali47012bb0e0a: Link UP Aug 19 00:09:41.400654 systemd-networkd[1441]: cali47012bb0e0a: Gained carrier Aug 19 00:09:41.416428 containerd[1530]: 2025-08-19 00:09:41.322 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0 coredns-7c65d6cfc9- kube-system 881746fb-5e8f-498c-b589-664afd8f39e4 815 0 2025-08-19 00:09:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-m8hv5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47012bb0e0a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-" Aug 19 00:09:41.416428 containerd[1530]: 2025-08-19 00:09:41.322 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.416428 containerd[1530]: 2025-08-19 00:09:41.347 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" HandleID="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Workload="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.347 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" HandleID="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Workload="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-m8hv5", "timestamp":"2025-08-19 00:09:41.347091409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.347 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.347 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.347 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.360 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" host="localhost" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.365 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.371 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.374 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.378 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:41.416904 containerd[1530]: 2025-08-19 00:09:41.380 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" host="localhost" Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.382 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946 Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.386 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" host="localhost" Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.394 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" host="localhost" Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.394 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" host="localhost" Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.394 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:41.417163 containerd[1530]: 2025-08-19 00:09:41.394 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" HandleID="k8s-pod-network.758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Workload="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.417291 containerd[1530]: 2025-08-19 00:09:41.397 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"881746fb-5e8f-498c-b589-664afd8f39e4", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-m8hv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47012bb0e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:41.417355 containerd[1530]: 2025-08-19 00:09:41.397 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.417355 containerd[1530]: 2025-08-19 00:09:41.397 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47012bb0e0a ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.417355 containerd[1530]: 2025-08-19 00:09:41.400 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.417965 
containerd[1530]: 2025-08-19 00:09:41.401 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"881746fb-5e8f-498c-b589-664afd8f39e4", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946", Pod:"coredns-7c65d6cfc9-m8hv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47012bb0e0a", MAC:"36:e6:79:6f:cb:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:41.417965 containerd[1530]: 2025-08-19 00:09:41.413 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m8hv5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m8hv5-eth0" Aug 19 00:09:41.454692 containerd[1530]: time="2025-08-19T00:09:41.454186455Z" level=info msg="connecting to shim 758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946" address="unix:///run/containerd/s/f5849f81de63cc1cc51926b54780e3df2b186618567e3d264b572ec5c1a48252" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:41.489033 systemd[1]: Started cri-containerd-758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946.scope - libcontainer container 758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946. 
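In the coredns endpoint dump above the container ports appear as hexadecimal field values (Port:0x35, Port:0x23c1), apparently Go's hexadecimal rendering of the unsigned port fields; they are the usual CoreDNS ports and match the port list logged with the endpoint earlier (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153). A quick conversion in Python:

    # Hex port values copied from the WorkloadEndpointPort fields above.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
    for name, port in ports.items():
        print(f"{name}: {port}")  # dns: 53, dns-tcp: 53, metrics: 9153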
Aug 19 00:09:41.504849 systemd-networkd[1441]: cali6ba6671b3e7: Gained IPv6LL Aug 19 00:09:41.506998 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:41.536945 containerd[1530]: time="2025-08-19T00:09:41.536738937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8hv5,Uid:881746fb-5e8f-498c-b589-664afd8f39e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946\"" Aug 19 00:09:41.553713 containerd[1530]: time="2025-08-19T00:09:41.553649553Z" level=info msg="CreateContainer within sandbox \"758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:09:41.802963 containerd[1530]: time="2025-08-19T00:09:41.802710676Z" level=info msg="Container d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:41.821566 containerd[1530]: time="2025-08-19T00:09:41.821495009Z" level=info msg="CreateContainer within sandbox \"758fdfa39a58979264a6d86d2cc825334e69bc6de3dd1d361d22a73148082946\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48\"" Aug 19 00:09:41.822136 containerd[1530]: time="2025-08-19T00:09:41.822103208Z" level=info msg="StartContainer for \"d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48\"" Aug 19 00:09:41.825375 containerd[1530]: time="2025-08-19T00:09:41.825326883Z" level=info msg="connecting to shim d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48" address="unix:///run/containerd/s/f5849f81de63cc1cc51926b54780e3df2b186618567e3d264b572ec5c1a48252" protocol=ttrpc version=3 Aug 19 00:09:41.859040 systemd[1]: Started cri-containerd-d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48.scope - libcontainer container d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48. 
Aug 19 00:09:41.886678 containerd[1530]: time="2025-08-19T00:09:41.886637916Z" level=info msg="StartContainer for \"d3b3e7151ef7a973e630ee9522e2e4c9bcbbf15200937353bf5fef8a843cdd48\" returns successfully" Aug 19 00:09:42.143299 systemd-networkd[1441]: cali6278dc27795: Gained IPv6LL Aug 19 00:09:42.491315 kubelet[2674]: I0819 00:09:42.490910 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m8hv5" podStartSLOduration=36.490890025 podStartE2EDuration="36.490890025s" podCreationTimestamp="2025-08-19 00:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:42.488286789 +0000 UTC m=+42.367675727" watchObservedRunningTime="2025-08-19 00:09:42.490890025 +0000 UTC m=+42.370278963" Aug 19 00:09:42.701914 containerd[1530]: time="2025-08-19T00:09:42.701860570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:42.703137 containerd[1530]: time="2025-08-19T00:09:42.702950568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 19 00:09:42.704069 containerd[1530]: time="2025-08-19T00:09:42.704035567Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:42.706847 containerd[1530]: time="2025-08-19T00:09:42.706792083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:42.708110 containerd[1530]: time="2025-08-19T00:09:42.707982681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.064468133s" Aug 19 00:09:42.708110 containerd[1530]: time="2025-08-19T00:09:42.708015001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:09:42.714491 containerd[1530]: time="2025-08-19T00:09:42.714452192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:09:42.720268 containerd[1530]: time="2025-08-19T00:09:42.720220104Z" level=info msg="CreateContainer within sandbox \"2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:09:42.727847 containerd[1530]: time="2025-08-19T00:09:42.727026974Z" level=info msg="Container fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:42.730552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003163658.mount: Deactivated successfully. 
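The first calico/apiserver pull above reports 44517149 bytes read over 2.064468133s. A rough effective rate implied by those two figures (rough because the reported duration covers the whole pull, unpacking included):

    bytes_read = 44_517_149     # "bytes read" reported for the apiserver pull above
    elapsed_s  = 2.064468133    # pull duration reported by containerd

    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # roughly 21.6 MB/s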
Aug 19 00:09:42.737031 containerd[1530]: time="2025-08-19T00:09:42.736983080Z" level=info msg="CreateContainer within sandbox \"2e9bdc2680fcc14eb03fbd867672b0059a7c3cda71719800d1513170d374e5aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192\"" Aug 19 00:09:42.742161 containerd[1530]: time="2025-08-19T00:09:42.741993353Z" level=info msg="StartContainer for \"fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192\"" Aug 19 00:09:42.743277 containerd[1530]: time="2025-08-19T00:09:42.743249312Z" level=info msg="connecting to shim fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192" address="unix:///run/containerd/s/e1ddfdce6c01bf9081d0da8fa64b99cb76904f9f44fde595e39ea5d74392e11f" protocol=ttrpc version=3 Aug 19 00:09:42.764007 systemd[1]: Started cri-containerd-fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192.scope - libcontainer container fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192. Aug 19 00:09:42.803994 containerd[1530]: time="2025-08-19T00:09:42.803943627Z" level=info msg="StartContainer for \"fd3969ba6bc2e6e0bb32f1faf0b7da1a4328313574985fbc38cff647b2678192\" returns successfully" Aug 19 00:09:42.987432 containerd[1530]: time="2025-08-19T00:09:42.986729530Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:42.987432 containerd[1530]: time="2025-08-19T00:09:42.987306290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 19 00:09:42.989134 containerd[1530]: time="2025-08-19T00:09:42.989090687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 274.602015ms" Aug 19 00:09:42.989200 containerd[1530]: time="2025-08-19T00:09:42.989145007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:09:42.991282 containerd[1530]: time="2025-08-19T00:09:42.990873685Z" level=info msg="CreateContainer within sandbox \"cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:09:42.999749 containerd[1530]: time="2025-08-19T00:09:42.999466873Z" level=info msg="Container 43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:43.007782 containerd[1530]: time="2025-08-19T00:09:43.007733101Z" level=info msg="CreateContainer within sandbox \"cb7fcfb80173fc3baafd02eb65a26363d37085107e07f5aab9c7655d5e90cf9a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841\"" Aug 19 00:09:43.008599 containerd[1530]: time="2025-08-19T00:09:43.008572620Z" level=info msg="StartContainer for \"43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841\"" Aug 19 00:09:43.010278 containerd[1530]: time="2025-08-19T00:09:43.010249738Z" level=info msg="connecting to shim 
43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841" address="unix:///run/containerd/s/0de6e8e516ef3e9c86a801265531f03cb58a8ecd0b14bfcb0cb22e3f7bd38adc" protocol=ttrpc version=3 Aug 19 00:09:43.037029 systemd[1]: Started cri-containerd-43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841.scope - libcontainer container 43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841. Aug 19 00:09:43.078883 containerd[1530]: time="2025-08-19T00:09:43.078798604Z" level=info msg="StartContainer for \"43de3edcae143710b2ca17fa75506237b1ca235d24168c0f73ecd1d10d645841\" returns successfully" Aug 19 00:09:43.269843 containerd[1530]: time="2025-08-19T00:09:43.269637462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qlkx4,Uid:ad4442bf-80b9-4ac4-a91e-7f33b5c10474,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:43.269843 containerd[1530]: time="2025-08-19T00:09:43.269685022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8dfc5c6-j5fk8,Uid:20fc4aab-a0f0-47e6-8649-b56f7b092a7c,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:43.270075 containerd[1530]: time="2025-08-19T00:09:43.269637462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q64vx,Uid:2968b672-0a30-46ea-b465-e350384939ca,Namespace:kube-system,Attempt:0,}" Aug 19 00:09:43.270075 containerd[1530]: time="2025-08-19T00:09:43.269997101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gm46,Uid:7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013,Namespace:calico-system,Attempt:0,}" Aug 19 00:09:43.423026 systemd-networkd[1441]: cali47012bb0e0a: Gained IPv6LL Aug 19 00:09:43.518582 kubelet[2674]: I0819 00:09:43.518510 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7566bc84c-jzq68" podStartSLOduration=26.449538794 podStartE2EDuration="28.51847788s" podCreationTimestamp="2025-08-19 00:09:15 +0000 UTC" firstStartedPulling="2025-08-19 00:09:40.643076589 +0000 UTC m=+40.522465447" lastFinishedPulling="2025-08-19 00:09:42.712015675 +0000 UTC m=+42.591404533" observedRunningTime="2025-08-19 00:09:43.515776964 +0000 UTC m=+43.395165822" watchObservedRunningTime="2025-08-19 00:09:43.51847788 +0000 UTC m=+43.397866738" Aug 19 00:09:43.520245 kubelet[2674]: I0819 00:09:43.520115 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7566bc84c-fvsmj" podStartSLOduration=26.238060846 podStartE2EDuration="28.520091878s" podCreationTimestamp="2025-08-19 00:09:15 +0000 UTC" firstStartedPulling="2025-08-19 00:09:40.707734854 +0000 UTC m=+40.587123712" lastFinishedPulling="2025-08-19 00:09:42.989765886 +0000 UTC m=+42.869154744" observedRunningTime="2025-08-19 00:09:43.499801586 +0000 UTC m=+43.379190444" watchObservedRunningTime="2025-08-19 00:09:43.520091878 +0000 UTC m=+43.399480736" Aug 19 00:09:43.537551 systemd-networkd[1441]: calif2b2dee2ac3: Link UP Aug 19 00:09:43.538233 systemd-networkd[1441]: calif2b2dee2ac3: Gained carrier Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.392 [INFO][4544] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0 goldmane-58fd7646b9- calico-system ad4442bf-80b9-4ac4-a91e-7f33b5c10474 821 0 2025-08-19 00:09:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-qlkx4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif2b2dee2ac3 [] [] }} ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.393 [INFO][4544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.454 [INFO][4609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" HandleID="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Workload="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.454 [INFO][4609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" HandleID="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Workload="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-qlkx4", "timestamp":"2025-08-19 00:09:43.454311008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.454 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.454 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.454 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.468 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.480 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.488 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.491 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.493 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.493 [INFO][4609] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.496 [INFO][4609] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24 Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.502 [INFO][4609] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.513 [INFO][4609] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.513 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" host="localhost" Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.514 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
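The same IPAM sequence (confirm affinity, load the block, claim one address, release the host-wide lock) repeats for each pod; the goldmane pod above receives 192.168.88.133. Pulling every assignment out of a journal dump like this one only needs a pattern over the "Successfully claimed IPs" lines. A minimal sketch, assuming the journal has been saved to a plain-text file named journal.txt (the filename is illustrative, not from the log):

    import re

    # Matches the Calico IPAM "Successfully claimed IPs: [a.b.c.d/len]" entries.
    claim = re.compile(r"Successfully claimed IPs: \[([0-9./]+)\]")

    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            m = claim.search(line)
            if m:
                print(m.group(1))  # e.g. 192.168.88.130/26, .131/26, .132/26, .133/26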
Aug 19 00:09:43.560465 containerd[1530]: 2025-08-19 00:09:43.517 [INFO][4609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" HandleID="k8s-pod-network.10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Workload="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.525 [INFO][4544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ad4442bf-80b9-4ac4-a91e-7f33b5c10474", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-qlkx4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2b2dee2ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.525 [INFO][4544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.525 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b2dee2ac3 ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.540 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.543 [INFO][4544] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ad4442bf-80b9-4ac4-a91e-7f33b5c10474", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24", Pod:"goldmane-58fd7646b9-qlkx4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2b2dee2ac3", MAC:"92:a5:99:fa:97:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:43.561008 containerd[1530]: 2025-08-19 00:09:43.552 [INFO][4544] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" Namespace="calico-system" Pod="goldmane-58fd7646b9-qlkx4" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qlkx4-eth0" Aug 19 00:09:43.591138 containerd[1530]: time="2025-08-19T00:09:43.591026341Z" level=info msg="connecting to shim 10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24" address="unix:///run/containerd/s/6998eb9e52a8af3d99f93b376d612961a4ebeffdf4820e0c7643c489e2c6dd6e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:43.624216 systemd-networkd[1441]: calif96f7402bcf: Link UP Aug 19 00:09:43.624438 systemd-networkd[1441]: calif96f7402bcf: Gained carrier Aug 19 00:09:43.633048 systemd[1]: Started cri-containerd-10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24.scope - libcontainer container 10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24. 
Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.395 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7gm46-eth0 csi-node-driver- calico-system 7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013 718 0 2025-08-19 00:09:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7gm46 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif96f7402bcf [] [] }} ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.396 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.460 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" HandleID="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Workload="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.460 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" HandleID="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Workload="localhost-k8s-csi--node--driver--7gm46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7gm46", "timestamp":"2025-08-19 00:09:43.46019696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.460 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.513 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.513 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.568 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.576 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.589 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.591 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.594 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.594 [INFO][4611] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.598 [INFO][4611] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.604 [INFO][4611] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4611] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" host="localhost" Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:43.641947 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" HandleID="k8s-pod-network.00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Workload="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.616 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gm46-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7gm46", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif96f7402bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.618 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.618 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif96f7402bcf ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.624 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.625 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gm46-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace", Pod:"csi-node-driver-7gm46", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif96f7402bcf", MAC:"e2:08:d0:35:27:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:43.642486 containerd[1530]: 2025-08-19 00:09:43.637 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" Namespace="calico-system" Pod="csi-node-driver-7gm46" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gm46-eth0" Aug 19 00:09:43.673982 containerd[1530]: time="2025-08-19T00:09:43.673889267Z" level=info msg="connecting to shim 00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace" address="unix:///run/containerd/s/76da12fba638adb754df3d48936b9001592189917a25325c7d0601795123aa8d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:43.690023 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:43.707012 systemd[1]: Started cri-containerd-00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace.scope - libcontainer container 00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace. 
Aug 19 00:09:43.744439 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:43.807423 containerd[1530]: time="2025-08-19T00:09:43.807310644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qlkx4,Uid:ad4442bf-80b9-4ac4-a91e-7f33b5c10474,Namespace:calico-system,Attempt:0,} returns sandbox id \"10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24\"" Aug 19 00:09:43.809557 containerd[1530]: time="2025-08-19T00:09:43.809525961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 00:09:43.892274 containerd[1530]: time="2025-08-19T00:09:43.892234487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gm46,Uid:7dfdfb8d-69a9-402c-a1bf-fd66f5c4d013,Namespace:calico-system,Attempt:0,} returns sandbox id \"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace\"" Aug 19 00:09:43.897580 systemd-networkd[1441]: cali6e1f1e9bc81: Link UP Aug 19 00:09:43.901282 systemd-networkd[1441]: cali6e1f1e9bc81: Gained carrier Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.399 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0 calico-kube-controllers-55b8dfc5c6- calico-system 20fc4aab-a0f0-47e6-8649-b56f7b092a7c 822 0 2025-08-19 00:09:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55b8dfc5c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55b8dfc5c6-j5fk8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6e1f1e9bc81 [] [] }} ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.399 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.459 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" HandleID="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Workload="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.459 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" HandleID="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Workload="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3190), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55b8dfc5c6-j5fk8", "timestamp":"2025-08-19 00:09:43.459164802 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.460 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.611 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.668 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.678 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.689 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.693 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.700 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.700 [INFO][4613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.702 [INFO][4613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953 Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.715 [INFO][4613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.885 [INFO][4613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.885 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" host="localhost" Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.886 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:09:44.010160 containerd[1530]: 2025-08-19 00:09:43.888 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" HandleID="k8s-pod-network.59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Workload="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:43.892 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0", GenerateName:"calico-kube-controllers-55b8dfc5c6-", Namespace:"calico-system", SelfLink:"", UID:"20fc4aab-a0f0-47e6-8649-b56f7b092a7c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8dfc5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55b8dfc5c6-j5fk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e1f1e9bc81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:43.893 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:43.893 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e1f1e9bc81 ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:43.902 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:43.903 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0", GenerateName:"calico-kube-controllers-55b8dfc5c6-", Namespace:"calico-system", SelfLink:"", UID:"20fc4aab-a0f0-47e6-8649-b56f7b092a7c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8dfc5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953", Pod:"calico-kube-controllers-55b8dfc5c6-j5fk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e1f1e9bc81", MAC:"d6:77:ff:95:72:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:44.010753 containerd[1530]: 2025-08-19 00:09:44.000 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" Namespace="calico-system" Pod="calico-kube-controllers-55b8dfc5c6-j5fk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8dfc5c6--j5fk8-eth0" Aug 19 00:09:44.046263 containerd[1530]: time="2025-08-19T00:09:44.046163797Z" level=info msg="connecting to shim 59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953" address="unix:///run/containerd/s/afb196ce35a11c159fd3f38c980d6ff4d88009288549e909e26180596125dc3d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:44.055880 systemd-networkd[1441]: cali54cf912cde1: Link UP Aug 19 00:09:44.056274 systemd-networkd[1441]: cali54cf912cde1: Gained carrier Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.380 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0 coredns-7c65d6cfc9- kube-system 2968b672-0a30-46ea-b465-e350384939ca 820 0 2025-08-19 00:09:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-q64vx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54cf912cde1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.380 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.461 [INFO][4602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" HandleID="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Workload="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.461 [INFO][4602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" HandleID="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Workload="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-q64vx", "timestamp":"2025-08-19 00:09:43.461269999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.461 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.887 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.887 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:43.998 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.011 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.022 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.025 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.030 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.030 [INFO][4602] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.032 [INFO][4602] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3 Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.037 [INFO][4602] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.046 [INFO][4602] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.046 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" host="localhost" Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.046 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
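
Across the four CNI ADDs in this window the same affine block hands out consecutive addresses: goldmane-58fd7646b9-qlkx4 gets 192.168.88.133, csi-node-driver-7gm46 gets .134, calico-kube-controllers-55b8dfc5c6-j5fk8 gets .135, and coredns-7c65d6cfc9-q64vx gets .136, with each claim serialized behind the host-wide IPAM lock visible in every trace. A small Go sketch that recomputes each address's offset within the 192.168.88.128/26 block, using only the addresses copied from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	base := netip.MustParseAddr("192.168.88.128").As4() // start of the affine /26 block
	claims := map[string]string{
		"goldmane-58fd7646b9-qlkx4":                "192.168.88.133",
		"csi-node-driver-7gm46":                    "192.168.88.134",
		"calico-kube-controllers-55b8dfc5c6-j5fk8": "192.168.88.135",
		"coredns-7c65d6cfc9-q64vx":                 "192.168.88.136",
	}
	for pod, ip := range claims {
		addr := netip.MustParseAddr(ip).As4()
		offset := int(addr[3]) - int(base[3]) // all four share the first three octets
		fmt.Printf("%-42s %s (offset %d in the block)\n", pod, ip, offset)
	}
}
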
Aug 19 00:09:44.076920 containerd[1530]: 2025-08-19 00:09:44.046 [INFO][4602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" HandleID="k8s-pod-network.e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Workload="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.077717 containerd[1530]: 2025-08-19 00:09:44.049 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2968b672-0a30-46ea-b465-e350384939ca", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-q64vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54cf912cde1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:44.077717 containerd[1530]: 2025-08-19 00:09:44.049 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.077717 containerd[1530]: 2025-08-19 00:09:44.049 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54cf912cde1 ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.077717 containerd[1530]: 2025-08-19 00:09:44.056 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.077717 
containerd[1530]: 2025-08-19 00:09:44.057 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2968b672-0a30-46ea-b465-e350384939ca", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3", Pod:"coredns-7c65d6cfc9-q64vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54cf912cde1", MAC:"92:af:3c:cc:2a:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:09:44.077717 containerd[1530]: 2025-08-19 00:09:44.068 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q64vx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q64vx-eth0" Aug 19 00:09:44.093025 systemd[1]: Started cri-containerd-59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953.scope - libcontainer container 59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953. Aug 19 00:09:44.105450 containerd[1530]: time="2025-08-19T00:09:44.105392478Z" level=info msg="connecting to shim e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3" address="unix:///run/containerd/s/8527e4adcdab714e639cf056e336fcff43cf98b4127900a9a8f492838711b8a3" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:09:44.109981 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:44.138020 systemd[1]: Started cri-containerd-e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3.scope - libcontainer container e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3. 
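
The coredns WorkloadEndpoint is the only one in this batch with named ports, and the Go struct dump prints them in hex: Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics. Decoded, those are ports 53 and 9153, the DNS and metrics ports CoreDNS normally exposes; a small Go conversion of the hex values exactly as logged:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Hex port values as printed in the endpoint dump above.
	for _, h := range []string{"0x35", "0x23c1"} {
		v, err := strconv.ParseUint(h, 0, 16) // base 0 honors the 0x prefix
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d\n", h, v) // 0x35 = 53, 0x23c1 = 9153
	}
}
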
Aug 19 00:09:44.146872 containerd[1530]: time="2025-08-19T00:09:44.146802582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8dfc5c6-j5fk8,Uid:20fc4aab-a0f0-47e6-8649-b56f7b092a7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953\"" Aug 19 00:09:44.155239 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:09:44.177568 containerd[1530]: time="2025-08-19T00:09:44.177527341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q64vx,Uid:2968b672-0a30-46ea-b465-e350384939ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3\"" Aug 19 00:09:44.181239 containerd[1530]: time="2025-08-19T00:09:44.181189936Z" level=info msg="CreateContainer within sandbox \"e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:09:44.195958 containerd[1530]: time="2025-08-19T00:09:44.195907476Z" level=info msg="Container bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:44.201578 containerd[1530]: time="2025-08-19T00:09:44.201536709Z" level=info msg="CreateContainer within sandbox \"e22cfaed3d752c7dec4f007b3d6c0f0441d1d1f51cd3a2eee7a890305219b8e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e\"" Aug 19 00:09:44.202419 containerd[1530]: time="2025-08-19T00:09:44.202382427Z" level=info msg="StartContainer for \"bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e\"" Aug 19 00:09:44.203409 containerd[1530]: time="2025-08-19T00:09:44.203374026Z" level=info msg="connecting to shim bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e" address="unix:///run/containerd/s/8527e4adcdab714e639cf056e336fcff43cf98b4127900a9a8f492838711b8a3" protocol=ttrpc version=3 Aug 19 00:09:44.222044 systemd[1]: Started cri-containerd-bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e.scope - libcontainer container bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e. 
Aug 19 00:09:44.251168 containerd[1530]: time="2025-08-19T00:09:44.251122562Z" level=info msg="StartContainer for \"bb3f22fcd79be973fa755385a0cc930c60e50a1352181283fc140d3c67ee945e\" returns successfully" Aug 19 00:09:44.482269 kubelet[2674]: I0819 00:09:44.482162 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:44.482434 kubelet[2674]: I0819 00:09:44.482415 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:44.492990 kubelet[2674]: I0819 00:09:44.492912 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q64vx" podStartSLOduration=38.492890917 podStartE2EDuration="38.492890917s" podCreationTimestamp="2025-08-19 00:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:09:44.491840478 +0000 UTC m=+44.371229336" watchObservedRunningTime="2025-08-19 00:09:44.492890917 +0000 UTC m=+44.372279735" Aug 19 00:09:45.023035 systemd-networkd[1441]: calif96f7402bcf: Gained IPv6LL Aug 19 00:09:45.150988 systemd-networkd[1441]: cali54cf912cde1: Gained IPv6LL Aug 19 00:09:45.214982 systemd-networkd[1441]: calif2b2dee2ac3: Gained IPv6LL Aug 19 00:09:45.277033 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516). Aug 19 00:09:45.349201 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:09:45.350760 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:09:45.355895 systemd-logind[1514]: New session 9 of user core. Aug 19 00:09:45.365006 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 00:09:45.581395 kubelet[2674]: I0819 00:09:45.581221 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:45.599393 sshd[4907]: Connection closed by 10.0.0.1 port 49516 Aug 19 00:09:45.599732 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Aug 19 00:09:45.604688 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:49516.service: Deactivated successfully. Aug 19 00:09:45.606684 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 00:09:45.610653 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Aug 19 00:09:45.611677 systemd-logind[1514]: Removed session 9. Aug 19 00:09:45.663116 systemd-networkd[1441]: cali6e1f1e9bc81: Gained IPv6LL Aug 19 00:09:45.731505 containerd[1530]: time="2025-08-19T00:09:45.731450070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\" id:\"08fbff48bed1375ea6acedb263b552bb6e4474106c40639a613a2e479a7c69c2\" pid:4934 exited_at:{seconds:1755562185 nanos:731137990}" Aug 19 00:09:45.877788 containerd[1530]: time="2025-08-19T00:09:45.877625557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\" id:\"8cf5517a81812f62cdc7f7e4589fcf949089e4ad399708527ad5b554e712eaa5\" pid:4960 exited_at:{seconds:1755562185 nanos:859117061}" Aug 19 00:09:46.274553 kubelet[2674]: I0819 00:09:46.274521 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:46.715696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579009136.mount: Deactivated successfully. 
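
The kubelet startup-latency line for coredns-7c65d6cfc9-q64vx reports identical podStartSLOduration and podStartE2EDuration (38.492890917s) and zero-valued pull timestamps, consistent with no image pull having been needed; the 38.492890917s figure equals watchObservedRunningTime minus podCreationTimestamp. Reproducing that subtraction with the timestamps exactly as logged (stdlib Go, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// kubelet prints times in Go's default time.Time String() format.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-08-19 00:09:06 +0000 UTC") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-08-19 00:09:44.492890917 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 38.492890917s, matching podStartE2EDuration
}
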
Aug 19 00:09:47.332527 containerd[1530]: time="2025-08-19T00:09:47.332441597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:47.333070 containerd[1530]: time="2025-08-19T00:09:47.333034316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 19 00:09:47.333891 containerd[1530]: time="2025-08-19T00:09:47.333848915Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:47.338678 containerd[1530]: time="2025-08-19T00:09:47.338603309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:47.339586 containerd[1530]: time="2025-08-19T00:09:47.339359908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.529685907s" Aug 19 00:09:47.339586 containerd[1530]: time="2025-08-19T00:09:47.339405468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 19 00:09:47.340542 containerd[1530]: time="2025-08-19T00:09:47.340507387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 00:09:47.343569 containerd[1530]: time="2025-08-19T00:09:47.343529343Z" level=info msg="CreateContainer within sandbox \"10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 00:09:47.355173 containerd[1530]: time="2025-08-19T00:09:47.355109728Z" level=info msg="Container 6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:47.406673 containerd[1530]: time="2025-08-19T00:09:47.406557303Z" level=info msg="CreateContainer within sandbox \"10751ad484516fcd690ee359f73f20b4696bacb897994b50b6d42e043ef75d24\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\"" Aug 19 00:09:47.407660 containerd[1530]: time="2025-08-19T00:09:47.407625501Z" level=info msg="StartContainer for \"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\"" Aug 19 00:09:47.409886 containerd[1530]: time="2025-08-19T00:09:47.409851378Z" level=info msg="connecting to shim 6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758" address="unix:///run/containerd/s/6998eb9e52a8af3d99f93b376d612961a4ebeffdf4820e0c7643c489e2c6dd6e" protocol=ttrpc version=3 Aug 19 00:09:47.438056 systemd[1]: Started cri-containerd-6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758.scope - libcontainer container 6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758. 
Aug 19 00:09:47.476954 containerd[1530]: time="2025-08-19T00:09:47.476915173Z" level=info msg="StartContainer for \"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\" returns successfully" Aug 19 00:09:47.517285 kubelet[2674]: I0819 00:09:47.517212 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qlkx4" podStartSLOduration=25.985988616 podStartE2EDuration="29.517191122s" podCreationTimestamp="2025-08-19 00:09:18 +0000 UTC" firstStartedPulling="2025-08-19 00:09:43.809157081 +0000 UTC m=+43.688545939" lastFinishedPulling="2025-08-19 00:09:47.340359587 +0000 UTC m=+47.219748445" observedRunningTime="2025-08-19 00:09:47.516652722 +0000 UTC m=+47.396041620" watchObservedRunningTime="2025-08-19 00:09:47.517191122 +0000 UTC m=+47.396580020" Aug 19 00:09:48.498226 kubelet[2674]: I0819 00:09:48.498180 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:09:48.543246 containerd[1530]: time="2025-08-19T00:09:48.543183707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:48.544156 containerd[1530]: time="2025-08-19T00:09:48.543933906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 19 00:09:48.545582 containerd[1530]: time="2025-08-19T00:09:48.545547304Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:48.549173 containerd[1530]: time="2025-08-19T00:09:48.549135900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.208589193s" Aug 19 00:09:48.549173 containerd[1530]: time="2025-08-19T00:09:48.549172300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 19 00:09:48.549667 containerd[1530]: time="2025-08-19T00:09:48.549512739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:09:48.550828 containerd[1530]: time="2025-08-19T00:09:48.550788618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 00:09:48.556045 containerd[1530]: time="2025-08-19T00:09:48.556002091Z" level=info msg="CreateContainer within sandbox \"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 00:09:48.565315 containerd[1530]: time="2025-08-19T00:09:48.565256880Z" level=info msg="Container cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:09:48.575833 containerd[1530]: time="2025-08-19T00:09:48.575308227Z" level=info msg="CreateContainer within sandbox \"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b\"" 
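
For goldmane-58fd7646b9-qlkx4 the kubelet line above shows podStartE2EDuration=29.517191122s but a shorter podStartSLOduration=25.985988616s, and the difference is exactly the image-pull window (lastFinishedPulling minus firstStartedPulling, about 3.531s), so the logged numbers are consistent with the SLO figure excluding time spent pulling ghcr.io/flatcar/calico/goldmane:v3.30.2. Reproducing that arithmetic from the logged timestamps (stdlib Go, illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-08-19 00:09:18 +0000 UTC")             // podCreationTimestamp
	running := parse("2025-08-19 00:09:47.517191122 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2025-08-19 00:09:43.809157081 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2025-08-19 00:09:47.340359587 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)    // 29.517191122s
	pull := pullEnd.Sub(pullStart) // 3.531202506s
	fmt.Println(e2e, pull, e2e-pull) // the last value is 25.985988616s, matching podStartSLOduration
}
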
Aug 19 00:09:48.576853 containerd[1530]: time="2025-08-19T00:09:48.576369226Z" level=info msg="StartContainer for \"cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b\""
Aug 19 00:09:48.578729 containerd[1530]: time="2025-08-19T00:09:48.578679223Z" level=info msg="connecting to shim cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b" address="unix:///run/containerd/s/76da12fba638adb754df3d48936b9001592189917a25325c7d0601795123aa8d" protocol=ttrpc version=3
Aug 19 00:09:48.613029 systemd[1]: Started cri-containerd-cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b.scope - libcontainer container cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b.
Aug 19 00:09:48.662883 containerd[1530]: time="2025-08-19T00:09:48.662841158Z" level=info msg="StartContainer for \"cbe4cb3e665d14a410e3cc1069607376133a5a3703e6668a7a84ca69685f0a9b\" returns successfully"
Aug 19 00:09:50.404867 containerd[1530]: time="2025-08-19T00:09:50.404806133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:50.405494 containerd[1530]: time="2025-08-19T00:09:50.405199212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Aug 19 00:09:50.406335 containerd[1530]: time="2025-08-19T00:09:50.406281451Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:50.409334 containerd[1530]: time="2025-08-19T00:09:50.408634448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:50.409334 containerd[1530]: time="2025-08-19T00:09:50.409203167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.858375029s"
Aug 19 00:09:50.409334 containerd[1530]: time="2025-08-19T00:09:50.409235967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Aug 19 00:09:50.410528 containerd[1530]: time="2025-08-19T00:09:50.410503286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 19 00:09:50.429184 containerd[1530]: time="2025-08-19T00:09:50.429141543Z" level=info msg="CreateContainer within sandbox \"59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 19 00:09:50.440857 containerd[1530]: time="2025-08-19T00:09:50.439908730Z" level=info msg="Container e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:09:50.448490 containerd[1530]: time="2025-08-19T00:09:50.448436280Z" level=info msg="CreateContainer within sandbox \"59b5699c99dbec108af564a8717824de5ec56b1d49886661f2c16f7a05bd2953\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\""
Aug 19 00:09:50.449213 containerd[1530]: time="2025-08-19T00:09:50.449185799Z" level=info msg="StartContainer for \"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\""
Aug 19 00:09:50.450513 containerd[1530]: time="2025-08-19T00:09:50.450473077Z" level=info msg="connecting to shim e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f" address="unix:///run/containerd/s/afb196ce35a11c159fd3f38c980d6ff4d88009288549e909e26180596125dc3d" protocol=ttrpc version=3
Aug 19 00:09:50.475047 systemd[1]: Started cri-containerd-e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f.scope - libcontainer container e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f.
Aug 19 00:09:50.525071 containerd[1530]: time="2025-08-19T00:09:50.523411269Z" level=info msg="StartContainer for \"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\" returns successfully"
Aug 19 00:09:50.620311 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:49530.service - OpenSSH per-connection server daemon (10.0.0.1:49530).
Aug 19 00:09:50.703246 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 49530 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:09:50.705284 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:09:50.710975 systemd-logind[1514]: New session 10 of user core.
Aug 19 00:09:50.721005 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 19 00:09:50.922533 sshd[5116]: Connection closed by 10.0.0.1 port 49530
Aug 19 00:09:50.924069 sshd-session[5110]: pam_unix(sshd:session): session closed for user core
Aug 19 00:09:50.936463 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:49530.service: Deactivated successfully.
Aug 19 00:09:50.938665 systemd[1]: session-10.scope: Deactivated successfully.
Aug 19 00:09:50.941876 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Aug 19 00:09:50.943051 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:49544.service - OpenSSH per-connection server daemon (10.0.0.1:49544).
Aug 19 00:09:50.945709 systemd-logind[1514]: Removed session 10.
Aug 19 00:09:51.002489 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 49544 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:09:51.007571 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:09:51.018905 systemd-logind[1514]: New session 11 of user core.
Aug 19 00:09:51.025066 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 19 00:09:51.271572 sshd[5133]: Connection closed by 10.0.0.1 port 49544
Aug 19 00:09:51.273729 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Aug 19 00:09:51.282845 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:49544.service: Deactivated successfully.
Aug 19 00:09:51.287703 systemd[1]: session-11.scope: Deactivated successfully.
Aug 19 00:09:51.292707 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Aug 19 00:09:51.297310 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:49556.service - OpenSSH per-connection server daemon (10.0.0.1:49556).
Aug 19 00:09:51.303570 systemd-logind[1514]: Removed session 11.
Aug 19 00:09:51.364940 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 49556 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:09:51.366934 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:09:51.373427 systemd-logind[1514]: New session 12 of user core.
Aug 19 00:09:51.383043 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 19 00:09:51.533592 kubelet[2674]: I0819 00:09:51.533445 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55b8dfc5c6-j5fk8" podStartSLOduration=26.272583224 podStartE2EDuration="32.533426772s" podCreationTimestamp="2025-08-19 00:09:19 +0000 UTC" firstStartedPulling="2025-08-19 00:09:44.149516418 +0000 UTC m=+44.028905276" lastFinishedPulling="2025-08-19 00:09:50.410359966 +0000 UTC m=+50.289748824" observedRunningTime="2025-08-19 00:09:51.533102053 +0000 UTC m=+51.412490911" watchObservedRunningTime="2025-08-19 00:09:51.533426772 +0000 UTC m=+51.412815630"
Aug 19 00:09:51.576768 sshd[5147]: Connection closed by 10.0.0.1 port 49556
Aug 19 00:09:51.577549 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Aug 19 00:09:51.582358 containerd[1530]: time="2025-08-19T00:09:51.582155314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\" id:\"4b4b7973af637065a86fbbf844f4599d59fcf3781c4e07faa1be7d4a0b992f7c\" pid:5170 exited_at:{seconds:1755562191 nanos:581234835}"
Aug 19 00:09:51.582056 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:49556.service: Deactivated successfully.
Aug 19 00:09:51.586115 systemd[1]: session-12.scope: Deactivated successfully.
Aug 19 00:09:51.589424 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Aug 19 00:09:51.591895 systemd-logind[1514]: Removed session 12.
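In the kubelet pod_startup_latency_tracker record above, podStartSLOduration is the end-to-end startup time with the image-pull window removed: 32.533426772s minus (lastFinishedPulling − firstStartedPulling ≈ 6.260843548s) gives 26.272583224s. A short Go check of that arithmetic, using values copied from the record (the subtraction rule is inferred from these numbers, not quoted from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the kubelet record above.
	e2e := 32533426772 * time.Nanosecond // podStartE2EDuration
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-08-19T00:09:44.149516418Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-08-19T00:09:50.410359966Z")

	// SLO duration = end-to-end startup minus time spent pulling images.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // prints 26.272583224s, matching podStartSLOduration
}
```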
Aug 19 00:09:52.122953 containerd[1530]: time="2025-08-19T00:09:52.122887029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:52.125695 containerd[1530]: time="2025-08-19T00:09:52.125618306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 19 00:09:52.127310 containerd[1530]: time="2025-08-19T00:09:52.127271664Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:52.129628 containerd[1530]: time="2025-08-19T00:09:52.129577982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:09:52.130776 containerd[1530]: time="2025-08-19T00:09:52.130721660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.720105054s"
Aug 19 00:09:52.130834 containerd[1530]: time="2025-08-19T00:09:52.130775180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 19 00:09:52.132937 containerd[1530]: time="2025-08-19T00:09:52.132562218Z" level=info msg="CreateContainer within sandbox \"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 19 00:09:52.143191 containerd[1530]: time="2025-08-19T00:09:52.143131366Z" level=info msg="Container 5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:09:52.154219 kubelet[2674]: I0819 00:09:52.154170 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 19 00:09:52.160997 containerd[1530]: time="2025-08-19T00:09:52.160938905Z" level=info msg="CreateContainer within sandbox \"00299413e5aa5192c956e32b0160aa19e5c3d5dbc305bc2eaa2f769f28e4bace\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3\""
Aug 19 00:09:52.163005 containerd[1530]: time="2025-08-19T00:09:52.162963542Z" level=info msg="StartContainer for \"5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3\""
Aug 19 00:09:52.164357 containerd[1530]: time="2025-08-19T00:09:52.164330621Z" level=info msg="connecting to shim 5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3" address="unix:///run/containerd/s/76da12fba638adb754df3d48936b9001592189917a25325c7d0601795123aa8d" protocol=ttrpc version=3
Aug 19 00:09:52.187427 systemd[1]: Started cri-containerd-5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3.scope - libcontainer container 5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3.
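The "ImageCreate event" records above are notifications from containerd's event bus for the node-driver-registrar image and its content-addressed references. For illustration only, and assuming the containerd v1 Go client and a `topic=="/images/create"` filter string (both are assumptions, not taken from the log), a standalone subscriber for the same events could look roughly like this:

```go
// Rough sketch: subscribe to containerd image-create events in the k8s.io
// namespace. The filter string is an assumption based on containerd's usual
// topic naming; adjust it if the daemon exposes a different topic.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ch, errs := client.EventService().Subscribe(ctx, `topic=="/images/create"`)

	for {
		select {
		case env := <-ch:
			// Each envelope carries the payload as a typed Any; topic and
			// namespace are enough to mirror the log lines above.
			fmt.Println(env.Namespace, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```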
Aug 19 00:09:52.253671 containerd[1530]: time="2025-08-19T00:09:52.253605595Z" level=info msg="StartContainer for \"5397399b9e015a3db3e353f5ee47f42dc58814e98da74f8843ad3d7d2bad5dd3\" returns successfully"
Aug 19 00:09:52.296203 containerd[1530]: time="2025-08-19T00:09:52.296095185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\" id:\"3ba4fcf1dd7ee727cdf8eb85b25e9200af25aaea7b4b3e98e32282713f182342\" pid:5218 exit_status:1 exited_at:{seconds:1755562192 nanos:288373634}"
Aug 19 00:09:52.375247 kubelet[2674]: I0819 00:09:52.374718 2674 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 19 00:09:52.375247 kubelet[2674]: I0819 00:09:52.374778 2674 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 19 00:09:52.393373 containerd[1530]: time="2025-08-19T00:09:52.393333910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\" id:\"71e1795976cab831919cf5e3c8dc92da4cd2c83fcc65884279508f945492e8bb\" pid:5264 exit_status:1 exited_at:{seconds:1755562192 nanos:392416112}"
Aug 19 00:09:56.589543 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716).
Aug 19 00:09:56.630076 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:09:56.631593 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:09:56.637162 systemd-logind[1514]: New session 13 of user core.
Aug 19 00:09:56.643971 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 19 00:09:56.824976 sshd[5288]: Connection closed by 10.0.0.1 port 41716
Aug 19 00:09:56.824801 sshd-session[5285]: pam_unix(sshd:session): session closed for user core
Aug 19 00:09:56.829186 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:41716.service: Deactivated successfully.
Aug 19 00:09:56.830968 systemd[1]: session-13.scope: Deactivated successfully.
Aug 19 00:09:56.836039 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Aug 19 00:09:56.837212 systemd-logind[1514]: Removed session 13.
Aug 19 00:10:01.836760 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:41730.service - OpenSSH per-connection server daemon (10.0.0.1:41730).
Aug 19 00:10:01.902400 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 41730 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:01.903942 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:01.908524 systemd-logind[1514]: New session 14 of user core.
Aug 19 00:10:01.921027 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 19 00:10:02.083086 sshd[5307]: Connection closed by 10.0.0.1 port 41730
Aug 19 00:10:02.083426 sshd-session[5304]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:02.091110 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:41730.service: Deactivated successfully.
Aug 19 00:10:02.093092 systemd[1]: session-14.scope: Deactivated successfully.
Aug 19 00:10:02.093894 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
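The two kubelet csi_plugin.go lines above are the plugin-registration handshake for csi.tigera.io: the node-driver-registrar container started just before serves a small Registration gRPC service on a socket kubelet watches, kubelet calls GetInfo, validates the advertised CSI endpoint and version, then reports back with NotifyRegistrationStatus. A rough sketch of the registrar side follows, assuming the k8s.io/kubelet pluginregistration/v1 API; the driver name, endpoint, and version come from the log, while the registration socket path is a conventional guess.

```go
// Sketch of a node-driver-registrar-style Registration server. Assumes the
// k8s.io/kubelet pluginregistration/v1 generated API; error handling is minimal.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

// Kubelet's plugin watcher calls GetInfo to learn the plugin type, name,
// CSI endpoint, and supported registration versions.
func (r *registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// After validating the CSI endpoint, kubelet reports the outcome here.
func (r *registrar) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("registration failed: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Kubelet watches this directory for new registration sockets (path assumed).
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, &registrar{})
	log.Fatal(srv.Serve(lis))
}
```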
Aug 19 00:10:02.095176 systemd-logind[1514]: Removed session 14.
Aug 19 00:10:05.249319 containerd[1530]: time="2025-08-19T00:10:05.249262330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\" id:\"4a59f56d0fed7c5b33afbc55d8b8bc3c32cb17ba50e87f7e295d4f2cee236f62\" pid:5335 exited_at:{seconds:1755562205 nanos:248498999}"
Aug 19 00:10:07.099309 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846).
Aug 19 00:10:07.173259 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:07.178031 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:07.182819 systemd-logind[1514]: New session 15 of user core.
Aug 19 00:10:07.193053 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 19 00:10:07.459314 sshd[5353]: Connection closed by 10.0.0.1 port 50846
Aug 19 00:10:07.459609 sshd-session[5350]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:07.463139 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:50846.service: Deactivated successfully.
Aug 19 00:10:07.464850 systemd[1]: session-15.scope: Deactivated successfully.
Aug 19 00:10:07.465570 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
Aug 19 00:10:07.467172 systemd-logind[1514]: Removed session 15.
Aug 19 00:10:08.205036 kubelet[2674]: I0819 00:10:08.204544 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 19 00:10:08.251854 kubelet[2674]: I0819 00:10:08.251490 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7gm46" podStartSLOduration=41.014589915 podStartE2EDuration="49.25147013s" podCreationTimestamp="2025-08-19 00:09:19 +0000 UTC" firstStartedPulling="2025-08-19 00:09:43.894491204 +0000 UTC m=+43.773880062" lastFinishedPulling="2025-08-19 00:09:52.131371419 +0000 UTC m=+52.010760277" observedRunningTime="2025-08-19 00:09:52.549846846 +0000 UTC m=+52.429235744" watchObservedRunningTime="2025-08-19 00:10:08.25147013 +0000 UTC m=+68.130858988"
Aug 19 00:10:12.480682 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:47314.service - OpenSSH per-connection server daemon (10.0.0.1:47314).
Aug 19 00:10:12.558884 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 47314 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:12.560305 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:12.564627 systemd-logind[1514]: New session 16 of user core.
Aug 19 00:10:12.571986 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 19 00:10:12.766042 sshd[5374]: Connection closed by 10.0.0.1 port 47314
Aug 19 00:10:12.765958 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:12.775354 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:47314.service: Deactivated successfully.
Aug 19 00:10:12.777451 systemd[1]: session-16.scope: Deactivated successfully.
Aug 19 00:10:12.779837 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
Aug 19 00:10:12.781754 systemd-logind[1514]: Removed session 16.
Aug 19 00:10:12.785648 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:47316.service - OpenSSH per-connection server daemon (10.0.0.1:47316).
Aug 19 00:10:12.837263 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 47316 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:12.839035 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:12.846564 systemd-logind[1514]: New session 17 of user core.
Aug 19 00:10:12.855007 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 19 00:10:13.075127 sshd[5390]: Connection closed by 10.0.0.1 port 47316
Aug 19 00:10:13.075189 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:13.091867 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:47316.service: Deactivated successfully.
Aug 19 00:10:13.093867 systemd[1]: session-17.scope: Deactivated successfully.
Aug 19 00:10:13.094614 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
Aug 19 00:10:13.097484 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:47332.service - OpenSSH per-connection server daemon (10.0.0.1:47332).
Aug 19 00:10:13.098731 systemd-logind[1514]: Removed session 17.
Aug 19 00:10:13.153712 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:13.154944 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:13.159199 systemd-logind[1514]: New session 18 of user core.
Aug 19 00:10:13.170970 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 19 00:10:13.771339 containerd[1530]: time="2025-08-19T00:10:13.771289208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\" id:\"3e2358499780196d81be269f004ea64d8dd0fc0a6c2f725d199ccdaba3175636\" pid:5429 exited_at:{seconds:1755562213 nanos:770983924}"
Aug 19 00:10:14.939568 sshd[5404]: Connection closed by 10.0.0.1 port 47332
Aug 19 00:10:14.941048 sshd-session[5401]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:14.951388 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:47332.service: Deactivated successfully.
Aug 19 00:10:14.954662 systemd[1]: session-18.scope: Deactivated successfully.
Aug 19 00:10:14.954994 systemd[1]: session-18.scope: Consumed 558ms CPU time, 70M memory peak.
Aug 19 00:10:14.960173 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
Aug 19 00:10:14.962366 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:47346.service - OpenSSH per-connection server daemon (10.0.0.1:47346).
Aug 19 00:10:14.965150 systemd-logind[1514]: Removed session 18.
Aug 19 00:10:15.032898 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 47346 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:15.034479 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:15.038954 systemd-logind[1514]: New session 19 of user core.
Aug 19 00:10:15.049991 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 19 00:10:15.370470 sshd[5455]: Connection closed by 10.0.0.1 port 47346
Aug 19 00:10:15.371044 sshd-session[5451]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:15.386369 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:47346.service: Deactivated successfully.
Aug 19 00:10:15.389618 systemd[1]: session-19.scope: Deactivated successfully.
Aug 19 00:10:15.392000 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
Aug 19 00:10:15.393178 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:47352.service - OpenSSH per-connection server daemon (10.0.0.1:47352).
Aug 19 00:10:15.395898 systemd-logind[1514]: Removed session 19.
Aug 19 00:10:15.451884 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:15.454729 sshd-session[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:15.462415 systemd-logind[1514]: New session 20 of user core.
Aug 19 00:10:15.472056 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 19 00:10:15.660879 sshd[5476]: Connection closed by 10.0.0.1 port 47352
Aug 19 00:10:15.661036 sshd-session[5473]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:15.666556 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
Aug 19 00:10:15.666864 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:47352.service: Deactivated successfully.
Aug 19 00:10:15.668605 systemd[1]: session-20.scope: Deactivated successfully.
Aug 19 00:10:15.672339 systemd-logind[1514]: Removed session 20.
Aug 19 00:10:15.701335 containerd[1530]: time="2025-08-19T00:10:15.701292444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc90163f378c1f53746ff3c25b50c86b0b5d3e1189e7f95d306a82c63d4e558e\" id:\"33767d4beffe1483eb58afa21df363bafa0aa042307163ab0596a6c0429e9df5\" pid:5497 exited_at:{seconds:1755562215 nanos:700680077}"
Aug 19 00:10:20.673901 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:47364.service - OpenSSH per-connection server daemon (10.0.0.1:47364).
Aug 19 00:10:20.730147 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 47364 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:20.731626 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:20.735779 systemd-logind[1514]: New session 21 of user core.
Aug 19 00:10:20.746082 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 19 00:10:20.877089 sshd[5518]: Connection closed by 10.0.0.1 port 47364
Aug 19 00:10:20.877434 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:20.881437 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:47364.service: Deactivated successfully.
Aug 19 00:10:20.883660 systemd[1]: session-21.scope: Deactivated successfully.
Aug 19 00:10:20.884483 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
Aug 19 00:10:20.887556 systemd-logind[1514]: Removed session 21.
Aug 19 00:10:22.248927 containerd[1530]: time="2025-08-19T00:10:22.248881908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eb8d2d61f569097d75dd11f21cfc237f1dc2c8379a32291bc6ed53927507758\" id:\"29990df16278f5669300d9564bb236c1f8418039179a8c87ce0e01f0546f87d9\" pid:5546 exited_at:{seconds:1755562222 nanos:248477744}"
Aug 19 00:10:25.895856 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:33544.service - OpenSSH per-connection server daemon (10.0.0.1:33544).
Aug 19 00:10:25.958339 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 33544 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:25.959828 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:25.966059 systemd-logind[1514]: New session 22 of user core.
Aug 19 00:10:25.972049 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 19 00:10:26.096693 sshd[5567]: Connection closed by 10.0.0.1 port 33544
Aug 19 00:10:26.097397 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:26.101842 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit.
Aug 19 00:10:26.102638 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:33544.service: Deactivated successfully.
Aug 19 00:10:26.105541 systemd[1]: session-22.scope: Deactivated successfully.
Aug 19 00:10:26.109047 systemd-logind[1514]: Removed session 22.
Aug 19 00:10:31.109887 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:33594.service - OpenSSH per-connection server daemon (10.0.0.1:33594).
Aug 19 00:10:31.165379 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 33594 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:31.166756 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:31.171318 systemd-logind[1514]: New session 23 of user core.
Aug 19 00:10:31.185038 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 19 00:10:31.330874 sshd[5584]: Connection closed by 10.0.0.1 port 33594
Aug 19 00:10:31.331116 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:31.337633 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:33594.service: Deactivated successfully.
Aug 19 00:10:31.340495 systemd[1]: session-23.scope: Deactivated successfully.
Aug 19 00:10:31.342634 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit.
Aug 19 00:10:31.345118 systemd-logind[1514]: Removed session 23.
Aug 19 00:10:35.236731 containerd[1530]: time="2025-08-19T00:10:35.236676589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ddd284856427eb2cc3b752a3f68624978e4ec3b9f60077f1cd92bceb002f4f\" id:\"4ec1f62197bf7aea9c98a49dc39f74d630d92dc2fe2ab5320c394fb8a4001de3\" pid:5609 exited_at:{seconds:1755562235 nanos:236364948}"
Aug 19 00:10:36.355068 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336).
Aug 19 00:10:36.439742 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:10:36.441252 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:10:36.445896 systemd-logind[1514]: New session 24 of user core.
Aug 19 00:10:36.455027 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 19 00:10:36.585505 sshd[5622]: Connection closed by 10.0.0.1 port 39336
Aug 19 00:10:36.585985 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Aug 19 00:10:36.589655 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:39336.service: Deactivated successfully.
Aug 19 00:10:36.591627 systemd[1]: session-24.scope: Deactivated successfully.
Aug 19 00:10:36.594750 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit.
Aug 19 00:10:36.596251 systemd-logind[1514]: Removed session 24.