Aug 12 23:36:02.776667 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:36:02.776690 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Aug 12 21:51:24 -00 2025
Aug 12 23:36:02.776700 kernel: KASLR enabled
Aug 12 23:36:02.776706 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:36:02.776712 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Aug 12 23:36:02.776718 kernel: random: crng init done
Aug 12 23:36:02.776725 kernel: secureboot: Secure boot disabled
Aug 12 23:36:02.776731 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:36:02.776737 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Aug 12 23:36:02.776745 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:36:02.776751 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776767 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776773 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776779 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776787 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776796 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776802 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776809 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776816 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:36:02.776822 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:36:02.776828 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 12 23:36:02.776835 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:36:02.776841 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Aug 12 23:36:02.776847 kernel: Zone ranges:
Aug 12 23:36:02.776853 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:36:02.776861 kernel: DMA32 empty
Aug 12 23:36:02.776867 kernel: Normal empty
Aug 12 23:36:02.776873 kernel: Device empty
Aug 12 23:36:02.776880 kernel: Movable zone start for each node
Aug 12 23:36:02.776886 kernel: Early memory node ranges
Aug 12 23:36:02.776892 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Aug 12 23:36:02.776899 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Aug 12 23:36:02.776905 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Aug 12 23:36:02.776911 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Aug 12 23:36:02.776918 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Aug 12 23:36:02.776924 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Aug 12 23:36:02.776931 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Aug 12 23:36:02.776938 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Aug 12 23:36:02.776945 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Aug 12 23:36:02.776951 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 12 23:36:02.776960 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 12 23:36:02.776967 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 12 23:36:02.776974 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 12 23:36:02.776982 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:36:02.776989 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:36:02.776996 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Aug 12 23:36:02.777002 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:36:02.777009 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:36:02.777016 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:36:02.777023 kernel: psci: Trusted OS migration not required
Aug 12 23:36:02.777030 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:36:02.777038 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:36:02.777045 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 12 23:36:02.777053 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 12 23:36:02.777060 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:36:02.777067 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:36:02.777074 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:36:02.777081 kernel: CPU features: detected: Spectre-v4
Aug 12 23:36:02.777087 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:36:02.777094 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:36:02.777101 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:36:02.777108 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:36:02.777114 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:36:02.777121 kernel: alternatives: applying boot alternatives
Aug 12 23:36:02.777128 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:36:02.777137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:36:02.777143 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:36:02.777150 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:36:02.777157 kernel: Fallback order for Node 0: 0
Aug 12 23:36:02.777164 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 12 23:36:02.777170 kernel: Policy zone: DMA
Aug 12 23:36:02.777177 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:36:02.777183 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 12 23:36:02.777190 kernel: software IO TLB: area num 4.
Aug 12 23:36:02.777197 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 12 23:36:02.777203 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Aug 12 23:36:02.777211 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:36:02.777218 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:36:02.777226 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:36:02.777233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:36:02.777240 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:36:02.777247 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:36:02.777254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:36:02.777260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:36:02.777267 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:36:02.777275 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:36:02.777281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:36:02.777289 kernel: GICv3: 256 SPIs implemented
Aug 12 23:36:02.777296 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:36:02.777303 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:36:02.777328 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:36:02.777337 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 12 23:36:02.777344 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:36:02.777351 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:36:02.777358 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:36:02.777364 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:36:02.777372 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 12 23:36:02.777378 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 12 23:36:02.777385 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:36:02.777394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:36:02.777401 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:36:02.777408 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:36:02.777415 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:36:02.777421 kernel: arm-pv: using stolen time PV
Aug 12 23:36:02.777429 kernel: Console: colour dummy device 80x25
Aug 12 23:36:02.777436 kernel: ACPI: Core revision 20240827
Aug 12 23:36:02.777443 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:36:02.777450 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:36:02.777457 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 12 23:36:02.777465 kernel: landlock: Up and running.
Aug 12 23:36:02.777472 kernel: SELinux: Initializing.
Aug 12 23:36:02.777479 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:36:02.777486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:36:02.777493 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:36:02.777500 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:36:02.777507 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 12 23:36:02.777514 kernel: Remapping and enabling EFI services.
Aug 12 23:36:02.777522 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:36:02.777535 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:36:02.777542 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:36:02.777550 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 12 23:36:02.777558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:36:02.777565 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:36:02.777573 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:36:02.777580 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:36:02.777588 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 12 23:36:02.777597 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:36:02.777604 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:36:02.777611 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:36:02.777618 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:36:02.777626 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 12 23:36:02.777633 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:36:02.777641 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:36:02.777649 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:36:02.777656 kernel: SMP: Total of 4 processors activated.
Aug 12 23:36:02.777665 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:36:02.777672 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:36:02.777680 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:36:02.777688 kernel: CPU features: detected: Common not Private translations
Aug 12 23:36:02.777695 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:36:02.777702 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:36:02.777709 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:36:02.777716 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:36:02.777723 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:36:02.777732 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:36:02.777739 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:36:02.777747 kernel: alternatives: applying system-wide alternatives
Aug 12 23:36:02.777758 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 12 23:36:02.777766 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9080K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Aug 12 23:36:02.777774 kernel: devtmpfs: initialized
Aug 12 23:36:02.777781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:36:02.777788 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:36:02.777796 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:36:02.777804 kernel: 0 pages in range for non-PLT usage
Aug 12 23:36:02.777811 kernel: 508432 pages in range for PLT usage
Aug 12 23:36:02.777818 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:36:02.777826 kernel: SMBIOS 3.0.0 present.
Aug 12 23:36:02.777833 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 12 23:36:02.777840 kernel: DMI: Memory slots populated: 1/1
Aug 12 23:36:02.777848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:36:02.777855 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:36:02.777863 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:36:02.777872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:36:02.777879 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:36:02.777886 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Aug 12 23:36:02.777894 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:36:02.777901 kernel: cpuidle: using governor menu
Aug 12 23:36:02.777908 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:36:02.777916 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:36:02.777923 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:36:02.777931 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:36:02.777939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:36:02.777947 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:36:02.777954 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:36:02.777962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:36:02.777969 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:36:02.777977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:36:02.777984 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:36:02.777992 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:36:02.777999 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:36:02.778006 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:36:02.778015 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:36:02.778022 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:36:02.778030 kernel: ACPI: Interpreter enabled
Aug 12 23:36:02.778037 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:36:02.778044 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:36:02.778051 kernel: ACPI: CPU0 has been hot-added
Aug 12 23:36:02.778059 kernel: ACPI: CPU1 has been hot-added
Aug 12 23:36:02.778066 kernel: ACPI: CPU2 has been hot-added
Aug 12 23:36:02.778073 kernel: ACPI: CPU3 has been hot-added
Aug 12 23:36:02.778081 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:36:02.778089 kernel: printk: legacy console [ttyAMA0] enabled
Aug 12 23:36:02.778096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:36:02.778228 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:36:02.778294 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:36:02.778426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:36:02.778490 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:36:02.778556 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:36:02.778566 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:36:02.778573 kernel: PCI host bridge to bus 0000:00
Aug 12 23:36:02.778644 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:36:02.778704 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:36:02.778772 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:36:02.778828 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:36:02.778910 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 12 23:36:02.778987 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 12 23:36:02.779051 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 12 23:36:02.779114 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 12 23:36:02.779178 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:36:02.779242 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 12 23:36:02.779325 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 12 23:36:02.779395 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 12 23:36:02.779451 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:36:02.779505 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:36:02.779558 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:36:02.779568 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:36:02.779575 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:36:02.779583 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:36:02.779592 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:36:02.779600 kernel: iommu: Default domain type: Translated
Aug 12 23:36:02.779608 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:36:02.779615 kernel: efivars: Registered efivars operations
Aug 12 23:36:02.779623 kernel: vgaarb: loaded
Aug 12 23:36:02.779630 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:36:02.779638 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:36:02.779646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:36:02.779653 kernel: pnp: PnP ACPI init
Aug 12 23:36:02.779722 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:36:02.779732 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:36:02.779739 kernel: NET: Registered PF_INET protocol family
Aug 12 23:36:02.779746 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:36:02.779762 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:36:02.779771 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:36:02.779779 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:36:02.779786 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:36:02.779795 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:36:02.779803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:36:02.779811 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:36:02.779819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:36:02.779826 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:36:02.779833 kernel: kvm [1]: HYP mode not available
Aug 12 23:36:02.779841 kernel: Initialise system trusted keyrings
Aug 12 23:36:02.779848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:36:02.779855 kernel: Key type asymmetric registered
Aug 12 23:36:02.779863 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:36:02.779871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:36:02.779878 kernel: io scheduler mq-deadline registered
Aug 12 23:36:02.779886 kernel: io scheduler kyber registered
Aug 12 23:36:02.779893 kernel: io scheduler bfq registered
Aug 12 23:36:02.779901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:36:02.779909 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:36:02.779917 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:36:02.779983 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:36:02.779993 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:36:02.780002 kernel: thunder_xcv, ver 1.0
Aug 12 23:36:02.780009 kernel: thunder_bgx, ver 1.0
Aug 12 23:36:02.780017 kernel: nicpf, ver 1.0
Aug 12 23:36:02.780024 kernel: nicvf, ver 1.0
Aug 12 23:36:02.780099 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:36:02.780157 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:36:02 UTC (1755041762)
Aug 12 23:36:02.780167 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:36:02.780174 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 12 23:36:02.780183 kernel: watchdog: NMI not fully supported
Aug 12 23:36:02.780190 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:36:02.780197 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:36:02.780205 kernel: Segment Routing with IPv6
Aug 12 23:36:02.780212 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:36:02.780219 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:36:02.780226 kernel: Key type dns_resolver registered
Aug 12 23:36:02.780233 kernel: registered taskstats version 1
Aug 12 23:36:02.780241 kernel: Loading compiled-in X.509 certificates
Aug 12 23:36:02.780250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: e74bfacfa68399ed7282bf533dd5901fdb84b882'
Aug 12 23:36:02.780257 kernel: Demotion targets for Node 0: null
Aug 12 23:36:02.780265 kernel: Key type .fscrypt registered
Aug 12 23:36:02.780272 kernel: Key type fscrypt-provisioning registered
Aug 12 23:36:02.780280 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:36:02.780287 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:36:02.780295 kernel: ima: No architecture policies found
Aug 12 23:36:02.780302 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:36:02.780326 kernel: clk: Disabling unused clocks
Aug 12 23:36:02.780336 kernel: PM: genpd: Disabling unused power domains
Aug 12 23:36:02.780343 kernel: Warning: unable to open an initial console.
Aug 12 23:36:02.780351 kernel: Freeing unused kernel memory: 39488K
Aug 12 23:36:02.780358 kernel: Run /init as init process
Aug 12 23:36:02.780365 kernel: with arguments:
Aug 12 23:36:02.780372 kernel: /init
Aug 12 23:36:02.780379 kernel: with environment:
Aug 12 23:36:02.780386 kernel: HOME=/
Aug 12 23:36:02.780393 kernel: TERM=linux
Aug 12 23:36:02.780402 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:36:02.780410 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:36:02.780421 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:36:02.780429 systemd[1]: Detected virtualization kvm.
Aug 12 23:36:02.780437 systemd[1]: Detected architecture arm64.
Aug 12 23:36:02.780444 systemd[1]: Running in initrd.
Aug 12 23:36:02.780452 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:36:02.780462 systemd[1]: Hostname set to .
Aug 12 23:36:02.780470 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:36:02.780478 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:36:02.780486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:36:02.780494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:36:02.780503 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:36:02.780511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:36:02.780519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:36:02.780530 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:36:02.780539 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:36:02.780547 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:36:02.780555 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:36:02.780563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:36:02.780571 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:36:02.780579 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:36:02.780588 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:36:02.780596 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:36:02.780604 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:36:02.780612 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:36:02.780620 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:36:02.780628 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:36:02.780637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:36:02.780645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:36:02.780655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:36:02.780663 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:36:02.780671 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:36:02.780679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:36:02.780687 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:36:02.780695 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 12 23:36:02.780703 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:36:02.780711 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:36:02.780718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:36:02.780728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:36:02.780736 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:36:02.780744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:36:02.780752 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:36:02.780769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:36:02.780795 systemd-journald[243]: Collecting audit messages is disabled.
Aug 12 23:36:02.780815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:36:02.780824 systemd-journald[243]: Journal started
Aug 12 23:36:02.780844 systemd-journald[243]: Runtime Journal (/run/log/journal/a03491f33da244f9869999e8a7bbf3aa) is 6M, max 48.5M, 42.4M free.
Aug 12 23:36:02.773537 systemd-modules-load[244]: Inserted module 'overlay'
Aug 12 23:36:02.784972 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:36:02.787336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:36:02.788683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:36:02.791860 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:36:02.791585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:36:02.794055 systemd-modules-load[244]: Inserted module 'br_netfilter'
Aug 12 23:36:02.794720 kernel: Bridge firewalling registered
Aug 12 23:36:02.807931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:36:02.809169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:36:02.811417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:36:02.813366 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 12 23:36:02.815827 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:36:02.819911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:36:02.823137 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:36:02.825390 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:36:02.827741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:36:02.831421 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:36:02.851713 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:36:02.866655 systemd-resolved[289]: Positive Trust Anchors:
Aug 12 23:36:02.866672 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:36:02.866707 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:36:02.871484 systemd-resolved[289]: Defaulting to hostname 'linux'.
Aug 12 23:36:02.872431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:36:02.874248 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:36:02.931343 kernel: SCSI subsystem initialized
Aug 12 23:36:02.935327 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:36:02.944334 kernel: iscsi: registered transport (tcp)
Aug 12 23:36:02.958331 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:36:02.958354 kernel: QLogic iSCSI HBA Driver
Aug 12 23:36:02.974919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:36:02.994702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:36:02.995962 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:36:03.041408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:36:03.044286 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:36:03.111339 kernel: raid6: neonx8 gen() 15807 MB/s Aug 12 23:36:03.128328 kernel: raid6: neonx4 gen() 15818 MB/s Aug 12 23:36:03.145328 kernel: raid6: neonx2 gen() 13302 MB/s Aug 12 23:36:03.162338 kernel: raid6: neonx1 gen() 10464 MB/s Aug 12 23:36:03.179350 kernel: raid6: int64x8 gen() 6895 MB/s Aug 12 23:36:03.196339 kernel: raid6: int64x4 gen() 7350 MB/s Aug 12 23:36:03.213328 kernel: raid6: int64x2 gen() 6104 MB/s Aug 12 23:36:03.230327 kernel: raid6: int64x1 gen() 5055 MB/s Aug 12 23:36:03.230341 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s Aug 12 23:36:03.247336 kernel: raid6: .... xor() 12334 MB/s, rmw enabled Aug 12 23:36:03.247354 kernel: raid6: using neon recovery algorithm Aug 12 23:36:03.255263 kernel: xor: measuring software checksum speed Aug 12 23:36:03.256329 kernel: 8regs : 1690 MB/sec Aug 12 23:36:03.256342 kernel: 32regs : 21699 MB/sec Aug 12 23:36:03.257332 kernel: arm64_neon : 25237 MB/sec Aug 12 23:36:03.257346 kernel: xor: using function: arm64_neon (25237 MB/sec) Aug 12 23:36:03.310341 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 12 23:36:03.316572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:36:03.318894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:36:03.349374 systemd-udevd[500]: Using default interface naming scheme 'v255'. Aug 12 23:36:03.353417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:36:03.355070 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 12 23:36:03.386687 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Aug 12 23:36:03.409645 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:36:03.411819 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:36:03.466783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 12 23:36:03.468934 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 12 23:36:03.513108 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 12 23:36:03.513291 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 12 23:36:03.516518 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 12 23:36:03.516552 kernel: GPT:9289727 != 19775487 Aug 12 23:36:03.516563 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 12 23:36:03.516573 kernel: GPT:9289727 != 19775487 Aug 12 23:36:03.516588 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 12 23:36:03.517334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:36:03.519203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:36:03.519338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:36:03.521654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:36:03.523142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:36:03.556880 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 12 23:36:03.558030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 12 23:36:03.560627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:36:03.568514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 12 23:36:03.580620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:36:03.586504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 12 23:36:03.587360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Aug 12 23:36:03.589434 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:36:03.590998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:36:03.592507 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:36:03.594683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:36:03.596144 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:36:03.620019 disk-uuid[594]: Primary Header is updated. Aug 12 23:36:03.620019 disk-uuid[594]: Secondary Entries is updated. Aug 12 23:36:03.620019 disk-uuid[594]: Secondary Header is updated. Aug 12 23:36:03.622525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:36:03.623617 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:36:04.633289 disk-uuid[598]: The operation has completed successfully. Aug 12 23:36:04.634134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:36:04.659922 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:36:04.660028 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:36:04.684004 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:36:04.700386 sh[614]: Success Aug 12 23:36:04.713588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:36:04.714910 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:36:04.714946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 12 23:36:04.725336 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 12 23:36:04.750894 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:36:04.753330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 12 23:36:04.765551 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 12 23:36:04.771349 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 12 23:36:04.771386 kernel: BTRFS: device fsid 7658cdd8-2ee4-4f84-82be-1f808605c89c devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (626) Aug 12 23:36:04.773337 kernel: BTRFS info (device dm-0): first mount of filesystem 7658cdd8-2ee4-4f84-82be-1f808605c89c Aug 12 23:36:04.773357 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:36:04.774417 kernel: BTRFS info (device dm-0): using free-space-tree Aug 12 23:36:04.779487 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:36:04.780569 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:36:04.781487 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:36:04.782283 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:36:04.784677 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 12 23:36:04.808355 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (657) Aug 12 23:36:04.810773 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:36:04.810804 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:36:04.810815 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:36:04.817333 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:36:04.818229 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:36:04.819974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 12 23:36:04.888871 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:36:04.891365 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:36:04.928701 systemd-networkd[802]: lo: Link UP Aug 12 23:36:04.928712 systemd-networkd[802]: lo: Gained carrier Aug 12 23:36:04.929405 systemd-networkd[802]: Enumeration completed Aug 12 23:36:04.929851 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:36:04.929854 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:36:04.930232 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:36:04.930808 systemd-networkd[802]: eth0: Link UP Aug 12 23:36:04.930901 systemd-networkd[802]: eth0: Gained carrier Aug 12 23:36:04.930910 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:36:04.932361 systemd[1]: Reached target network.target - Network. 
Aug 12 23:36:04.950358 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:36:04.955125 ignition[700]: Ignition 2.21.0 Aug 12 23:36:04.955138 ignition[700]: Stage: fetch-offline Aug 12 23:36:04.955182 ignition[700]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:04.955190 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:04.955422 ignition[700]: parsed url from cmdline: "" Aug 12 23:36:04.955426 ignition[700]: no config URL provided Aug 12 23:36:04.955430 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:36:04.955437 ignition[700]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:36:04.955461 ignition[700]: op(1): [started] loading QEMU firmware config module Aug 12 23:36:04.955465 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 12 23:36:04.963194 ignition[700]: op(1): [finished] loading QEMU firmware config module Aug 12 23:36:05.001201 ignition[700]: parsing config with SHA512: 34f86af3aa71b86cf71d2b159de9530b5c70e9c7c3baf3e8dba86545c4757503bb1b6652f929c8244cd8496dd452eaa13da0708f02092bad3375fec03251cedf Aug 12 23:36:05.005114 unknown[700]: fetched base config from "system" Aug 12 23:36:05.005125 unknown[700]: fetched user config from "qemu" Aug 12 23:36:05.005501 ignition[700]: fetch-offline: fetch-offline passed Aug 12 23:36:05.005555 ignition[700]: Ignition finished successfully Aug 12 23:36:05.008384 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:36:05.010597 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 12 23:36:05.012229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 12 23:36:05.045238 ignition[817]: Ignition 2.21.0 Aug 12 23:36:05.045257 ignition[817]: Stage: kargs Aug 12 23:36:05.045417 ignition[817]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:05.045427 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:05.047178 ignition[817]: kargs: kargs passed Aug 12 23:36:05.047231 ignition[817]: Ignition finished successfully Aug 12 23:36:05.050377 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:36:05.052791 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 12 23:36:05.079129 ignition[825]: Ignition 2.21.0 Aug 12 23:36:05.079148 ignition[825]: Stage: disks Aug 12 23:36:05.079277 ignition[825]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:05.079285 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:05.080507 ignition[825]: disks: disks passed Aug 12 23:36:05.080567 ignition[825]: Ignition finished successfully Aug 12 23:36:05.082909 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:36:05.084268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:36:05.085440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:36:05.086890 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:36:05.088250 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:36:05.089507 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:36:05.091485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 12 23:36:05.111979 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 12 23:36:05.156554 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:36:05.158923 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 12 23:36:05.231349 kernel: EXT4-fs (vda9): mounted filesystem d634334e-91a3-4b77-89ab-775bdd78a572 r/w with ordered data mode. Quota mode: none. Aug 12 23:36:05.232153 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:36:05.233216 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:36:05.235107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:36:05.236511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:36:05.237270 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 12 23:36:05.237325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:36:05.237349 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:36:05.245671 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 12 23:36:05.247512 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:36:05.250357 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (843) Aug 12 23:36:05.252379 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:36:05.252406 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:36:05.252422 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:36:05.255850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:36:05.296752 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:36:05.299789 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:36:05.303268 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:36:05.306043 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:36:05.375648 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:36:05.377728 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:36:05.379075 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:36:05.395342 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:36:05.417473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 12 23:36:05.430649 ignition[958]: INFO : Ignition 2.21.0 Aug 12 23:36:05.430649 ignition[958]: INFO : Stage: mount Aug 12 23:36:05.431952 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:05.431952 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:05.431952 ignition[958]: INFO : mount: mount passed Aug 12 23:36:05.431952 ignition[958]: INFO : Ignition finished successfully Aug 12 23:36:05.433956 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:36:05.436183 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:36:05.772013 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:36:05.774405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 12 23:36:05.806797 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (969) Aug 12 23:36:05.806839 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:36:05.806851 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:36:05.807526 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:36:05.810525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 12 23:36:05.841263 ignition[986]: INFO : Ignition 2.21.0 Aug 12 23:36:05.841263 ignition[986]: INFO : Stage: files Aug 12 23:36:05.843124 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:05.843124 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:05.844849 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:36:05.844849 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:36:05.844849 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:36:05.848062 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:36:05.848062 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:36:05.848062 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:36:05.846882 unknown[986]: wrote ssh authorized keys file for user: core Aug 12 23:36:05.852019 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 12 23:36:05.852019 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Aug 12 23:36:05.891163 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Aug 12 23:36:06.049543 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 12 23:36:06.049543 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:36:06.052809 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:36:06.065003 ignition[986]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:36:06.065003 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:36:06.065003 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Aug 12 23:36:06.225530 systemd-networkd[802]: eth0: Gained IPv6LL Aug 12 23:36:06.589208 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 12 23:36:07.626721 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:36:07.626721 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 12 23:36:07.629717 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:36:07.632648 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:36:07.632648 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 12 23:36:07.632648 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 12 23:36:07.635733 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:36:07.635733 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:36:07.635733 ignition[986]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Aug 12 23:36:07.635733 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:36:07.650795 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:36:07.655140 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:36:07.656304 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:36:07.656304 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:36:07.656304 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:36:07.656304 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:36:07.656304 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:36:07.656304 ignition[986]: INFO : files: files passed Aug 12 23:36:07.656304 ignition[986]: INFO : Ignition finished successfully Aug 12 23:36:07.660843 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:36:07.665997 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:36:07.668449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:36:07.687884 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:36:07.689382 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:36:07.689707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 12 23:36:07.692418 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:36:07.692418 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:36:07.697548 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:36:07.695650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:36:07.696700 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:36:07.699106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:36:07.742237 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:36:07.743103 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:36:07.745131 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:36:07.746655 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:36:07.748145 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:36:07.748906 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:36:07.781985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:36:07.784068 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:36:07.799734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:36:07.801448 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:36:07.802351 systemd[1]: Stopped target timers.target - Timer Units. Aug 12 23:36:07.803831 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 12 23:36:07.803944 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:36:07.805787 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:36:07.807184 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:36:07.808395 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:36:07.809727 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:36:07.811146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:36:07.812605 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:36:07.814000 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:36:07.815302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:36:07.816743 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:36:07.818154 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:36:07.819391 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:36:07.820502 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:36:07.820624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:36:07.822347 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:36:07.823656 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:36:07.825128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:36:07.825235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:36:07.826679 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:36:07.826799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:36:07.828872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 12 23:36:07.828992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:36:07.830383 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:36:07.831500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:36:07.832423 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:36:07.833779 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:36:07.834893 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:36:07.836142 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:36:07.836222 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:36:07.837737 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:36:07.837818 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:36:07.838972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:36:07.839080 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:36:07.840281 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:36:07.840397 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:36:07.842373 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:36:07.843153 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:36:07.843262 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:36:07.845381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:36:07.846830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:36:07.846950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:36:07.848238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Aug 12 23:36:07.848346 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:36:07.852887 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:36:07.856828 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:36:07.865682 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:36:07.871363 ignition[1041]: INFO : Ignition 2.21.0 Aug 12 23:36:07.871363 ignition[1041]: INFO : Stage: umount Aug 12 23:36:07.872646 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:36:07.872646 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:36:07.874148 ignition[1041]: INFO : umount: umount passed Aug 12 23:36:07.874148 ignition[1041]: INFO : Ignition finished successfully Aug 12 23:36:07.875538 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:36:07.876280 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:36:07.879576 systemd[1]: Stopped target network.target - Network. Aug 12 23:36:07.880236 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:36:07.880289 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:36:07.881603 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:36:07.881644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:36:07.882920 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:36:07.882962 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:36:07.884107 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:36:07.884141 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:36:07.885552 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:36:07.886730 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Aug 12 23:36:07.895413 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:36:07.896183 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:36:07.898772 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:36:07.899010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:36:07.899049 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:36:07.901800 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:36:07.905778 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:36:07.905879 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:36:07.908950 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:36:07.909093 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 12 23:36:07.910547 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:36:07.910577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:36:07.912622 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:36:07.914016 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:36:07.914071 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:36:07.915483 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:36:07.915520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:36:07.918023 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:36:07.918064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:36:07.925457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 12 23:36:07.931042 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:36:07.941946 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:36:07.942284 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:36:07.943548 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:36:07.943633 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:36:07.945633 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:36:07.945717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:36:07.946777 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:36:07.946899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:36:07.948550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:36:07.948607 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:36:07.949410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:36:07.949439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:36:07.950717 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:36:07.950762 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:36:07.952885 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:36:07.952928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:36:07.954892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:36:07.954939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:36:07.957751 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:36:07.959006 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 12 23:36:07.959057 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:36:07.961049 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:36:07.961086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:36:07.963354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:36:07.963393 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:36:07.971115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:36:07.971231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:36:07.972901 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:36:07.974808 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:36:07.982924 systemd[1]: Switching root.
Aug 12 23:36:08.014194 systemd-journald[243]: Journal stopped
Aug 12 23:36:08.775913 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:36:08.775964 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:36:08.775976 kernel: SELinux: policy capability open_perms=1
Aug 12 23:36:08.775988 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:36:08.775997 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:36:08.776007 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:36:08.776019 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:36:08.776028 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:36:08.776037 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:36:08.776046 kernel: SELinux: policy capability userspace_initial_context=0
Aug 12 23:36:08.776055 kernel: audit: type=1403 audit(1755041768.187:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:36:08.776068 systemd[1]: Successfully loaded SELinux policy in 48.125ms.
Aug 12 23:36:08.776082 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.473ms.
Aug 12 23:36:08.776093 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:36:08.776105 systemd[1]: Detected virtualization kvm.
Aug 12 23:36:08.776115 systemd[1]: Detected architecture arm64.
Aug 12 23:36:08.776124 systemd[1]: Detected first boot.
Aug 12 23:36:08.776134 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:36:08.776144 zram_generator::config[1087]: No configuration found.
Aug 12 23:36:08.776157 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:36:08.776166 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:36:08.776176 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:36:08.776187 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:36:08.776198 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:36:08.776208 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:36:08.776222 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:36:08.776232 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:36:08.776242 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:36:08.776251 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:36:08.776261 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:36:08.776271 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:36:08.776282 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:36:08.776292 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:36:08.776301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:36:08.776350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:36:08.776363 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:36:08.776373 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:36:08.776383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:36:08.776393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:36:08.776403 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 12 23:36:08.776416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:36:08.776426 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:36:08.776436 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:36:08.776446 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:36:08.776456 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:36:08.776465 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:36:08.776475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:36:08.776485 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:36:08.776496 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:36:08.776506 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:36:08.776515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:36:08.776525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:36:08.776535 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:36:08.776544 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:36:08.776554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:36:08.776564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:36:08.776574 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:36:08.776585 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:36:08.776595 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:36:08.776605 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:36:08.776615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:36:08.776625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:36:08.776635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:36:08.776645 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:36:08.776655 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:36:08.776664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:36:08.776676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:36:08.776688 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:36:08.776698 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:36:08.776708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:36:08.776718 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:36:08.776728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:36:08.776742 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:36:08.776753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:36:08.776764 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:36:08.776775 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:36:08.776785 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:36:08.776794 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:36:08.776804 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:36:08.776814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:36:08.776824 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:36:08.776834 kernel: loop: module loaded
Aug 12 23:36:08.776844 kernel: fuse: init (API version 7.41)
Aug 12 23:36:08.776854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:36:08.776864 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:36:08.776873 kernel: ACPI: bus type drm_connector registered
Aug 12 23:36:08.776882 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:36:08.776892 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:36:08.776902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:36:08.776914 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:36:08.776923 systemd[1]: Stopped verity-setup.service.
Aug 12 23:36:08.776933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:36:08.776943 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:36:08.776953 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:36:08.776962 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:36:08.776972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:36:08.776984 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:36:08.776994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:36:08.777004 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:36:08.777014 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:36:08.777025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:36:08.777057 systemd-journald[1155]: Collecting audit messages is disabled.
Aug 12 23:36:08.777081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:36:08.777091 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:36:08.777101 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:36:08.777113 systemd-journald[1155]: Journal started
Aug 12 23:36:08.777133 systemd-journald[1155]: Runtime Journal (/run/log/journal/a03491f33da244f9869999e8a7bbf3aa) is 6M, max 48.5M, 42.4M free.
Aug 12 23:36:08.580172 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:36:08.589164 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:36:08.589554 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:36:08.778980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:36:08.780557 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:36:08.781277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:36:08.781485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:36:08.782578 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:36:08.782759 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:36:08.783793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:36:08.783944 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:36:08.785129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:36:08.786272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:36:08.787489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:36:08.788657 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:36:08.800934 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:36:08.803120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:36:08.804886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:36:08.805709 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:36:08.805745 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:36:08.807374 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:36:08.813111 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:36:08.814048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:36:08.815117 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:36:08.816770 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:36:08.817748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:36:08.818658 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:36:08.821492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:36:08.824372 systemd-journald[1155]: Time spent on flushing to /var/log/journal/a03491f33da244f9869999e8a7bbf3aa is 10.966ms for 879 entries.
Aug 12 23:36:08.824372 systemd-journald[1155]: System Journal (/var/log/journal/a03491f33da244f9869999e8a7bbf3aa) is 8M, max 195.6M, 187.6M free.
Aug 12 23:36:08.845204 systemd-journald[1155]: Received client request to flush runtime journal.
Aug 12 23:36:08.822438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:36:08.826593 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:36:08.828635 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:36:08.832091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:36:08.833422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:36:08.834446 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:36:08.840387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:36:08.843715 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:36:08.849473 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:36:08.850688 kernel: loop0: detected capacity change from 0 to 207008
Aug 12 23:36:08.852530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:36:08.863354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:36:08.868369 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:36:08.878042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:36:08.882510 kernel: loop1: detected capacity change from 0 to 138376
Aug 12 23:36:08.881345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:36:08.886511 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:36:08.889397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:36:08.903418 kernel: loop2: detected capacity change from 0 to 107312
Aug 12 23:36:08.915194 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 12 23:36:08.915210 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 12 23:36:08.919288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:36:08.924340 kernel: loop3: detected capacity change from 0 to 207008
Aug 12 23:36:08.931337 kernel: loop4: detected capacity change from 0 to 138376
Aug 12 23:36:08.939330 kernel: loop5: detected capacity change from 0 to 107312
Aug 12 23:36:08.944182 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:36:08.944577 (sd-merge)[1225]: Merged extensions into '/usr'.
Aug 12 23:36:08.948552 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:36:08.948572 systemd[1]: Reloading...
Aug 12 23:36:08.996344 zram_generator::config[1251]: No configuration found.
Aug 12 23:36:09.080184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:36:09.093750 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:36:09.142798 systemd[1]: Reloading finished in 193 ms.
Aug 12 23:36:09.172839 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:36:09.174008 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:36:09.189543 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:36:09.191223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:36:09.203551 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:36:09.203569 systemd[1]: Reloading...
Aug 12 23:36:09.206969 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 12 23:36:09.207163 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 12 23:36:09.207429 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:36:09.207626 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:36:09.208226 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:36:09.208500 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Aug 12 23:36:09.208549 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Aug 12 23:36:09.211152 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:36:09.211166 systemd-tmpfiles[1286]: Skipping /boot
Aug 12 23:36:09.219957 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:36:09.219972 systemd-tmpfiles[1286]: Skipping /boot
Aug 12 23:36:09.255492 zram_generator::config[1313]: No configuration found.
Aug 12 23:36:09.319686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:36:09.381449 systemd[1]: Reloading finished in 177 ms.
Aug 12 23:36:09.403341 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:36:09.408722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:36:09.418337 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:36:09.420457 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:36:09.422384 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:36:09.425077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:36:09.428451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:36:09.431486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:36:09.445703 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:36:09.450643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:36:09.452076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:36:09.454826 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:36:09.459524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:36:09.460372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:36:09.460475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:36:09.461305 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:36:09.463729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:36:09.464014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:36:09.466886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:36:09.467075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:36:09.475560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:36:09.478944 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Aug 12 23:36:09.480599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:36:09.484350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:36:09.486238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:36:09.486372 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:36:09.493180 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:36:09.496585 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:36:09.498213 augenrules[1385]: No rules
Aug 12 23:36:09.500354 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:36:09.501950 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:36:09.505642 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:36:09.507010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:36:09.510529 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:36:09.511872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:36:09.514362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:36:09.516599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:36:09.516762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:36:09.518437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:36:09.528644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:36:09.530269 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:36:09.544256 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:36:09.550532 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:36:09.552396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:36:09.555578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:36:09.560872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:36:09.567543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:36:09.570635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:36:09.572073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:36:09.572122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:36:09.575086 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:36:09.589882 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:36:09.591433 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:36:09.598249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:36:09.598786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:36:09.600303 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:36:09.600453 augenrules[1430]: /sbin/augenrules: No change
Aug 12 23:36:09.602436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:36:09.603822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:36:09.603964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:36:09.606871 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:36:09.607025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:36:09.616035 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 12 23:36:09.617930 augenrules[1464]: No rules
Aug 12 23:36:09.621677 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:36:09.621941 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:36:09.632572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:36:09.632636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:36:09.637472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:36:09.639559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:36:09.671649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:36:09.729400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:36:09.733118 systemd-resolved[1352]: Positive Trust Anchors:
Aug 12 23:36:09.733136 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:36:09.733168 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:36:09.745500 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Aug 12 23:36:09.750294 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:36:09.751221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:36:09.760958 systemd-networkd[1439]: lo: Link UP
Aug 12 23:36:09.761226 systemd-networkd[1439]: lo: Gained carrier
Aug 12 23:36:09.762141 systemd-networkd[1439]: Enumeration completed
Aug 12 23:36:09.762300 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:36:09.763166 systemd[1]: Reached target network.target - Network.
Aug 12 23:36:09.771954 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:36:09.772386 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:36:09.773054 systemd-networkd[1439]: eth0: Link UP
Aug 12 23:36:09.773254 systemd-networkd[1439]: eth0: Gained carrier
Aug 12 23:36:09.773338 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:36:09.775483 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:36:09.777499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:36:09.781505 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:36:09.782478 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:36:09.801361 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:36:09.802093 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Aug 12 23:36:09.805670 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:36:09.808422 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:36:09.808471 systemd-timesyncd[1440]: Initial clock synchronization to Tue 2025-08-12 23:36:09.530086 UTC.
Aug 12 23:36:09.826450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:36:09.827494 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:36:09.828366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:36:09.829226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:36:09.830275 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:36:09.831169 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:36:09.832086 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:36:09.832975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:36:09.833009 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:36:09.833652 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:36:09.835201 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:36:09.837231 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:36:09.840124 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:36:09.841228 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:36:09.842189 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:36:09.845167 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:36:09.846514 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:36:09.847870 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:36:09.848720 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:36:09.849403 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:36:09.850074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:36:09.850104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:36:09.851006 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:36:09.852731 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:36:09.854276 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:36:09.855935 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:36:09.858508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:36:09.859250 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:36:09.860221 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:36:09.861928 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:36:09.866467 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:36:09.867027 jq[1504]: false
Aug 12 23:36:09.868270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:36:09.871181 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:36:09.872963 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:36:09.875459 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:36:09.877026 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:36:09.878491 extend-filesystems[1505]: Found /dev/vda6
Aug 12 23:36:09.878894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:36:09.883349 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:36:09.885691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:36:09.888966 extend-filesystems[1505]: Found /dev/vda9
Aug 12 23:36:09.890255 extend-filesystems[1505]: Checking size of /dev/vda9
Aug 12 23:36:09.891454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:36:09.891899 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:36:09.892432 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:36:09.894167 jq[1520]: true
Aug 12 23:36:09.895583 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:36:09.895877 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:36:09.908380 jq[1529]: true
Aug 12 23:36:09.908678 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:36:09.911550 extend-filesystems[1505]: Resized partition /dev/vda9
Aug 12 23:36:09.922745 tar[1528]: linux-arm64/LICENSE
Aug 12 23:36:09.922745 tar[1528]: linux-arm64/helm
Aug 12 23:36:09.922974 extend-filesystems[1542]: resize2fs 1.47.2 (1-Jan-2025)
Aug 12 23:36:09.926345 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:36:09.970919 update_engine[1517]: I20250812 23:36:09.968489 1517 main.cc:92] Flatcar Update Engine starting
Aug 12 23:36:09.988412 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:36:09.976037 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:36:09.975183 dbus-daemon[1502]: [system] SELinux support is enabled
Aug 12 23:36:09.979139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:36:09.979167 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:36:09.980275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:36:09.980291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:36:09.990500 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:36:09.991608 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:36:09.991608 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:36:09.991608 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 12 23:36:09.996473 update_engine[1517]: I20250812 23:36:09.991463 1517 update_check_scheduler.cc:74] Next update check in 9m37s
Aug 12 23:36:09.993267 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 12 23:36:09.996683 extend-filesystems[1505]: Resized filesystem in /dev/vda9
Aug 12 23:36:09.993567 systemd-logind[1515]: New seat seat0.
Aug 12 23:36:10.001637 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:36:10.002628 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:36:10.003717 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:36:10.005355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:36:10.006469 bash[1561]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:36:10.014630 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:36:10.022211 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 12 23:36:10.062716 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:36:10.148375 containerd[1531]: time="2025-08-12T23:36:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 12 23:36:10.149944 containerd[1531]: time="2025-08-12T23:36:10.149895271Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157684247Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.378µs"
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157718493Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157738068Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157862196Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157876250Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157897099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157937329Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.157946943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.158136127Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.158149563Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.158159022Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158330 containerd[1531]: time="2025-08-12T23:36:10.158166589Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158556 containerd[1531]: time="2025-08-12T23:36:10.158230526Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158737 containerd[1531]: time="2025-08-12T23:36:10.158713485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158821 containerd[1531]: time="2025-08-12T23:36:10.158807730Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:36:10.158865 containerd[1531]: time="2025-08-12T23:36:10.158854485Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 12 23:36:10.158944 containerd[1531]: time="2025-08-12T23:36:10.158931163Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 12 23:36:10.159223 containerd[1531]: time="2025-08-12T23:36:10.159205171Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 12 23:36:10.159363 containerd[1531]: time="2025-08-12T23:36:10.159344742Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:36:10.162208 containerd[1531]: time="2025-08-12T23:36:10.162183622Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 12 23:36:10.162349 containerd[1531]: time="2025-08-12T23:36:10.162332962Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 12 23:36:10.162431 containerd[1531]: time="2025-08-12T23:36:10.162418712Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 12 23:36:10.162480 containerd[1531]: time="2025-08-12T23:36:10.162469638Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 12 23:36:10.162525 containerd[1531]: time="2025-08-12T23:36:10.162514115Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 12 23:36:10.162587 containerd[1531]: time="2025-08-12T23:36:10.162574693Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 12 23:36:10.162638 containerd[1531]: time="2025-08-12T23:36:10.162626043Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 12 23:36:10.162692 containerd[1531]: time="2025-08-12T23:36:10.162680404Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 12 23:36:10.162742 containerd[1531]: time="2025-08-12T23:36:10.162730325Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 12 23:36:10.162788 containerd[1531]: time="2025-08-12T23:36:10.162776772Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 12 23:36:10.162833 containerd[1531]: time="2025-08-12T23:36:10.162821443Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 12 23:36:10.162898 containerd[1531]: time="2025-08-12T23:36:10.162884259Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 12 23:36:10.163049 containerd[1531]: time="2025-08-12T23:36:10.163029970Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 12 23:36:10.163112 containerd[1531]: time="2025-08-12T23:36:10.163100393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 12 23:36:10.163171 containerd[1531]: time="2025-08-12T23:36:10.163159040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 12 23:36:10.163218 containerd[1531]: time="2025-08-12T23:36:10.163207069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 12 23:36:10.163263 containerd[1531]: time="2025-08-12T23:36:10.163252049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 12 23:36:10.163329 containerd[1531]: time="2025-08-12T23:36:10.163308109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 12 23:36:10.163393 containerd[1531]: time="2025-08-12T23:36:10.163380385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 12 23:36:10.163444 containerd[1531]: time="2025-08-12T23:36:10.163433665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 12 23:36:10.163505 containerd[1531]: time="2025-08-12T23:36:10.163494590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 12 23:36:10.163553 containerd[1531]: time="2025-08-12T23:36:10.163541925Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 12 23:36:10.163603 containerd[1531]: time="2025-08-12T23:36:10.163591808Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 12 23:36:10.163814 containerd[1531]: time="2025-08-12T23:36:10.163800566Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 12 23:36:10.163877 containerd[1531]: time="2025-08-12T23:36:10.163866240Z" level=info msg="Start snapshots syncer"
Aug 12 23:36:10.163948 containerd[1531]: time="2025-08-12T23:36:10.163937011Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 12 23:36:10.165279 containerd[1531]: time="2025-08-12T23:36:10.165189718Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 12 23:36:10.165435 containerd[1531]: time="2025-08-12T23:36:10.165322494Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 12 23:36:10.165435 containerd[1531]: time="2025-08-12T23:36:10.165424769Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 12 23:36:10.165560 containerd[1531]: time="2025-08-12T23:36:10.165537778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 12 23:36:10.165587 containerd[1531]: time="2025-08-12T23:36:10.165572410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 12 23:36:10.165604 containerd[1531]: time="2025-08-12T23:36:10.165588510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 12 23:36:10.165621 containerd[1531]: time="2025-08-12T23:36:10.165602756Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 12 23:36:10.165637 containerd[1531]: time="2025-08-12T23:36:10.165616270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 12 23:36:10.165637 containerd[1531]: time="2025-08-12T23:36:10.165630092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 12 23:36:10.165676 containerd[1531]: time="2025-08-12T23:36:10.165644338Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 12 23:36:10.165693 containerd[1531]: time="2025-08-12T23:36:10.165672986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 12 23:36:10.165693 containerd[1531]: time="2025-08-12T23:36:10.165688623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 12 23:36:10.165732 containerd[1531]: time="2025-08-12T23:36:10.165703217Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 12 23:36:10.165833 containerd[1531]: time="2025-08-12T23:36:10.165767269Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:36:10.166004 containerd[1531]: time="2025-08-12T23:36:10.165812172Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:36:10.166069 containerd[1531]: time="2025-08-12T23:36:10.166054790Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:36:10.166119 containerd[1531]: time="2025-08-12T23:36:10.166106797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:36:10.166168 containerd[1531]: time="2025-08-12T23:36:10.166156448Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166209458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166225404Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166300459Z" level=info msg="runtime interface created"
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166320999Z" level=info msg="created NRI interface"
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166330883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166343122Z" level=info msg="Connect containerd service"
Aug 12 23:36:10.166878 containerd[1531]: time="2025-08-12T23:36:10.166372890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 12 23:36:10.167201 containerd[1531]: time="2025-08-12T23:36:10.167176922Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:36:10.269457 containerd[1531]: time="2025-08-12T23:36:10.269405323Z" level=info msg="Start subscribing containerd event"
Aug 12 23:36:10.269457 containerd[1531]: time="2025-08-12T23:36:10.269466286Z" level=info msg="Start recovering state"
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269543350Z" level=info msg="Start event monitor"
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269553929Z" level=info msg="Start cni network conf syncer for default"
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269562925Z" level=info msg="Start streaming server"
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269571496Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269577982Z" level=info msg="runtime interface starting up..."
Aug 12 23:36:10.269587 containerd[1531]: time="2025-08-12T23:36:10.269583078Z" level=info msg="starting plugins..."
Aug 12 23:36:10.269676 containerd[1531]: time="2025-08-12T23:36:10.269595356Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 12 23:36:10.270031 containerd[1531]: time="2025-08-12T23:36:10.269990751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 12 23:36:10.270075 containerd[1531]: time="2025-08-12T23:36:10.270056463Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 12 23:36:10.270211 systemd[1]: Started containerd.service - containerd container runtime.
Aug 12 23:36:10.271820 containerd[1531]: time="2025-08-12T23:36:10.270117272Z" level=info msg="containerd successfully booted in 0.122183s"
Aug 12 23:36:10.350349 tar[1528]: linux-arm64/README.md
Aug 12 23:36:10.369421 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 12 23:36:11.665445 systemd-networkd[1439]: eth0: Gained IPv6LL
Aug 12 23:36:11.670262 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 12 23:36:11.672102 systemd[1]: Reached target network-online.target - Network is Online.
Aug 12 23:36:11.674261 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 12 23:36:11.676257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:36:11.677988 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 12 23:36:11.707482 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 12 23:36:11.707728 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 12 23:36:11.710057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 12 23:36:11.711436 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 12 23:36:11.778130 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:36:11.797016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 12 23:36:11.799527 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 12 23:36:11.815244 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:36:11.816112 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 12 23:36:11.818580 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 12 23:36:11.849153 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 12 23:36:11.852883 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 12 23:36:11.855365 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 12 23:36:11.856760 systemd[1]: Reached target getty.target - Login Prompts.
Aug 12 23:36:12.244377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:36:12.245779 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 12 23:36:12.248392 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:36:12.251410 systemd[1]: Startup finished in 2.067s (kernel) + 5.563s (initrd) + 4.121s (userspace) = 11.753s.
Aug 12 23:36:12.662585 kubelet[1634]: E0812 23:36:12.662492 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:36:12.665158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:36:12.665297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:36:12.665815 systemd[1]: kubelet.service: Consumed 819ms CPU time, 257.1M memory peak.
Aug 12 23:36:14.888693 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 12 23:36:14.890197 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:59302.service - OpenSSH per-connection server daemon (10.0.0.1:59302).
Aug 12 23:36:14.966638 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 59302 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:36:14.968563 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:36:14.979861 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 12 23:36:14.980998 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 12 23:36:14.986697 systemd-logind[1515]: New session 1 of user core.
Aug 12 23:36:15.001354 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 12 23:36:15.003922 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 12 23:36:15.017489 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:36:15.019668 systemd-logind[1515]: New session c1 of user core.
Aug 12 23:36:15.120391 systemd[1651]: Queued start job for default target default.target.
Aug 12 23:36:15.128177 systemd[1651]: Created slice app.slice - User Application Slice.
Aug 12 23:36:15.128209 systemd[1651]: Reached target paths.target - Paths.
Aug 12 23:36:15.128244 systemd[1651]: Reached target timers.target - Timers.
Aug 12 23:36:15.129474 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 12 23:36:15.138975 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 12 23:36:15.139212 systemd[1651]: Reached target sockets.target - Sockets.
Aug 12 23:36:15.139349 systemd[1651]: Reached target basic.target - Basic System.
Aug 12 23:36:15.139454 systemd[1651]: Reached target default.target - Main User Target.
Aug 12 23:36:15.139505 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 12 23:36:15.139549 systemd[1651]: Startup finished in 114ms.
Aug 12 23:36:15.140685 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 12 23:36:15.203602 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:59310.service - OpenSSH per-connection server daemon (10.0.0.1:59310).
Aug 12 23:36:15.255025 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 59310 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:36:15.256233 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:36:15.260504 systemd-logind[1515]: New session 2 of user core.
Aug 12 23:36:15.267439 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 12 23:36:15.320821 sshd[1664]: Connection closed by 10.0.0.1 port 59310
Aug 12 23:36:15.321374 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Aug 12 23:36:15.336223 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:59310.service: Deactivated successfully.
Aug 12 23:36:15.338006 systemd[1]: session-2.scope: Deactivated successfully.
Aug 12 23:36:15.340386 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit.
Aug 12 23:36:15.341899 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:59314.service - OpenSSH per-connection server daemon (10.0.0.1:59314).
Aug 12 23:36:15.342834 systemd-logind[1515]: Removed session 2.
Aug 12 23:36:15.406617 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 59314 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:36:15.407840 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:36:15.411631 systemd-logind[1515]: New session 3 of user core.
Aug 12 23:36:15.427486 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 12 23:36:15.476069 sshd[1672]: Connection closed by 10.0.0.1 port 59314
Aug 12 23:36:15.475926 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Aug 12 23:36:15.485348 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:59314.service: Deactivated successfully.
Aug 12 23:36:15.486991 systemd[1]: session-3.scope: Deactivated successfully.
Aug 12 23:36:15.487617 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit.
Aug 12 23:36:15.489990 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:59324.service - OpenSSH per-connection server daemon (10.0.0.1:59324). Aug 12 23:36:15.490640 systemd-logind[1515]: Removed session 3. Aug 12 23:36:15.550989 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 59324 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:36:15.552273 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:36:15.556330 systemd-logind[1515]: New session 4 of user core. Aug 12 23:36:15.567461 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:36:15.617614 sshd[1680]: Connection closed by 10.0.0.1 port 59324 Aug 12 23:36:15.617926 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Aug 12 23:36:15.630408 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:59324.service: Deactivated successfully. Aug 12 23:36:15.631901 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:36:15.632710 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:36:15.635068 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Aug 12 23:36:15.635726 systemd-logind[1515]: Removed session 4. Aug 12 23:36:15.688759 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:36:15.689985 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:36:15.694983 systemd-logind[1515]: New session 5 of user core. Aug 12 23:36:15.702478 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:36:15.783665 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:36:15.783939 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:36:15.800908 sudo[1689]: pam_unix(sudo:session): session closed for user root Aug 12 23:36:15.803008 sshd[1688]: Connection closed by 10.0.0.1 port 59334 Aug 12 23:36:15.803664 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Aug 12 23:36:15.814920 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:59334.service: Deactivated successfully. Aug 12 23:36:15.817166 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:36:15.820234 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:36:15.825490 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:59346.service - OpenSSH per-connection server daemon (10.0.0.1:59346). Aug 12 23:36:15.826115 systemd-logind[1515]: Removed session 5. Aug 12 23:36:15.892375 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:36:15.894128 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:36:15.898901 systemd-logind[1515]: New session 6 of user core. Aug 12 23:36:15.910498 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:36:15.961011 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:36:15.961286 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:36:16.008375 sudo[1699]: pam_unix(sudo:session): session closed for user root Aug 12 23:36:16.013803 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:36:16.014079 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:36:16.024170 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:36:16.065857 augenrules[1721]: No rules Aug 12 23:36:16.067203 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:36:16.067459 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:36:16.069544 sudo[1698]: pam_unix(sudo:session): session closed for user root Aug 12 23:36:16.071361 sshd[1697]: Connection closed by 10.0.0.1 port 59346 Aug 12 23:36:16.071250 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Aug 12 23:36:16.082525 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:59346.service: Deactivated successfully. Aug 12 23:36:16.085898 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:36:16.086641 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:36:16.089888 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:59360.service - OpenSSH per-connection server daemon (10.0.0.1:59360). Aug 12 23:36:16.090500 systemd-logind[1515]: Removed session 6. Aug 12 23:36:16.147349 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 59360 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:36:16.148589 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:36:16.152852 systemd-logind[1515]: New session 7 of user core. 
Aug 12 23:36:16.163666 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:36:16.212759 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:36:16.213417 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:36:16.574216 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:36:16.586596 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:36:16.855905 dockerd[1753]: time="2025-08-12T23:36:16.855438947Z" level=info msg="Starting up" Aug 12 23:36:16.856401 dockerd[1753]: time="2025-08-12T23:36:16.856369946Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 12 23:36:16.977123 dockerd[1753]: time="2025-08-12T23:36:16.977068925Z" level=info msg="Loading containers: start." Aug 12 23:36:16.984336 kernel: Initializing XFRM netlink socket Aug 12 23:36:17.183042 systemd-networkd[1439]: docker0: Link UP Aug 12 23:36:17.186616 dockerd[1753]: time="2025-08-12T23:36:17.186572871Z" level=info msg="Loading containers: done." Aug 12 23:36:17.197916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3528468762-merged.mount: Deactivated successfully. 
Aug 12 23:36:17.199647 dockerd[1753]: time="2025-08-12T23:36:17.199603262Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:36:17.199706 dockerd[1753]: time="2025-08-12T23:36:17.199693138Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 12 23:36:17.199811 dockerd[1753]: time="2025-08-12T23:36:17.199784986Z" level=info msg="Initializing buildkit" Aug 12 23:36:17.223045 dockerd[1753]: time="2025-08-12T23:36:17.223001389Z" level=info msg="Completed buildkit initialization" Aug 12 23:36:17.229805 dockerd[1753]: time="2025-08-12T23:36:17.229757647Z" level=info msg="Daemon has completed initialization" Aug 12 23:36:17.229910 dockerd[1753]: time="2025-08-12T23:36:17.229833793Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:36:17.229991 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:36:18.054886 containerd[1531]: time="2025-08-12T23:36:18.054831293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 12 23:36:18.967982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084472144.mount: Deactivated successfully. 
Aug 12 23:36:20.523922 containerd[1531]: time="2025-08-12T23:36:20.523860343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:20.524983 containerd[1531]: time="2025-08-12T23:36:20.524936328Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783" Aug 12 23:36:20.526547 containerd[1531]: time="2025-08-12T23:36:20.526486812Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:20.529006 containerd[1531]: time="2025-08-12T23:36:20.528968023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:20.530025 containerd[1531]: time="2025-08-12T23:36:20.529986341Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 2.475108776s" Aug 12 23:36:20.530073 containerd[1531]: time="2025-08-12T23:36:20.530034615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Aug 12 23:36:20.530697 containerd[1531]: time="2025-08-12T23:36:20.530657504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 12 23:36:22.083815 containerd[1531]: time="2025-08-12T23:36:22.083704983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:22.085434 containerd[1531]: time="2025-08-12T23:36:22.085363325Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698" Aug 12 23:36:22.086487 containerd[1531]: time="2025-08-12T23:36:22.086401283Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:22.090060 containerd[1531]: time="2025-08-12T23:36:22.089402116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:22.090560 containerd[1531]: time="2025-08-12T23:36:22.090524281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.559698508s" Aug 12 23:36:22.090621 containerd[1531]: time="2025-08-12T23:36:22.090561260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Aug 12 23:36:22.091326 containerd[1531]: time="2025-08-12T23:36:22.091181286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 12 23:36:22.722842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:36:22.724407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:36:22.871728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:36:22.875581 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:36:22.918356 kubelet[2029]: E0812 23:36:22.918281 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:36:22.921291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:36:22.921470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:36:22.921935 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.6M memory peak. Aug 12 23:36:23.914346 containerd[1531]: time="2025-08-12T23:36:23.913687440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:23.914997 containerd[1531]: time="2025-08-12T23:36:23.914947073Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140" Aug 12 23:36:23.916121 containerd[1531]: time="2025-08-12T23:36:23.916086526Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:23.918548 containerd[1531]: time="2025-08-12T23:36:23.918483544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:23.919509 containerd[1531]: time="2025-08-12T23:36:23.919469147Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id 
\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.828247374s" Aug 12 23:36:23.919509 containerd[1531]: time="2025-08-12T23:36:23.919507828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Aug 12 23:36:23.920616 containerd[1531]: time="2025-08-12T23:36:23.920588683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 12 23:36:25.129478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14147688.mount: Deactivated successfully. Aug 12 23:36:25.470575 containerd[1531]: time="2025-08-12T23:36:25.469893364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:25.470575 containerd[1531]: time="2025-08-12T23:36:25.470333131Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407" Aug 12 23:36:25.471468 containerd[1531]: time="2025-08-12T23:36:25.471053773Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:25.473149 containerd[1531]: time="2025-08-12T23:36:25.473114856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:25.473643 containerd[1531]: time="2025-08-12T23:36:25.473615814Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.552994242s" Aug 12 23:36:25.473693 containerd[1531]: time="2025-08-12T23:36:25.473647027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Aug 12 23:36:25.474134 containerd[1531]: time="2025-08-12T23:36:25.474046584Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:36:26.111044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195052932.mount: Deactivated successfully. Aug 12 23:36:26.998162 containerd[1531]: time="2025-08-12T23:36:26.998103757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:26.999085 containerd[1531]: time="2025-08-12T23:36:26.999044120Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 12 23:36:27.000570 containerd[1531]: time="2025-08-12T23:36:27.000522585Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:27.003119 containerd[1531]: time="2025-08-12T23:36:27.003058115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:27.004677 containerd[1531]: time="2025-08-12T23:36:27.004639613Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.530495159s" Aug 12 23:36:27.004677 containerd[1531]: time="2025-08-12T23:36:27.004678114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:36:27.005171 containerd[1531]: time="2025-08-12T23:36:27.005148140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:36:27.552836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637206739.mount: Deactivated successfully. Aug 12 23:36:27.559063 containerd[1531]: time="2025-08-12T23:36:27.558999197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:36:27.560552 containerd[1531]: time="2025-08-12T23:36:27.560514653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 12 23:36:27.561870 containerd[1531]: time="2025-08-12T23:36:27.561833379Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:36:27.564422 containerd[1531]: time="2025-08-12T23:36:27.564361703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:36:27.564817 containerd[1531]: time="2025-08-12T23:36:27.564781270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.603716ms" Aug 12 23:36:27.564858 containerd[1531]: time="2025-08-12T23:36:27.564817539Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:36:27.565344 containerd[1531]: time="2025-08-12T23:36:27.565293184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 12 23:36:28.231034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464135984.mount: Deactivated successfully. Aug 12 23:36:30.459351 containerd[1531]: time="2025-08-12T23:36:30.459290360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:30.460108 containerd[1531]: time="2025-08-12T23:36:30.460046216Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Aug 12 23:36:30.461212 containerd[1531]: time="2025-08-12T23:36:30.461071862Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:30.464458 containerd[1531]: time="2025-08-12T23:36:30.464399115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:36:30.465488 containerd[1531]: time="2025-08-12T23:36:30.465445390Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.900098549s" Aug 12 23:36:30.465617 containerd[1531]: time="2025-08-12T23:36:30.465598142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 12 23:36:32.972924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:36:32.974951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:36:33.115416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:36:33.118524 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:36:33.154014 kubelet[2191]: E0812 23:36:33.153957 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:36:33.156872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:36:33.157012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:36:33.158401 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.5M memory peak. Aug 12 23:36:38.019403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:36:38.019561 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.5M memory peak. Aug 12 23:36:38.022152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:36:38.042794 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)... Aug 12 23:36:38.042811 systemd[1]: Reloading... Aug 12 23:36:38.125496 zram_generator::config[2252]: No configuration found. 
Aug 12 23:36:38.234701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:36:38.322574 systemd[1]: Reloading finished in 279 ms. Aug 12 23:36:38.376996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 12 23:36:38.377264 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 12 23:36:38.379375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:36:38.379432 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.9M memory peak. Aug 12 23:36:38.381423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:36:38.520206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:36:38.524231 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:36:38.567931 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:36:38.567931 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:36:38.567931 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:36:38.567931 kubelet[2294]: I0812 23:36:38.564222 2294 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:36:39.347617 kubelet[2294]: I0812 23:36:39.347568 2294 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:36:39.347617 kubelet[2294]: I0812 23:36:39.347603 2294 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:36:39.347901 kubelet[2294]: I0812 23:36:39.347868 2294 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:36:39.379984 kubelet[2294]: E0812 23:36:39.379940 2294 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:36:39.381392 kubelet[2294]: I0812 23:36:39.381284 2294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:36:39.388788 kubelet[2294]: I0812 23:36:39.388764 2294 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:36:39.391433 kubelet[2294]: I0812 23:36:39.391411 2294 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:36:39.392576 kubelet[2294]: I0812 23:36:39.392533 2294 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:36:39.392732 kubelet[2294]: I0812 23:36:39.392570 2294 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:36:39.392812 kubelet[2294]: I0812 23:36:39.392800 2294 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 12 23:36:39.392812 kubelet[2294]: I0812 23:36:39.392809 2294 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:36:39.393018 kubelet[2294]: I0812 23:36:39.392994 2294 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:36:39.395897 kubelet[2294]: I0812 23:36:39.395874 2294 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:36:39.395928 kubelet[2294]: I0812 23:36:39.395900 2294 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:36:39.395928 kubelet[2294]: I0812 23:36:39.395923 2294 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:36:39.395965 kubelet[2294]: I0812 23:36:39.395932 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:36:39.398832 kubelet[2294]: W0812 23:36:39.398776 2294 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:36:39.398975 kubelet[2294]: E0812 23:36:39.398955 2294 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:36:39.399077 kubelet[2294]: W0812 23:36:39.398993 2294 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:36:39.399173 kubelet[2294]: E0812 23:36:39.399157 2294 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:36:39.400946 kubelet[2294]: I0812 23:36:39.400928 2294 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 12 23:36:39.401775 kubelet[2294]: I0812 23:36:39.401682 2294 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:36:39.401903 kubelet[2294]: W0812 23:36:39.401891 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:36:39.403080 kubelet[2294]: I0812 23:36:39.403059 2294 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:36:39.403785 kubelet[2294]: I0812 23:36:39.403768 2294 server.go:1287] "Started kubelet"
Aug 12 23:36:39.403930 kubelet[2294]: I0812 23:36:39.403907 2294 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:36:39.404932 kubelet[2294]: I0812 23:36:39.404917 2294 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:36:39.405106 kubelet[2294]: I0812 23:36:39.405068 2294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:36:39.406170 kubelet[2294]: I0812 23:36:39.404960 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:36:39.406253 kubelet[2294]: I0812 23:36:39.406240 2294 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:36:39.406578 kubelet[2294]: E0812 23:36:39.406527 2294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:36:39.406834 kubelet[2294]: I0812 23:36:39.406747 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:36:39.406968 kubelet[2294]: I0812 23:36:39.406937 2294 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:36:39.406968 kubelet[2294]: I0812 23:36:39.406942 2294 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:36:39.407124 kubelet[2294]: I0812 23:36:39.407037 2294 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:36:39.407722 kubelet[2294]: I0812 23:36:39.407144 2294 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:36:39.407722 kubelet[2294]: I0812 23:36:39.407231 2294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:36:39.407722 kubelet[2294]: W0812 23:36:39.407329 2294 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:36:39.407722 kubelet[2294]: E0812 23:36:39.407366 2294 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:36:39.407722 kubelet[2294]: E0812 23:36:39.407576 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms"
Aug 12 23:36:39.408547 kubelet[2294]: I0812 23:36:39.408399 2294 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:36:39.408802 kubelet[2294]: E0812 23:36:39.408533 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2937cb6c2b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:36:39.403744095 +0000 UTC m=+0.876245744,LastTimestamp:2025-08-12 23:36:39.403744095 +0000 UTC m=+0.876245744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:36:39.409389 kubelet[2294]: E0812 23:36:39.409254 2294 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:36:39.422802 kubelet[2294]: I0812 23:36:39.422736 2294 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:36:39.422899 kubelet[2294]: I0812 23:36:39.422824 2294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:36:39.422899 kubelet[2294]: I0812 23:36:39.422705 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:36:39.422899 kubelet[2294]: I0812 23:36:39.422843 2294 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:36:39.424141 kubelet[2294]: I0812 23:36:39.424081 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:36:39.424141 kubelet[2294]: I0812 23:36:39.424120 2294 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 12 23:36:39.424141 kubelet[2294]: I0812 23:36:39.424145 2294 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:36:39.424255 kubelet[2294]: I0812 23:36:39.424152 2294 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 12 23:36:39.424255 kubelet[2294]: E0812 23:36:39.424194 2294 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:36:39.501157 kubelet[2294]: I0812 23:36:39.500974 2294 policy_none.go:49] "None policy: Start"
Aug 12 23:36:39.501157 kubelet[2294]: I0812 23:36:39.501006 2294 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:36:39.501157 kubelet[2294]: I0812 23:36:39.501022 2294 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:36:39.501157 kubelet[2294]: W0812 23:36:39.501056 2294 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:36:39.501157 kubelet[2294]: E0812 23:36:39.501121 2294 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:36:39.506429 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:36:39.507329 kubelet[2294]: E0812 23:36:39.506773 2294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:36:39.520077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 12 23:36:39.522797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 12 23:36:39.524607 kubelet[2294]: E0812 23:36:39.524576 2294 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 12 23:36:39.538100 kubelet[2294]: I0812 23:36:39.538078 2294 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:36:39.538517 kubelet[2294]: I0812 23:36:39.538495 2294 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:36:39.538566 kubelet[2294]: I0812 23:36:39.538513 2294 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:36:39.538995 kubelet[2294]: I0812 23:36:39.538755 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:36:39.539590 kubelet[2294]: E0812 23:36:39.539571 2294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:36:39.539858 kubelet[2294]: E0812 23:36:39.539841 2294 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 12 23:36:39.608443 kubelet[2294]: E0812 23:36:39.608344 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms"
Aug 12 23:36:39.640434 kubelet[2294]: I0812 23:36:39.640399 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:36:39.640916 kubelet[2294]: E0812 23:36:39.640872 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:36:39.733505 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice.
Aug 12 23:36:39.762870 kubelet[2294]: E0812 23:36:39.762826 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:39.765675 systemd[1]: Created slice kubepods-burstable-pod6900c0267f990375b584edbe90ef3041.slice - libcontainer container kubepods-burstable-pod6900c0267f990375b584edbe90ef3041.slice.
Aug 12 23:36:39.776652 kubelet[2294]: E0812 23:36:39.776612 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:39.779251 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice.
Aug 12 23:36:39.781412 kubelet[2294]: E0812 23:36:39.781386 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:39.808672 kubelet[2294]: I0812 23:36:39.808622 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:39.808672 kubelet[2294]: I0812 23:36:39.808668 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:39.808823 kubelet[2294]: I0812 23:36:39.808690 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:39.808823 kubelet[2294]: I0812 23:36:39.808708 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:39.808823 kubelet[2294]: I0812 23:36:39.808723 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:39.808823 kubelet[2294]: I0812 23:36:39.808736 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:39.808823 kubelet[2294]: I0812 23:36:39.808751 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:39.808935 kubelet[2294]: I0812 23:36:39.808769 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:39.808935 kubelet[2294]: I0812 23:36:39.808783 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:39.842732 kubelet[2294]: I0812 23:36:39.842690 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:36:39.843158 kubelet[2294]: E0812 23:36:39.843126 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:36:40.009631 kubelet[2294]: E0812 23:36:40.009579 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms"
Aug 12 23:36:40.064155 kubelet[2294]: E0812 23:36:40.064060 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.064763 containerd[1531]: time="2025-08-12T23:36:40.064721894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}"
Aug 12 23:36:40.078232 kubelet[2294]: E0812 23:36:40.077954 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.078666 containerd[1531]: time="2025-08-12T23:36:40.078612374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6900c0267f990375b584edbe90ef3041,Namespace:kube-system,Attempt:0,}"
Aug 12 23:36:40.081093 containerd[1531]: time="2025-08-12T23:36:40.081064900Z" level=info msg="connecting to shim 34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46" address="unix:///run/containerd/s/78267924fbc896caf3dfdf3b2c0048994700bfbf27ece63d6df31c90e473dfc4" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:36:40.082790 kubelet[2294]: E0812 23:36:40.082741 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.083236 containerd[1531]: time="2025-08-12T23:36:40.083199228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}"
Aug 12 23:36:40.102638 containerd[1531]: time="2025-08-12T23:36:40.102586505Z" level=info msg="connecting to shim 2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330" address="unix:///run/containerd/s/33acc148f9b913999359a04bc283d438ba58f70685daf20c4b4417fccddd5d13" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:36:40.107503 systemd[1]: Started cri-containerd-34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46.scope - libcontainer container 34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46.
Aug 12 23:36:40.112870 containerd[1531]: time="2025-08-12T23:36:40.112817942Z" level=info msg="connecting to shim d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75" address="unix:///run/containerd/s/e7ab5f18f44df08f3d41fcbb49f51977edff68fde0794113e447814e20f306e2" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:36:40.127501 systemd[1]: Started cri-containerd-2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330.scope - libcontainer container 2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330.
Aug 12 23:36:40.132787 systemd[1]: Started cri-containerd-d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75.scope - libcontainer container d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75.
Aug 12 23:36:40.165334 containerd[1531]: time="2025-08-12T23:36:40.164802327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6900c0267f990375b584edbe90ef3041,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330\""
Aug 12 23:36:40.166735 kubelet[2294]: E0812 23:36:40.166593 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.169353 containerd[1531]: time="2025-08-12T23:36:40.169290124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46\""
Aug 12 23:36:40.169881 containerd[1531]: time="2025-08-12T23:36:40.169853087Z" level=info msg="CreateContainer within sandbox \"2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 12 23:36:40.170232 kubelet[2294]: E0812 23:36:40.170197 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.171920 containerd[1531]: time="2025-08-12T23:36:40.171889677Z" level=info msg="CreateContainer within sandbox \"34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 12 23:36:40.179052 containerd[1531]: time="2025-08-12T23:36:40.179022478Z" level=info msg="Container eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:36:40.181195 containerd[1531]: time="2025-08-12T23:36:40.181157165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75\""
Aug 12 23:36:40.181875 kubelet[2294]: E0812 23:36:40.181822 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.182961 containerd[1531]: time="2025-08-12T23:36:40.182914012Z" level=info msg="Container 22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:36:40.183723 containerd[1531]: time="2025-08-12T23:36:40.183684364Z" level=info msg="CreateContainer within sandbox \"d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 12 23:36:40.186512 containerd[1531]: time="2025-08-12T23:36:40.186449013Z" level=info msg="CreateContainer within sandbox \"2fc82f8af2e408b23924ef684ee5a6af091c21e08e91cd95d66b427949cdd330\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013\""
Aug 12 23:36:40.187350 containerd[1531]: time="2025-08-12T23:36:40.187295716Z" level=info msg="StartContainer for \"eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013\""
Aug 12 23:36:40.188453 containerd[1531]: time="2025-08-12T23:36:40.188429078Z" level=info msg="connecting to shim eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013" address="unix:///run/containerd/s/33acc148f9b913999359a04bc283d438ba58f70685daf20c4b4417fccddd5d13" protocol=ttrpc version=3
Aug 12 23:36:40.191891 containerd[1531]: time="2025-08-12T23:36:40.191814853Z" level=info msg="Container 8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:36:40.194714 containerd[1531]: time="2025-08-12T23:36:40.194666886Z" level=info msg="CreateContainer within sandbox \"34a6318958a1e2ed6b277438c64fe93eb74f5c3a058f7e8b5aca7383f8cfcc46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023\""
Aug 12 23:36:40.195387 containerd[1531]: time="2025-08-12T23:36:40.195290171Z" level=info msg="StartContainer for \"22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023\""
Aug 12 23:36:40.196943 containerd[1531]: time="2025-08-12T23:36:40.196906627Z" level=info msg="connecting to shim 22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023" address="unix:///run/containerd/s/78267924fbc896caf3dfdf3b2c0048994700bfbf27ece63d6df31c90e473dfc4" protocol=ttrpc version=3
Aug 12 23:36:40.202633 containerd[1531]: time="2025-08-12T23:36:40.202587188Z" level=info msg="CreateContainer within sandbox \"d0172625e9b227037e3d1b8a892c7fc304c0cd9a167b4938307bb7075875ff75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc\""
Aug 12 23:36:40.203225 containerd[1531]: time="2025-08-12T23:36:40.203199600Z" level=info msg="StartContainer for \"8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc\""
Aug 12 23:36:40.205635 containerd[1531]: time="2025-08-12T23:36:40.205565421Z" level=info msg="connecting to shim 8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc" address="unix:///run/containerd/s/e7ab5f18f44df08f3d41fcbb49f51977edff68fde0794113e447814e20f306e2" protocol=ttrpc version=3
Aug 12 23:36:40.208504 systemd[1]: Started cri-containerd-eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013.scope - libcontainer container eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013.
Aug 12 23:36:40.222499 systemd[1]: Started cri-containerd-22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023.scope - libcontainer container 22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023.
Aug 12 23:36:40.227544 systemd[1]: Started cri-containerd-8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc.scope - libcontainer container 8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc.
Aug 12 23:36:40.245288 kubelet[2294]: I0812 23:36:40.245236 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:36:40.245640 kubelet[2294]: E0812 23:36:40.245583 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:36:40.267505 containerd[1531]: time="2025-08-12T23:36:40.265769918Z" level=info msg="StartContainer for \"eec5ba27037256513485c575e7b27a548cddb01c9fb9747dada5ee861a5c9013\" returns successfully"
Aug 12 23:36:40.287081 containerd[1531]: time="2025-08-12T23:36:40.287033526Z" level=info msg="StartContainer for \"22646f05b0b409e937d222e871be5a41e7d263d5720a2e1ad432bbc564513023\" returns successfully"
Aug 12 23:36:40.295642 containerd[1531]: time="2025-08-12T23:36:40.292386215Z" level=info msg="StartContainer for \"8eb22faa3b4d16d1d7b90af68bbefd718e0b5ad806573a3128a9c0440bdeb1bc\" returns successfully"
Aug 12 23:36:40.435634 kubelet[2294]: E0812 23:36:40.435589 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:40.435907 kubelet[2294]: E0812 23:36:40.435890 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.441965 kubelet[2294]: E0812 23:36:40.441934 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:40.442074 kubelet[2294]: E0812 23:36:40.442054 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.445190 kubelet[2294]: E0812 23:36:40.445165 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:40.445265 kubelet[2294]: E0812 23:36:40.445257 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:40.471595 kubelet[2294]: W0812 23:36:40.471537 2294 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:36:40.471696 kubelet[2294]: E0812 23:36:40.471605 2294 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:36:41.047261 kubelet[2294]: I0812 23:36:41.047230 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:36:41.447053 kubelet[2294]: E0812 23:36:41.446765 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:41.447147 kubelet[2294]: E0812 23:36:41.447090 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:41.447344 kubelet[2294]: E0812 23:36:41.447243 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:41.447719 kubelet[2294]: E0812 23:36:41.447663 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:42.472593 kubelet[2294]: E0812 23:36:42.472443 2294 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:36:42.472593 kubelet[2294]: E0812 23:36:42.472589 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:42.493007 kubelet[2294]: E0812 23:36:42.492973 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 12 23:36:42.569147 kubelet[2294]: I0812 23:36:42.569105 2294 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 12 23:36:42.608431 kubelet[2294]: I0812 23:36:42.608388 2294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:42.616536 kubelet[2294]: E0812 23:36:42.616490 2294 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:42.616536 kubelet[2294]: I0812 23:36:42.616524 2294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:42.618363 kubelet[2294]: E0812 23:36:42.618338 2294 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:42.618363 kubelet[2294]: I0812 23:36:42.618360 2294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:42.619941 kubelet[2294]: E0812 23:36:42.619917 2294 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:43.398981 kubelet[2294]: I0812 23:36:43.398937 2294 apiserver.go:52] "Watching apiserver"
Aug 12 23:36:43.407886 kubelet[2294]: I0812 23:36:43.407840 2294 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 12 23:36:44.509304 systemd[1]: Reload requested from client PID 2573 ('systemctl') (unit session-7.scope)...
Aug 12 23:36:44.509336 systemd[1]: Reloading...
Aug 12 23:36:44.588349 zram_generator::config[2619]: No configuration found.
Aug 12 23:36:44.651744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:36:44.748155 systemd[1]: Reloading finished in 238 ms.
Aug 12 23:36:44.780747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:36:44.795492 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:36:44.795731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:36:44.795790 systemd[1]: kubelet.service: Consumed 1.302s CPU time, 128.4M memory peak.
Aug 12 23:36:44.797457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:36:44.924948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:36:44.928463 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:36:44.969924 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:36:44.969924 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:36:44.969924 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:36:44.970258 kubelet[2658]: I0812 23:36:44.970029 2658 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:36:44.976366 kubelet[2658]: I0812 23:36:44.976287 2658 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 12 23:36:44.976366 kubelet[2658]: I0812 23:36:44.976368 2658 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:36:44.976642 kubelet[2658]: I0812 23:36:44.976613 2658 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 12 23:36:44.977891 kubelet[2658]: I0812 23:36:44.977864 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 12 23:36:44.979949 kubelet[2658]: I0812 23:36:44.979920 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:36:44.983042 kubelet[2658]: I0812 23:36:44.983023 2658 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 12 23:36:44.985622 kubelet[2658]: I0812 23:36:44.985573 2658 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:36:44.985795 kubelet[2658]: I0812 23:36:44.985772 2658 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:36:44.985985 kubelet[2658]: I0812 23:36:44.985797 2658 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:36:44.986068 kubelet[2658]: I0812 23:36:44.985996 2658 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:36:44.986068 kubelet[2658]: I0812 23:36:44.986006 2658 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:36:44.986068 kubelet[2658]: I0812 23:36:44.986052 2658 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:36:44.986182 kubelet[2658]: I0812 23:36:44.986170 2658 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:36:44.986207 kubelet[2658]: I0812 23:36:44.986183 2658 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:36:44.986207 kubelet[2658]: I0812 23:36:44.986203 2658 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:36:44.986364 kubelet[2658]: I0812 23:36:44.986212 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:36:44.990158 kubelet[2658]: I0812 23:36:44.990133 2658 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 12 23:36:44.990753 kubelet[2658]: I0812 23:36:44.990737 2658 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:36:44.991238 kubelet[2658]: I0812 23:36:44.991221 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:36:44.991273 kubelet[2658]: I0812 23:36:44.991254 2658 server.go:1287] "Started kubelet"
Aug 12 23:36:44.992052 kubelet[2658]: I0812 23:36:44.992005 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:36:44.997352 kubelet[2658]: I0812 23:36:44.995947 2658 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:36:44.997352 kubelet[2658]: I0812 23:36:44.995962 2658 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:36:44.997352 kubelet[2658]: I0812 23:36:44.996783 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:36:44.997352 kubelet[2658]: I0812 23:36:44.997151 2658 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:36:45.001207 kubelet[2658]: I0812 23:36:45.001170 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:36:45.002896 kubelet[2658]: E0812 23:36:45.001419 2658 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:36:45.002896 kubelet[2658]: I0812 23:36:45.001459 2658 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:36:45.002896 kubelet[2658]: I0812 23:36:45.001599 2658 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:36:45.002896 kubelet[2658]: I0812 23:36:45.001711 2658 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:36:45.006000 kubelet[2658]: I0812 23:36:45.005960 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:36:45.006781 kubelet[2658]: I0812 23:36:45.006757 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:36:45.006819 kubelet[2658]: I0812 23:36:45.006808 2658 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 12 23:36:45.006845 kubelet[2658]: I0812 23:36:45.006827 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:36:45.006845 kubelet[2658]: I0812 23:36:45.006833 2658 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 12 23:36:45.006908 kubelet[2658]: E0812 23:36:45.006879 2658 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:36:45.006994 kubelet[2658]: I0812 23:36:45.006972 2658 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:36:45.007072 kubelet[2658]: I0812 23:36:45.007053 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:36:45.009200 kubelet[2658]: I0812 23:36:45.009179 2658 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:36:45.013481 kubelet[2658]: E0812 23:36:45.013450 2658 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036676 2658 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036696 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036717 2658 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036868 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036880 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036899 2658 policy_none.go:49] "None policy: Start"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036907 2658 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.036917 2658 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:36:45.037530 kubelet[2658]: I0812 23:36:45.037003 2658 state_mem.go:75] "Updated machine memory state"
Aug 12 23:36:45.041953 kubelet[2658]: I0812 23:36:45.041923 2658 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:36:45.042120 kubelet[2658]: I0812 23:36:45.042098 2658 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:36:45.042159 kubelet[2658]: I0812 23:36:45.042115 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:36:45.042428 kubelet[2658]: I0812 23:36:45.042410 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:36:45.043695 kubelet[2658]: E0812 23:36:45.043670 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:36:45.108621 kubelet[2658]: I0812 23:36:45.108502 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.108621 kubelet[2658]: I0812 23:36:45.108607 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:45.108783 kubelet[2658]: I0812 23:36:45.108704 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:45.146414 kubelet[2658]: I0812 23:36:45.146353 2658 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:36:45.152490 kubelet[2658]: I0812 23:36:45.152454 2658 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 12 23:36:45.152603 kubelet[2658]: I0812 23:36:45.152533 2658 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 12 23:36:45.303943 kubelet[2658]: I0812 23:36:45.303815 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.303943 kubelet[2658]: I0812 23:36:45.303862 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.303943 kubelet[2658]: I0812 23:36:45.303882 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:45.303943 kubelet[2658]: I0812 23:36:45.303912 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.303943 kubelet[2658]: I0812 23:36:45.303929 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.304272 kubelet[2658]: I0812 23:36:45.303949 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:45.304272 kubelet[2658]: I0812 23:36:45.303964 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:45.304272 kubelet[2658]: I0812 23:36:45.303979 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:45.304272 kubelet[2658]: I0812 23:36:45.303993 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6900c0267f990375b584edbe90ef3041-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6900c0267f990375b584edbe90ef3041\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:45.413971 kubelet[2658]: E0812 23:36:45.413930 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:45.414383 kubelet[2658]: E0812 23:36:45.414344 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:45.416162 kubelet[2658]: E0812 23:36:45.416139 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:45.986558 kubelet[2658]: I0812 23:36:45.986525 2658 apiserver.go:52] "Watching apiserver"
Aug 12 23:36:46.001791 kubelet[2658]: I0812 23:36:46.001740 2658 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 12 23:36:46.025122 kubelet[2658]: I0812 23:36:46.024888 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:46.026394 kubelet[2658]: I0812 23:36:46.025253 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:46.026571 kubelet[2658]: I0812 23:36:46.026550 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:46.032719 kubelet[2658]: E0812 23:36:46.032391 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:36:46.033630 kubelet[2658]: E0812 23:36:46.033284 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:46.034078 kubelet[2658]: E0812 23:36:46.033390 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:36:46.034566 kubelet[2658]: E0812 23:36:46.033399 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:36:46.034566 kubelet[2658]: E0812 23:36:46.034441 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:46.034951 kubelet[2658]: E0812 23:36:46.034852 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:46.057241 kubelet[2658]: I0812 23:36:46.057106 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.057082479 podStartE2EDuration="1.057082479s" podCreationTimestamp="2025-08-12 23:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:36:46.049897325 +0000 UTC m=+1.116930377" watchObservedRunningTime="2025-08-12 23:36:46.057082479 +0000 UTC m=+1.124115531"
Aug 12 23:36:46.066681 kubelet[2658]: I0812 23:36:46.066604 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.06658737 podStartE2EDuration="1.06658737s" podCreationTimestamp="2025-08-12 23:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:36:46.057326506 +0000 UTC m=+1.124359558" watchObservedRunningTime="2025-08-12 23:36:46.06658737 +0000 UTC m=+1.133620422"
Aug 12 23:36:46.066991 kubelet[2658]: I0812 23:36:46.066877 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.066868961 podStartE2EDuration="1.066868961s" podCreationTimestamp="2025-08-12 23:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:36:46.066521083 +0000 UTC m=+1.133554135" watchObservedRunningTime="2025-08-12 23:36:46.066868961 +0000 UTC m=+1.133902013"
Aug 12 23:36:47.027229 kubelet[2658]: E0812 23:36:47.026854 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:47.027229 kubelet[2658]: E0812 23:36:47.026967 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:47.027686 kubelet[2658]: E0812 23:36:47.027394 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:48.028465 kubelet[2658]: E0812 23:36:48.028428 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:50.889011 kubelet[2658]: I0812 23:36:50.888958 2658 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 12 23:36:50.889408 containerd[1531]: time="2025-08-12T23:36:50.889262298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 12 23:36:50.889695 kubelet[2658]: I0812 23:36:50.889557 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 12 23:36:51.636275 systemd[1]: Created slice kubepods-besteffort-pod6d2893dc_a6e3_4b83_9c1d_0356cfbbe16b.slice - libcontainer container kubepods-besteffort-pod6d2893dc_a6e3_4b83_9c1d_0356cfbbe16b.slice.
Aug 12 23:36:51.643685 kubelet[2658]: I0812 23:36:51.642425 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b-xtables-lock\") pod \"kube-proxy-vlx7x\" (UID: \"6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b\") " pod="kube-system/kube-proxy-vlx7x"
Aug 12 23:36:51.643685 kubelet[2658]: I0812 23:36:51.642461 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b-lib-modules\") pod \"kube-proxy-vlx7x\" (UID: \"6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b\") " pod="kube-system/kube-proxy-vlx7x"
Aug 12 23:36:51.643685 kubelet[2658]: I0812 23:36:51.642479 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b-kube-proxy\") pod \"kube-proxy-vlx7x\" (UID: \"6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b\") " pod="kube-system/kube-proxy-vlx7x"
Aug 12 23:36:51.643685 kubelet[2658]: I0812 23:36:51.642500 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7wb\" (UniqueName: \"kubernetes.io/projected/6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b-kube-api-access-jg7wb\") pod \"kube-proxy-vlx7x\" (UID: \"6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b\") " pod="kube-system/kube-proxy-vlx7x"
Aug 12 23:36:51.950513 kubelet[2658]: E0812 23:36:51.950403 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:51.950999 containerd[1531]: time="2025-08-12T23:36:51.950962542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlx7x,Uid:6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b,Namespace:kube-system,Attempt:0,}"
Aug 12 23:36:51.967864 containerd[1531]: time="2025-08-12T23:36:51.967825764Z" level=info msg="connecting to shim 112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f" address="unix:///run/containerd/s/bf8bfee9cb1b1b15ded6537f98c82230e5d1d407c6205390eb1ec167709c8816" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:36:51.998474 systemd[1]: Started cri-containerd-112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f.scope - libcontainer container 112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f.
Aug 12 23:36:52.030920 containerd[1531]: time="2025-08-12T23:36:52.030452320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlx7x,Uid:6d2893dc-a6e3-4b83-9c1d-0356cfbbe16b,Namespace:kube-system,Attempt:0,} returns sandbox id \"112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f\""
Aug 12 23:36:52.032481 kubelet[2658]: E0812 23:36:52.032450 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:52.037072 containerd[1531]: time="2025-08-12T23:36:52.037026206Z" level=info msg="CreateContainer within sandbox \"112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 12 23:36:52.054493 containerd[1531]: time="2025-08-12T23:36:52.054421518Z" level=info msg="Container 845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:36:52.057570 systemd[1]: Created slice kubepods-besteffort-pod9ef3851d_4ac5_43d8_9bb0_16df1a350a90.slice - libcontainer container kubepods-besteffort-pod9ef3851d_4ac5_43d8_9bb0_16df1a350a90.slice.
Aug 12 23:36:52.070044 containerd[1531]: time="2025-08-12T23:36:52.069959201Z" level=info msg="CreateContainer within sandbox \"112051408473db87c484466ad3b47006b2dc5dd93efa64adf8f882f02bd4cc3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43\""
Aug 12 23:36:52.070760 containerd[1531]: time="2025-08-12T23:36:52.070727422Z" level=info msg="StartContainer for \"845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43\""
Aug 12 23:36:52.072131 containerd[1531]: time="2025-08-12T23:36:52.072085571Z" level=info msg="connecting to shim 845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43" address="unix:///run/containerd/s/bf8bfee9cb1b1b15ded6537f98c82230e5d1d407c6205390eb1ec167709c8816" protocol=ttrpc version=3
Aug 12 23:36:52.096504 systemd[1]: Started cri-containerd-845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43.scope - libcontainer container 845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43.
Aug 12 23:36:52.134276 containerd[1531]: time="2025-08-12T23:36:52.134240945Z" level=info msg="StartContainer for \"845b77f702d4b1cb60f646eeb10777e6b21e4039b356f7f4243bdd82a3f20a43\" returns successfully"
Aug 12 23:36:52.145111 kubelet[2658]: I0812 23:36:52.145058 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtnrc\" (UniqueName: \"kubernetes.io/projected/9ef3851d-4ac5-43d8-9bb0-16df1a350a90-kube-api-access-dtnrc\") pod \"tigera-operator-747864d56d-mj6sq\" (UID: \"9ef3851d-4ac5-43d8-9bb0-16df1a350a90\") " pod="tigera-operator/tigera-operator-747864d56d-mj6sq"
Aug 12 23:36:52.145111 kubelet[2658]: I0812 23:36:52.145101 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ef3851d-4ac5-43d8-9bb0-16df1a350a90-var-lib-calico\") pod \"tigera-operator-747864d56d-mj6sq\" (UID: \"9ef3851d-4ac5-43d8-9bb0-16df1a350a90\") " pod="tigera-operator/tigera-operator-747864d56d-mj6sq"
Aug 12 23:36:52.361706 containerd[1531]: time="2025-08-12T23:36:52.361635020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mj6sq,Uid:9ef3851d-4ac5-43d8-9bb0-16df1a350a90,Namespace:tigera-operator,Attempt:0,}"
Aug 12 23:36:52.376330 containerd[1531]: time="2025-08-12T23:36:52.376244709Z" level=info msg="connecting to shim c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435" address="unix:///run/containerd/s/a1dc955e938ffb4ca2b13dfbf21b84d176cd88c15f26cd94c80aeddc553ea386" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:36:52.403523 systemd[1]: Started cri-containerd-c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435.scope - libcontainer container c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435.
Aug 12 23:36:52.434436 containerd[1531]: time="2025-08-12T23:36:52.434398603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mj6sq,Uid:9ef3851d-4ac5-43d8-9bb0-16df1a350a90,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435\""
Aug 12 23:36:52.439152 containerd[1531]: time="2025-08-12T23:36:52.439121261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 12 23:36:53.041972 kubelet[2658]: E0812 23:36:53.040990 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:53.568798 kubelet[2658]: E0812 23:36:53.568532 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:53.585672 kubelet[2658]: I0812 23:36:53.585606 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vlx7x" podStartSLOduration=2.585586517 podStartE2EDuration="2.585586517s" podCreationTimestamp="2025-08-12 23:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:36:53.051271319 +0000 UTC m=+8.118304371" watchObservedRunningTime="2025-08-12 23:36:53.585586517 +0000 UTC m=+8.652619529"
Aug 12 23:36:53.743966 kubelet[2658]: E0812 23:36:53.743928 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:54.045377 kubelet[2658]: E0812 23:36:54.045347 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:54.047033 kubelet[2658]: E0812 23:36:54.045805 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:54.047033 kubelet[2658]: E0812 23:36:54.045975 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:36:54.327034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118933490.mount: Deactivated successfully.
Aug 12 23:36:54.821247 containerd[1531]: time="2025-08-12T23:36:54.821194771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:36:54.821672 containerd[1531]: time="2025-08-12T23:36:54.821600480Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Aug 12 23:36:54.822551 containerd[1531]: time="2025-08-12T23:36:54.822517466Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:36:54.824325 containerd[1531]: time="2025-08-12T23:36:54.824283514Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:36:54.825158 containerd[1531]: time="2025-08-12T23:36:54.824870236Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.385656168s"
Aug 12 23:36:54.825158 containerd[1531]: time="2025-08-12T23:36:54.824902158Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Aug 12 23:36:54.828387 containerd[1531]: time="2025-08-12T23:36:54.828357688Z" level=info msg="CreateContainer within sandbox \"c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 12 23:36:54.834605 containerd[1531]: time="2025-08-12T23:36:54.834571577Z" level=info msg="Container 7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:36:54.840502 containerd[1531]: time="2025-08-12T23:36:54.840465962Z" level=info msg="CreateContainer within sandbox \"c765f3c1c4b0d982f1d1a846f54ae4f103a3616705a004a4e2847256e7db9435\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8\""
Aug 12 23:36:54.840984 containerd[1531]: time="2025-08-12T23:36:54.840951917Z" level=info msg="StartContainer for \"7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8\""
Aug 12 23:36:54.841966 containerd[1531]: time="2025-08-12T23:36:54.841917787Z" level=info msg="connecting to shim 7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8" address="unix:///run/containerd/s/a1dc955e938ffb4ca2b13dfbf21b84d176cd88c15f26cd94c80aeddc553ea386" protocol=ttrpc version=3
Aug 12 23:36:54.866511 systemd[1]: Started cri-containerd-7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8.scope - libcontainer container 7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8.
Aug 12 23:36:54.897903 containerd[1531]: time="2025-08-12T23:36:54.897857865Z" level=info msg="StartContainer for \"7cbfeb81a3163a472942bf9a088db7b031b4352db3cd715a0e203083628cbae8\" returns successfully" Aug 12 23:36:55.058091 kubelet[2658]: I0812 23:36:55.058028 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-mj6sq" podStartSLOduration=0.667313425 podStartE2EDuration="3.058011223s" podCreationTimestamp="2025-08-12 23:36:52 +0000 UTC" firstStartedPulling="2025-08-12 23:36:52.435964888 +0000 UTC m=+7.502997900" lastFinishedPulling="2025-08-12 23:36:54.826662646 +0000 UTC m=+9.893695698" observedRunningTime="2025-08-12 23:36:55.057552391 +0000 UTC m=+10.124585443" watchObservedRunningTime="2025-08-12 23:36:55.058011223 +0000 UTC m=+10.125044315" Aug 12 23:36:55.499029 update_engine[1517]: I20250812 23:36:55.498962 1517 update_attempter.cc:509] Updating boot flags... Aug 12 23:36:56.297675 kubelet[2658]: E0812 23:36:56.297630 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:36:57.055483 kubelet[2658]: E0812 23:36:57.055408 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:00.458138 sudo[1733]: pam_unix(sudo:session): session closed for user root Aug 12 23:37:00.464631 sshd[1732]: Connection closed by 10.0.0.1 port 59360 Aug 12 23:37:00.465292 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Aug 12 23:37:00.471111 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:37:00.471300 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:59360.service: Deactivated successfully. Aug 12 23:37:00.475458 systemd[1]: session-7.scope: Deactivated successfully. 
Aug 12 23:37:00.475683 systemd[1]: session-7.scope: Consumed 9.458s CPU time, 216.8M memory peak. Aug 12 23:37:00.478540 systemd-logind[1515]: Removed session 7. Aug 12 23:37:05.833348 kubelet[2658]: I0812 23:37:05.832245 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/49a97758-1899-4fa9-b2e9-9cd032ddc1a2-typha-certs\") pod \"calico-typha-59ff488cfd-z9t4k\" (UID: \"49a97758-1899-4fa9-b2e9-9cd032ddc1a2\") " pod="calico-system/calico-typha-59ff488cfd-z9t4k" Aug 12 23:37:05.833348 kubelet[2658]: I0812 23:37:05.832288 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56b9\" (UniqueName: \"kubernetes.io/projected/49a97758-1899-4fa9-b2e9-9cd032ddc1a2-kube-api-access-z56b9\") pod \"calico-typha-59ff488cfd-z9t4k\" (UID: \"49a97758-1899-4fa9-b2e9-9cd032ddc1a2\") " pod="calico-system/calico-typha-59ff488cfd-z9t4k" Aug 12 23:37:05.833348 kubelet[2658]: I0812 23:37:05.832492 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49a97758-1899-4fa9-b2e9-9cd032ddc1a2-tigera-ca-bundle\") pod \"calico-typha-59ff488cfd-z9t4k\" (UID: \"49a97758-1899-4fa9-b2e9-9cd032ddc1a2\") " pod="calico-system/calico-typha-59ff488cfd-z9t4k" Aug 12 23:37:05.835278 systemd[1]: Created slice kubepods-besteffort-pod49a97758_1899_4fa9_b2e9_9cd032ddc1a2.slice - libcontainer container kubepods-besteffort-pod49a97758_1899_4fa9_b2e9_9cd032ddc1a2.slice. 
Aug 12 23:37:06.145656 kubelet[2658]: E0812 23:37:06.145393 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:06.146278 containerd[1531]: time="2025-08-12T23:37:06.146007964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59ff488cfd-z9t4k,Uid:49a97758-1899-4fa9-b2e9-9cd032ddc1a2,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:06.183133 containerd[1531]: time="2025-08-12T23:37:06.183063865Z" level=info msg="connecting to shim 2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066" address="unix:///run/containerd/s/0287e4debc92dc00cd19be93ed9763416aa82aa0b7db6a8a7b43529cc38aff4a" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:06.199187 systemd[1]: Created slice kubepods-besteffort-pod6e7efc29_6f50_4cdc_85e4_35587c8461e5.slice - libcontainer container kubepods-besteffort-pod6e7efc29_6f50_4cdc_85e4_35587c8461e5.slice. 
Aug 12 23:37:06.235632 kubelet[2658]: I0812 23:37:06.235596 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-cni-net-dir\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235632 kubelet[2658]: I0812 23:37:06.235634 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7efc29-6f50-4cdc-85e4-35587c8461e5-tigera-ca-bundle\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235775 kubelet[2658]: I0812 23:37:06.235656 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-var-run-calico\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235775 kubelet[2658]: I0812 23:37:06.235670 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-xtables-lock\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235775 kubelet[2658]: I0812 23:37:06.235685 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-lib-modules\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235775 kubelet[2658]: I0812 23:37:06.235701 2658 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e7efc29-6f50-4cdc-85e4-35587c8461e5-node-certs\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235775 kubelet[2658]: I0812 23:37:06.235717 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvjw\" (UniqueName: \"kubernetes.io/projected/6e7efc29-6f50-4cdc-85e4-35587c8461e5-kube-api-access-kbvjw\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235886 kubelet[2658]: I0812 23:37:06.235736 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-flexvol-driver-host\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235886 kubelet[2658]: I0812 23:37:06.235751 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-policysync\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235886 kubelet[2658]: I0812 23:37:06.235765 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-var-lib-calico\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235886 kubelet[2658]: I0812 23:37:06.235783 2658 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-cni-bin-dir\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.235886 kubelet[2658]: I0812 23:37:06.235797 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e7efc29-6f50-4cdc-85e4-35587c8461e5-cni-log-dir\") pod \"calico-node-bj2r9\" (UID: \"6e7efc29-6f50-4cdc-85e4-35587c8461e5\") " pod="calico-system/calico-node-bj2r9" Aug 12 23:37:06.253517 systemd[1]: Started cri-containerd-2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066.scope - libcontainer container 2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066. Aug 12 23:37:06.289502 containerd[1531]: time="2025-08-12T23:37:06.289461209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59ff488cfd-z9t4k,Uid:49a97758-1899-4fa9-b2e9-9cd032ddc1a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066\"" Aug 12 23:37:06.290248 kubelet[2658]: E0812 23:37:06.290223 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:06.294425 containerd[1531]: time="2025-08-12T23:37:06.294077681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 12 23:37:06.343898 kubelet[2658]: E0812 23:37:06.343844 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.343898 kubelet[2658]: W0812 23:37:06.343875 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.343898 kubelet[2658]: E0812 23:37:06.343896 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.353923 kubelet[2658]: E0812 23:37:06.353883 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.353923 kubelet[2658]: W0812 23:37:06.353914 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.354037 kubelet[2658]: E0812 23:37:06.353934 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.430214 kubelet[2658]: E0812 23:37:06.430096 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:06.436158 kubelet[2658]: E0812 23:37:06.436131 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436271 kubelet[2658]: W0812 23:37:06.436169 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436271 kubelet[2658]: E0812 23:37:06.436188 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.436368 kubelet[2658]: E0812 23:37:06.436357 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436403 kubelet[2658]: W0812 23:37:06.436368 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436423 kubelet[2658]: E0812 23:37:06.436406 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.436558 kubelet[2658]: E0812 23:37:06.436548 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436558 kubelet[2658]: W0812 23:37:06.436557 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436627 kubelet[2658]: E0812 23:37:06.436565 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.436692 kubelet[2658]: E0812 23:37:06.436683 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436692 kubelet[2658]: W0812 23:37:06.436691 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436745 kubelet[2658]: E0812 23:37:06.436698 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.436832 kubelet[2658]: E0812 23:37:06.436822 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436870 kubelet[2658]: W0812 23:37:06.436832 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436870 kubelet[2658]: E0812 23:37:06.436839 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.436977 kubelet[2658]: E0812 23:37:06.436944 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.436977 kubelet[2658]: W0812 23:37:06.436953 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.436977 kubelet[2658]: E0812 23:37:06.436960 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.437074 kubelet[2658]: E0812 23:37:06.437062 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437074 kubelet[2658]: W0812 23:37:06.437073 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437120 kubelet[2658]: E0812 23:37:06.437080 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.437213 kubelet[2658]: E0812 23:37:06.437187 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437213 kubelet[2658]: W0812 23:37:06.437197 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437213 kubelet[2658]: E0812 23:37:06.437205 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.437370 kubelet[2658]: E0812 23:37:06.437358 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437370 kubelet[2658]: W0812 23:37:06.437369 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437428 kubelet[2658]: E0812 23:37:06.437377 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.437515 kubelet[2658]: E0812 23:37:06.437503 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437515 kubelet[2658]: W0812 23:37:06.437512 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437557 kubelet[2658]: E0812 23:37:06.437520 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.437646 kubelet[2658]: E0812 23:37:06.437636 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437667 kubelet[2658]: W0812 23:37:06.437646 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437667 kubelet[2658]: E0812 23:37:06.437653 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.437772 kubelet[2658]: E0812 23:37:06.437762 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437809 kubelet[2658]: W0812 23:37:06.437771 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437809 kubelet[2658]: E0812 23:37:06.437778 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.437917 kubelet[2658]: E0812 23:37:06.437904 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.437917 kubelet[2658]: W0812 23:37:06.437914 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.437955 kubelet[2658]: E0812 23:37:06.437922 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.438036 kubelet[2658]: E0812 23:37:06.438027 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438056 kubelet[2658]: W0812 23:37:06.438036 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438056 kubelet[2658]: E0812 23:37:06.438043 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.438223 kubelet[2658]: E0812 23:37:06.438211 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438223 kubelet[2658]: W0812 23:37:06.438221 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438277 kubelet[2658]: E0812 23:37:06.438229 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.438390 kubelet[2658]: E0812 23:37:06.438378 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438390 kubelet[2658]: W0812 23:37:06.438388 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438450 kubelet[2658]: E0812 23:37:06.438398 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.438550 kubelet[2658]: E0812 23:37:06.438537 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438550 kubelet[2658]: W0812 23:37:06.438548 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438597 kubelet[2658]: E0812 23:37:06.438555 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.438676 kubelet[2658]: E0812 23:37:06.438665 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438676 kubelet[2658]: W0812 23:37:06.438674 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438720 kubelet[2658]: E0812 23:37:06.438680 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.438795 kubelet[2658]: E0812 23:37:06.438786 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438815 kubelet[2658]: W0812 23:37:06.438795 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438815 kubelet[2658]: E0812 23:37:06.438802 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.438917 kubelet[2658]: E0812 23:37:06.438907 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.438952 kubelet[2658]: W0812 23:37:06.438917 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.438952 kubelet[2658]: E0812 23:37:06.438925 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.439139 kubelet[2658]: E0812 23:37:06.439128 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.439196 kubelet[2658]: W0812 23:37:06.439183 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.439225 kubelet[2658]: E0812 23:37:06.439199 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.439252 kubelet[2658]: I0812 23:37:06.439223 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f6acd37-c53d-49cd-8abd-4c20e696ec5d-registration-dir\") pod \"csi-node-driver-9m5bp\" (UID: \"6f6acd37-c53d-49cd-8abd-4c20e696ec5d\") " pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:06.439402 kubelet[2658]: E0812 23:37:06.439387 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.439402 kubelet[2658]: W0812 23:37:06.439400 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.439464 kubelet[2658]: E0812 23:37:06.439414 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.439464 kubelet[2658]: I0812 23:37:06.439428 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f6acd37-c53d-49cd-8abd-4c20e696ec5d-kubelet-dir\") pod \"csi-node-driver-9m5bp\" (UID: \"6f6acd37-c53d-49cd-8abd-4c20e696ec5d\") " pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:06.439575 kubelet[2658]: E0812 23:37:06.439562 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.439575 kubelet[2658]: W0812 23:37:06.439574 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.439625 kubelet[2658]: E0812 23:37:06.439582 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.439625 kubelet[2658]: I0812 23:37:06.439596 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f6acd37-c53d-49cd-8abd-4c20e696ec5d-socket-dir\") pod \"csi-node-driver-9m5bp\" (UID: \"6f6acd37-c53d-49cd-8abd-4c20e696ec5d\") " pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:06.439726 kubelet[2658]: E0812 23:37:06.439713 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.439726 kubelet[2658]: W0812 23:37:06.439724 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.439726 kubelet[2658]: E0812 23:37:06.439733 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.439816 kubelet[2658]: I0812 23:37:06.439747 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f6acd37-c53d-49cd-8abd-4c20e696ec5d-varrun\") pod \"csi-node-driver-9m5bp\" (UID: \"6f6acd37-c53d-49cd-8abd-4c20e696ec5d\") " pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:06.439897 kubelet[2658]: E0812 23:37:06.439884 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.439925 kubelet[2658]: W0812 23:37:06.439899 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.439925 kubelet[2658]: E0812 23:37:06.439913 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.439966 kubelet[2658]: I0812 23:37:06.439928 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvh7c\" (UniqueName: \"kubernetes.io/projected/6f6acd37-c53d-49cd-8abd-4c20e696ec5d-kube-api-access-bvh7c\") pod \"csi-node-driver-9m5bp\" (UID: \"6f6acd37-c53d-49cd-8abd-4c20e696ec5d\") " pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:06.440091 kubelet[2658]: E0812 23:37:06.440079 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440091 kubelet[2658]: W0812 23:37:06.440089 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440134 kubelet[2658]: E0812 23:37:06.440102 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.440224 kubelet[2658]: E0812 23:37:06.440213 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440224 kubelet[2658]: W0812 23:37:06.440222 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440264 kubelet[2658]: E0812 23:37:06.440230 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.440385 kubelet[2658]: E0812 23:37:06.440374 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440385 kubelet[2658]: W0812 23:37:06.440385 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440440 kubelet[2658]: E0812 23:37:06.440398 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.440535 kubelet[2658]: E0812 23:37:06.440526 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440559 kubelet[2658]: W0812 23:37:06.440535 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440559 kubelet[2658]: E0812 23:37:06.440547 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.440718 kubelet[2658]: E0812 23:37:06.440687 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440718 kubelet[2658]: W0812 23:37:06.440698 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440718 kubelet[2658]: E0812 23:37:06.440712 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.440838 kubelet[2658]: E0812 23:37:06.440826 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440838 kubelet[2658]: W0812 23:37:06.440835 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440955 kubelet[2658]: E0812 23:37:06.440871 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.440955 kubelet[2658]: E0812 23:37:06.440945 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.440955 kubelet[2658]: W0812 23:37:06.440951 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.440955 kubelet[2658]: E0812 23:37:06.440979 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.441098 kubelet[2658]: E0812 23:37:06.441055 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.441098 kubelet[2658]: W0812 23:37:06.441061 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.441098 kubelet[2658]: E0812 23:37:06.441073 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.441406 kubelet[2658]: E0812 23:37:06.441354 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.441406 kubelet[2658]: W0812 23:37:06.441370 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.441406 kubelet[2658]: E0812 23:37:06.441380 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.441887 kubelet[2658]: E0812 23:37:06.441715 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.441887 kubelet[2658]: W0812 23:37:06.441726 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.441887 kubelet[2658]: E0812 23:37:06.441737 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.504026 containerd[1531]: time="2025-08-12T23:37:06.503401785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bj2r9,Uid:6e7efc29-6f50-4cdc-85e4-35587c8461e5,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:06.541801 kubelet[2658]: E0812 23:37:06.541075 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.541801 kubelet[2658]: W0812 23:37:06.541758 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.541801 kubelet[2658]: E0812 23:37:06.541786 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.542414 kubelet[2658]: E0812 23:37:06.542387 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.542414 kubelet[2658]: W0812 23:37:06.542404 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.542499 kubelet[2658]: E0812 23:37:06.542422 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.543658 kubelet[2658]: E0812 23:37:06.543412 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.543658 kubelet[2658]: W0812 23:37:06.543429 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.543658 kubelet[2658]: E0812 23:37:06.543452 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.543826 kubelet[2658]: E0812 23:37:06.543670 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.543826 kubelet[2658]: W0812 23:37:06.543680 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.543826 kubelet[2658]: E0812 23:37:06.543759 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.544007 kubelet[2658]: E0812 23:37:06.543873 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.544007 kubelet[2658]: W0812 23:37:06.543881 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.544007 kubelet[2658]: E0812 23:37:06.543944 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.544133 kubelet[2658]: E0812 23:37:06.544049 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.544133 kubelet[2658]: W0812 23:37:06.544059 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.544133 kubelet[2658]: E0812 23:37:06.544091 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.544731 kubelet[2658]: E0812 23:37:06.544706 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.544731 kubelet[2658]: W0812 23:37:06.544723 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.544731 kubelet[2658]: E0812 23:37:06.544739 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.545065 kubelet[2658]: E0812 23:37:06.545048 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.545065 kubelet[2658]: W0812 23:37:06.545063 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.545145 kubelet[2658]: E0812 23:37:06.545078 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.546193 kubelet[2658]: E0812 23:37:06.546167 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.546333 kubelet[2658]: W0812 23:37:06.546281 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.546607 kubelet[2658]: E0812 23:37:06.546437 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.546767 kubelet[2658]: E0812 23:37:06.546613 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.546767 kubelet[2658]: W0812 23:37:06.546625 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.547218 kubelet[2658]: E0812 23:37:06.546704 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.547569 kubelet[2658]: E0812 23:37:06.546870 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.547569 kubelet[2658]: W0812 23:37:06.547351 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.547569 kubelet[2658]: E0812 23:37:06.547465 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.547569 kubelet[2658]: E0812 23:37:06.547568 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.547843 kubelet[2658]: W0812 23:37:06.547579 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.547843 kubelet[2658]: E0812 23:37:06.547609 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.548198 kubelet[2658]: E0812 23:37:06.548097 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548198 kubelet[2658]: W0812 23:37:06.548111 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.548198 kubelet[2658]: E0812 23:37:06.548166 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.548300 kubelet[2658]: E0812 23:37:06.548252 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548300 kubelet[2658]: W0812 23:37:06.548260 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.548300 kubelet[2658]: E0812 23:37:06.548270 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.548489 kubelet[2658]: E0812 23:37:06.548448 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548489 kubelet[2658]: W0812 23:37:06.548459 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.548489 kubelet[2658]: E0812 23:37:06.548470 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.548669 kubelet[2658]: E0812 23:37:06.548632 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548669 kubelet[2658]: W0812 23:37:06.548643 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.548669 kubelet[2658]: E0812 23:37:06.548655 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.548874 kubelet[2658]: E0812 23:37:06.548812 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548874 kubelet[2658]: W0812 23:37:06.548821 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.548874 kubelet[2658]: E0812 23:37:06.548833 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.548997 kubelet[2658]: E0812 23:37:06.548985 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.548997 kubelet[2658]: W0812 23:37:06.548996 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.549109 kubelet[2658]: E0812 23:37:06.549064 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.549256 kubelet[2658]: E0812 23:37:06.549164 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.549256 kubelet[2658]: W0812 23:37:06.549175 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.549256 kubelet[2658]: E0812 23:37:06.549195 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.550060 kubelet[2658]: E0812 23:37:06.550041 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.550060 kubelet[2658]: W0812 23:37:06.550057 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.550217 kubelet[2658]: E0812 23:37:06.550138 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.550457 kubelet[2658]: E0812 23:37:06.550387 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.550457 kubelet[2658]: W0812 23:37:06.550404 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.550746 kubelet[2658]: E0812 23:37:06.550445 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.550746 kubelet[2658]: E0812 23:37:06.550698 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.550746 kubelet[2658]: W0812 23:37:06.550710 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.551200 kubelet[2658]: E0812 23:37:06.551142 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.551200 kubelet[2658]: W0812 23:37:06.551159 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.551200 kubelet[2658]: E0812 23:37:06.551172 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.551354 kubelet[2658]: E0812 23:37:06.551292 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.551540 kubelet[2658]: E0812 23:37:06.551526 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.551571 kubelet[2658]: W0812 23:37:06.551540 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.551571 kubelet[2658]: E0812 23:37:06.551551 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.551719 kubelet[2658]: E0812 23:37:06.551707 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.551719 kubelet[2658]: W0812 23:37:06.551719 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.551777 kubelet[2658]: E0812 23:37:06.551729 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:06.555782 containerd[1531]: time="2025-08-12T23:37:06.555515312Z" level=info msg="connecting to shim 7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84" address="unix:///run/containerd/s/99c642760023e34b0ea7ad953f1eaae58cd2017d66efb75fa57318ce194b2200" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:06.561967 kubelet[2658]: E0812 23:37:06.561931 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:06.561967 kubelet[2658]: W0812 23:37:06.561950 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:06.561967 kubelet[2658]: E0812 23:37:06.561968 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:06.581569 systemd[1]: Started cri-containerd-7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84.scope - libcontainer container 7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84. Aug 12 23:37:06.603395 containerd[1531]: time="2025-08-12T23:37:06.603352901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bj2r9,Uid:6e7efc29-6f50-4cdc-85e4-35587c8461e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\"" Aug 12 23:37:07.402112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358196083.mount: Deactivated successfully. 
Aug 12 23:37:08.007752 kubelet[2658]: E0812 23:37:08.007710 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:08.812453 containerd[1531]: time="2025-08-12T23:37:08.812400709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:08.813266 containerd[1531]: time="2025-08-12T23:37:08.812795564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 12 23:37:08.814485 containerd[1531]: time="2025-08-12T23:37:08.814450507Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:08.816058 containerd[1531]: time="2025-08-12T23:37:08.816021608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:08.817279 containerd[1531]: time="2025-08-12T23:37:08.817229614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.522796359s" Aug 12 23:37:08.817279 containerd[1531]: time="2025-08-12T23:37:08.817264415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 12 23:37:08.819222 containerd[1531]: time="2025-08-12T23:37:08.819183929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:37:08.833057 containerd[1531]: time="2025-08-12T23:37:08.833013420Z" level=info msg="CreateContainer within sandbox \"2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 12 23:37:08.841345 containerd[1531]: time="2025-08-12T23:37:08.841184974Z" level=info msg="Container 21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:08.849041 containerd[1531]: time="2025-08-12T23:37:08.848991794Z" level=info msg="CreateContainer within sandbox \"2dea341bf7bf9b77a8824548cbd455c4e9c3dfae7263a9cc48fe9d6891bab066\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238\"" Aug 12 23:37:08.849936 containerd[1531]: time="2025-08-12T23:37:08.849896949Z" level=info msg="StartContainer for \"21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238\"" Aug 12 23:37:08.851189 containerd[1531]: time="2025-08-12T23:37:08.851155637Z" level=info msg="connecting to shim 21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238" address="unix:///run/containerd/s/0287e4debc92dc00cd19be93ed9763416aa82aa0b7db6a8a7b43529cc38aff4a" protocol=ttrpc version=3 Aug 12 23:37:08.876545 systemd[1]: Started cri-containerd-21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238.scope - libcontainer container 21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238. 
Aug 12 23:37:08.917059 containerd[1531]: time="2025-08-12T23:37:08.916970285Z" level=info msg="StartContainer for \"21aa8182d0b80d68d894f8de1a615251aea60aeaff81e06086d4743c4e627238\" returns successfully" Aug 12 23:37:09.103246 kubelet[2658]: E0812 23:37:09.102863 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:09.118928 kubelet[2658]: I0812 23:37:09.118863 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59ff488cfd-z9t4k" podStartSLOduration=1.592123392 podStartE2EDuration="4.118844351s" podCreationTimestamp="2025-08-12 23:37:05 +0000 UTC" firstStartedPulling="2025-08-12 23:37:06.291269444 +0000 UTC m=+21.358302496" lastFinishedPulling="2025-08-12 23:37:08.817990403 +0000 UTC m=+23.885023455" observedRunningTime="2025-08-12 23:37:09.117638266 +0000 UTC m=+24.184671318" watchObservedRunningTime="2025-08-12 23:37:09.118844351 +0000 UTC m=+24.185877403" Aug 12 23:37:09.153897 kubelet[2658]: E0812 23:37:09.153856 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:09.153897 kubelet[2658]: W0812 23:37:09.153886 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:09.153897 kubelet[2658]: E0812 23:37:09.153909 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:09.154280 kubelet[2658]: E0812 23:37:09.154115 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:09.154280 kubelet[2658]: W0812 23:37:09.154130 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:09.154280 kubelet[2658]: E0812 23:37:09.154176 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:37:09.155221 kubelet[2658]: E0812 23:37:09.155193 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:37:09.155221 kubelet[2658]: W0812 23:37:09.155210 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:37:09.155221 kubelet[2658]: E0812 23:37:09.155226 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:37:09.930778 containerd[1531]: time="2025-08-12T23:37:09.930736129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:09.931706 containerd[1531]: time="2025-08-12T23:37:09.931656923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 12 23:37:09.933010 containerd[1531]: time="2025-08-12T23:37:09.932971171Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:09.934896 containerd[1531]: time="2025-08-12T23:37:09.934854081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:09.936116 containerd[1531]: time="2025-08-12T23:37:09.935993043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.116766592s" Aug 12 23:37:09.936116 containerd[1531]: time="2025-08-12T23:37:09.936027044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:37:09.938357 containerd[1531]: time="2025-08-12T23:37:09.938294008Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:37:09.965343 containerd[1531]: time="2025-08-12T23:37:09.965179722Z" level=info msg="Container 75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:09.984659 containerd[1531]: time="2025-08-12T23:37:09.984590120Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\"" Aug 12 23:37:09.985242 containerd[1531]: time="2025-08-12T23:37:09.985218623Z" level=info msg="StartContainer for \"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\"" Aug 12 23:37:09.986720 containerd[1531]: time="2025-08-12T23:37:09.986688397Z" level=info msg="connecting to shim 75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c" address="unix:///run/containerd/s/99c642760023e34b0ea7ad953f1eaae58cd2017d66efb75fa57318ce194b2200" protocol=ttrpc version=3 Aug 12 23:37:10.007147 kubelet[2658]: E0812 23:37:10.007069 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:10.008528 systemd[1]: Started cri-containerd-75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c.scope - libcontainer container 75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c. 
Aug 12 23:37:10.051585 containerd[1531]: time="2025-08-12T23:37:10.049841667Z" level=info msg="StartContainer for \"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\" returns successfully" Aug 12 23:37:10.068153 systemd[1]: cri-containerd-75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c.scope: Deactivated successfully. Aug 12 23:37:10.091133 containerd[1531]: time="2025-08-12T23:37:10.091075696Z" level=info msg="received exit event container_id:\"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\" id:\"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\" pid:3352 exited_at:{seconds:1755041830 nanos:75212051}" Aug 12 23:37:10.095289 containerd[1531]: time="2025-08-12T23:37:10.094873031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\" id:\"75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c\" pid:3352 exited_at:{seconds:1755041830 nanos:75212051}" Aug 12 23:37:10.113523 kubelet[2658]: I0812 23:37:10.113481 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:37:10.114825 kubelet[2658]: E0812 23:37:10.114775 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:10.136915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75fbbd70800e3dcc306400c35d410bb38bdbc5f8517372a0c567a604217f1b7c-rootfs.mount: Deactivated successfully. 
Aug 12 23:37:11.116583 containerd[1531]: time="2025-08-12T23:37:11.116463440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:37:12.007390 kubelet[2658]: E0812 23:37:12.007308 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:14.008293 kubelet[2658]: E0812 23:37:14.008243 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:14.354204 containerd[1531]: time="2025-08-12T23:37:14.354088268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:14.354816 containerd[1531]: time="2025-08-12T23:37:14.354779570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 12 23:37:14.355361 containerd[1531]: time="2025-08-12T23:37:14.355336667Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:14.357642 containerd[1531]: time="2025-08-12T23:37:14.357604697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:14.358215 containerd[1531]: time="2025-08-12T23:37:14.358178875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.241646593s" Aug 12 23:37:14.358215 containerd[1531]: time="2025-08-12T23:37:14.358211636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:37:14.361295 containerd[1531]: time="2025-08-12T23:37:14.361263691Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:37:14.369654 containerd[1531]: time="2025-08-12T23:37:14.368551677Z" level=info msg="Container 3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:14.377446 containerd[1531]: time="2025-08-12T23:37:14.377401032Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\"" Aug 12 23:37:14.378103 containerd[1531]: time="2025-08-12T23:37:14.378072012Z" level=info msg="StartContainer for \"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\"" Aug 12 23:37:14.380203 containerd[1531]: time="2025-08-12T23:37:14.379895069Z" level=info msg="connecting to shim 3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346" address="unix:///run/containerd/s/99c642760023e34b0ea7ad953f1eaae58cd2017d66efb75fa57318ce194b2200" protocol=ttrpc version=3 Aug 12 23:37:14.409503 systemd[1]: Started cri-containerd-3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346.scope - libcontainer container 
3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346. Aug 12 23:37:14.725993 containerd[1531]: time="2025-08-12T23:37:14.725940245Z" level=info msg="StartContainer for \"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\" returns successfully" Aug 12 23:37:15.314443 containerd[1531]: time="2025-08-12T23:37:15.314398434Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:37:15.317028 systemd[1]: cri-containerd-3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346.scope: Deactivated successfully. Aug 12 23:37:15.317356 systemd[1]: cri-containerd-3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346.scope: Consumed 483ms CPU time, 174M memory peak, 165.8M written to disk. Aug 12 23:37:15.318254 containerd[1531]: time="2025-08-12T23:37:15.318097625Z" level=info msg="received exit event container_id:\"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\" id:\"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\" pid:3414 exited_at:{seconds:1755041835 nanos:317721694}" Aug 12 23:37:15.318254 containerd[1531]: time="2025-08-12T23:37:15.318175828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\" id:\"3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346\" pid:3414 exited_at:{seconds:1755041835 nanos:317721694}" Aug 12 23:37:15.337819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f5cebdd02d6a6a92621f5566b06fa3cb02e233cba4dd15b98ce8f8cdeb1c346-rootfs.mount: Deactivated successfully. 
Aug 12 23:37:15.345256 kubelet[2658]: I0812 23:37:15.344032 2658 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:37:15.423904 systemd[1]: Created slice kubepods-besteffort-pod419b28e0_84b9_4d61_aa83_b5377b54bd8b.slice - libcontainer container kubepods-besteffort-pod419b28e0_84b9_4d61_aa83_b5377b54bd8b.slice. Aug 12 23:37:15.443906 systemd[1]: Created slice kubepods-burstable-podf962c9f9_6696_4a88_914d_4cf57af3f039.slice - libcontainer container kubepods-burstable-podf962c9f9_6696_4a88_914d_4cf57af3f039.slice. Aug 12 23:37:15.454906 systemd[1]: Created slice kubepods-besteffort-podfb967153_ca96_4ffe_9267_1fe8dfaad512.slice - libcontainer container kubepods-besteffort-podfb967153_ca96_4ffe_9267_1fe8dfaad512.slice. Aug 12 23:37:15.460556 systemd[1]: Created slice kubepods-burstable-pod98c68851_1e5d_47be_8edf_990e370bc5b7.slice - libcontainer container kubepods-burstable-pod98c68851_1e5d_47be_8edf_990e370bc5b7.slice. Aug 12 23:37:15.466940 systemd[1]: Created slice kubepods-besteffort-podf8ddd631_9317_47ab_885f_ccbae4a5157d.slice - libcontainer container kubepods-besteffort-podf8ddd631_9317_47ab_885f_ccbae4a5157d.slice. Aug 12 23:37:15.470989 systemd[1]: Created slice kubepods-besteffort-pod9eb00c3b_1588_41d0_a343_7a8582fb6f38.slice - libcontainer container kubepods-besteffort-pod9eb00c3b_1588_41d0_a343_7a8582fb6f38.slice. Aug 12 23:37:15.475405 systemd[1]: Created slice kubepods-besteffort-podc7d69676_2d42_4a97_b0df_97abd0f14cb8.slice - libcontainer container kubepods-besteffort-podc7d69676_2d42_4a97_b0df_97abd0f14cb8.slice. 
Aug 12 23:37:15.509574 kubelet[2658]: I0812 23:37:15.509529 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skm27\" (UniqueName: \"kubernetes.io/projected/fb967153-ca96-4ffe-9267-1fe8dfaad512-kube-api-access-skm27\") pod \"goldmane-768f4c5c69-kbx6z\" (UID: \"fb967153-ca96-4ffe-9267-1fe8dfaad512\") " pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:15.509574 kubelet[2658]: I0812 23:37:15.509572 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb967153-ca96-4ffe-9267-1fe8dfaad512-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-kbx6z\" (UID: \"fb967153-ca96-4ffe-9267-1fe8dfaad512\") " pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:15.509749 kubelet[2658]: I0812 23:37:15.509592 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfs7\" (UniqueName: \"kubernetes.io/projected/9eb00c3b-1588-41d0-a343-7a8582fb6f38-kube-api-access-dbfs7\") pod \"calico-apiserver-5b577fd4f9-gmxv5\" (UID: \"9eb00c3b-1588-41d0-a343-7a8582fb6f38\") " pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" Aug 12 23:37:15.509749 kubelet[2658]: I0812 23:37:15.509609 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98c68851-1e5d-47be-8edf-990e370bc5b7-config-volume\") pod \"coredns-668d6bf9bc-9n5cx\" (UID: \"98c68851-1e5d-47be-8edf-990e370bc5b7\") " pod="kube-system/coredns-668d6bf9bc-9n5cx" Aug 12 23:37:15.509749 kubelet[2658]: I0812 23:37:15.509673 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdvkr\" (UniqueName: \"kubernetes.io/projected/98c68851-1e5d-47be-8edf-990e370bc5b7-kube-api-access-xdvkr\") pod \"coredns-668d6bf9bc-9n5cx\" (UID: 
\"98c68851-1e5d-47be-8edf-990e370bc5b7\") " pod="kube-system/coredns-668d6bf9bc-9n5cx" Aug 12 23:37:15.509749 kubelet[2658]: I0812 23:37:15.509709 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-ca-bundle\") pod \"whisker-7b589888cc-p2ckx\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " pod="calico-system/whisker-7b589888cc-p2ckx" Aug 12 23:37:15.509749 kubelet[2658]: I0812 23:37:15.509728 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfljx\" (UniqueName: \"kubernetes.io/projected/419b28e0-84b9-4d61-aa83-b5377b54bd8b-kube-api-access-vfljx\") pod \"calico-kube-controllers-5bdf877cd4-rvm5d\" (UID: \"419b28e0-84b9-4d61-aa83-b5377b54bd8b\") " pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" Aug 12 23:37:15.509858 kubelet[2658]: I0812 23:37:15.509755 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7d69676-2d42-4a97-b0df-97abd0f14cb8-calico-apiserver-certs\") pod \"calico-apiserver-5b577fd4f9-z6p5k\" (UID: \"c7d69676-2d42-4a97-b0df-97abd0f14cb8\") " pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" Aug 12 23:37:15.509858 kubelet[2658]: I0812 23:37:15.509773 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-backend-key-pair\") pod \"whisker-7b589888cc-p2ckx\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " pod="calico-system/whisker-7b589888cc-p2ckx" Aug 12 23:37:15.509858 kubelet[2658]: I0812 23:37:15.509827 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f962c9f9-6696-4a88-914d-4cf57af3f039-config-volume\") pod \"coredns-668d6bf9bc-ms66l\" (UID: \"f962c9f9-6696-4a88-914d-4cf57af3f039\") " pod="kube-system/coredns-668d6bf9bc-ms66l" Aug 12 23:37:15.509920 kubelet[2658]: I0812 23:37:15.509869 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8knb\" (UniqueName: \"kubernetes.io/projected/f962c9f9-6696-4a88-914d-4cf57af3f039-kube-api-access-n8knb\") pod \"coredns-668d6bf9bc-ms66l\" (UID: \"f962c9f9-6696-4a88-914d-4cf57af3f039\") " pod="kube-system/coredns-668d6bf9bc-ms66l" Aug 12 23:37:15.509920 kubelet[2658]: I0812 23:37:15.509892 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419b28e0-84b9-4d61-aa83-b5377b54bd8b-tigera-ca-bundle\") pod \"calico-kube-controllers-5bdf877cd4-rvm5d\" (UID: \"419b28e0-84b9-4d61-aa83-b5377b54bd8b\") " pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" Aug 12 23:37:15.509920 kubelet[2658]: I0812 23:37:15.509911 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb967153-ca96-4ffe-9267-1fe8dfaad512-config\") pod \"goldmane-768f4c5c69-kbx6z\" (UID: \"fb967153-ca96-4ffe-9267-1fe8dfaad512\") " pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:15.509986 kubelet[2658]: I0812 23:37:15.509926 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fb967153-ca96-4ffe-9267-1fe8dfaad512-goldmane-key-pair\") pod \"goldmane-768f4c5c69-kbx6z\" (UID: \"fb967153-ca96-4ffe-9267-1fe8dfaad512\") " pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:15.509986 kubelet[2658]: I0812 23:37:15.509962 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pw78m\" (UniqueName: \"kubernetes.io/projected/c7d69676-2d42-4a97-b0df-97abd0f14cb8-kube-api-access-pw78m\") pod \"calico-apiserver-5b577fd4f9-z6p5k\" (UID: \"c7d69676-2d42-4a97-b0df-97abd0f14cb8\") " pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" Aug 12 23:37:15.510026 kubelet[2658]: I0812 23:37:15.509989 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2wqr\" (UniqueName: \"kubernetes.io/projected/f8ddd631-9317-47ab-885f-ccbae4a5157d-kube-api-access-w2wqr\") pod \"whisker-7b589888cc-p2ckx\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " pod="calico-system/whisker-7b589888cc-p2ckx" Aug 12 23:37:15.510362 kubelet[2658]: I0812 23:37:15.510087 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9eb00c3b-1588-41d0-a343-7a8582fb6f38-calico-apiserver-certs\") pod \"calico-apiserver-5b577fd4f9-gmxv5\" (UID: \"9eb00c3b-1588-41d0-a343-7a8582fb6f38\") " pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" Aug 12 23:37:15.735862 containerd[1531]: time="2025-08-12T23:37:15.735817537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdf877cd4-rvm5d,Uid:419b28e0-84b9-4d61-aa83-b5377b54bd8b,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:15.750154 kubelet[2658]: E0812 23:37:15.750104 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:15.754809 containerd[1531]: time="2025-08-12T23:37:15.754764546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms66l,Uid:f962c9f9-6696-4a88-914d-4cf57af3f039,Namespace:kube-system,Attempt:0,}" Aug 12 23:37:15.761589 containerd[1531]: time="2025-08-12T23:37:15.761549310Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-kbx6z,Uid:fb967153-ca96-4ffe-9267-1fe8dfaad512,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:15.764667 kubelet[2658]: E0812 23:37:15.764606 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:15.771051 containerd[1531]: time="2025-08-12T23:37:15.770597502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9n5cx,Uid:98c68851-1e5d-47be-8edf-990e370bc5b7,Namespace:kube-system,Attempt:0,}" Aug 12 23:37:15.776335 containerd[1531]: time="2025-08-12T23:37:15.776283313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b589888cc-p2ckx,Uid:f8ddd631-9317-47ab-885f-ccbae4a5157d,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:15.776914 containerd[1531]: time="2025-08-12T23:37:15.776873891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-gmxv5,Uid:9eb00c3b-1588-41d0-a343-7a8582fb6f38,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:37:15.781445 containerd[1531]: time="2025-08-12T23:37:15.781408547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-z6p5k,Uid:c7d69676-2d42-4a97-b0df-97abd0f14cb8,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:37:16.041852 systemd[1]: Created slice kubepods-besteffort-pod6f6acd37_c53d_49cd_8abd_4c20e696ec5d.slice - libcontainer container kubepods-besteffort-pod6f6acd37_c53d_49cd_8abd_4c20e696ec5d.slice. 
Aug 12 23:37:16.046157 containerd[1531]: time="2025-08-12T23:37:16.045930057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m5bp,Uid:6f6acd37-c53d-49cd-8abd-4c20e696ec5d,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:16.137150 containerd[1531]: time="2025-08-12T23:37:16.137109553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:37:16.149218 containerd[1531]: time="2025-08-12T23:37:16.149158704Z" level=error msg="Failed to destroy network for sandbox \"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.158849 containerd[1531]: time="2025-08-12T23:37:16.158783504Z" level=error msg="Failed to destroy network for sandbox \"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.160307 containerd[1531]: time="2025-08-12T23:37:16.160249627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kbx6z,Uid:fb967153-ca96-4ffe-9267-1fe8dfaad512,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.160523 containerd[1531]: time="2025-08-12T23:37:16.160474274Z" level=error msg="Failed to destroy network for sandbox \"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.164049 kubelet[2658]: E0812 23:37:16.163975 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.164698 containerd[1531]: time="2025-08-12T23:37:16.164633355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-gmxv5,Uid:9eb00c3b-1588-41d0-a343-7a8582fb6f38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.164971 kubelet[2658]: E0812 23:37:16.164922 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.165064 kubelet[2658]: E0812 23:37:16.164982 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" Aug 12 23:37:16.165064 kubelet[2658]: E0812 23:37:16.165003 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" Aug 12 23:37:16.165064 kubelet[2658]: E0812 23:37:16.165047 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b577fd4f9-gmxv5_calico-apiserver(9eb00c3b-1588-41d0-a343-7a8582fb6f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b577fd4f9-gmxv5_calico-apiserver(9eb00c3b-1588-41d0-a343-7a8582fb6f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a83c180750646dc2066efe8c04c19e0f82cfe62c6b5c0cf07cba0d278a15e335\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" podUID="9eb00c3b-1588-41d0-a343-7a8582fb6f38" Aug 12 23:37:16.165166 containerd[1531]: time="2025-08-12T23:37:16.165119769Z" level=error msg="Failed to destroy network for sandbox \"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.166173 containerd[1531]: time="2025-08-12T23:37:16.166138519Z" level=error msg="Failed to destroy network for sandbox 
\"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.166776 kubelet[2658]: E0812 23:37:16.166736 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:16.166828 kubelet[2658]: E0812 23:37:16.166806 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kbx6z" Aug 12 23:37:16.166911 kubelet[2658]: E0812 23:37:16.166886 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-kbx6z_calico-system(fb967153-ca96-4ffe-9267-1fe8dfaad512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-kbx6z_calico-system(fb967153-ca96-4ffe-9267-1fe8dfaad512)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b106e219520031bf9cdecfa7f3a31495139ddda0f94f0a2ef995c4ce7db4828e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-768f4c5c69-kbx6z" podUID="fb967153-ca96-4ffe-9267-1fe8dfaad512" Aug 12 23:37:16.167927 containerd[1531]: time="2025-08-12T23:37:16.167875929Z" level=error msg="Failed to destroy network for sandbox \"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.175389 containerd[1531]: time="2025-08-12T23:37:16.175292425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9n5cx,Uid:98c68851-1e5d-47be-8edf-990e370bc5b7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.175535 containerd[1531]: time="2025-08-12T23:37:16.175464510Z" level=error msg="Failed to destroy network for sandbox \"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.175733 kubelet[2658]: E0812 23:37:16.175692 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.175978 kubelet[2658]: E0812 23:37:16.175753 2658 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9n5cx" Aug 12 23:37:16.175978 kubelet[2658]: E0812 23:37:16.175774 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9n5cx" Aug 12 23:37:16.175978 kubelet[2658]: E0812 23:37:16.175817 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9n5cx_kube-system(98c68851-1e5d-47be-8edf-990e370bc5b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9n5cx_kube-system(98c68851-1e5d-47be-8edf-990e370bc5b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38373a0dc406353962128264643a3fa6b20bdef2f2e8a14b026d5bc645f89ee7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9n5cx" podUID="98c68851-1e5d-47be-8edf-990e370bc5b7" Aug 12 23:37:16.176081 containerd[1531]: time="2025-08-12T23:37:16.176020967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-z6p5k,Uid:c7d69676-2d42-4a97-b0df-97abd0f14cb8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.176476 kubelet[2658]: E0812 23:37:16.176386 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.176571 kubelet[2658]: E0812 23:37:16.176540 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" Aug 12 23:37:16.176571 kubelet[2658]: E0812 23:37:16.176563 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" Aug 12 23:37:16.177050 kubelet[2658]: E0812 23:37:16.176952 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b577fd4f9-z6p5k_calico-apiserver(c7d69676-2d42-4a97-b0df-97abd0f14cb8)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b577fd4f9-z6p5k_calico-apiserver(c7d69676-2d42-4a97-b0df-97abd0f14cb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ddee2a38d35289ea79d13799efa7d06a0a2bacade192406792ff358a8adeaaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" podUID="c7d69676-2d42-4a97-b0df-97abd0f14cb8" Aug 12 23:37:16.186524 containerd[1531]: time="2025-08-12T23:37:16.186425550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms66l,Uid:f962c9f9-6696-4a88-914d-4cf57af3f039,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.186755 kubelet[2658]: E0812 23:37:16.186713 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.186817 kubelet[2658]: E0812 23:37:16.186772 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ms66l" Aug 12 23:37:16.186817 kubelet[2658]: E0812 23:37:16.186792 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ms66l" Aug 12 23:37:16.186877 kubelet[2658]: E0812 23:37:16.186828 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ms66l_kube-system(f962c9f9-6696-4a88-914d-4cf57af3f039)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ms66l_kube-system(f962c9f9-6696-4a88-914d-4cf57af3f039)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c1b2e8312cf3e601d09928463b0670a1c3b9168f91323afdfc6021cbf1d47b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ms66l" podUID="f962c9f9-6696-4a88-914d-4cf57af3f039" Aug 12 23:37:16.194541 containerd[1531]: time="2025-08-12T23:37:16.194471264Z" level=error msg="Failed to destroy network for sandbox \"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.198597 containerd[1531]: time="2025-08-12T23:37:16.198539063Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7b589888cc-p2ckx,Uid:f8ddd631-9317-47ab-885f-ccbae4a5157d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.199025 kubelet[2658]: E0812 23:37:16.198971 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.199025 kubelet[2658]: E0812 23:37:16.199030 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b589888cc-p2ckx" Aug 12 23:37:16.199155 kubelet[2658]: E0812 23:37:16.199049 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b589888cc-p2ckx" Aug 12 23:37:16.199155 kubelet[2658]: E0812 23:37:16.199090 2658 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b589888cc-p2ckx_calico-system(f8ddd631-9317-47ab-885f-ccbae4a5157d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b589888cc-p2ckx_calico-system(f8ddd631-9317-47ab-885f-ccbae4a5157d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a2ba9e020d101523d096211bed4309fb828aec065680afdd02cdc0157839480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b589888cc-p2ckx" podUID="f8ddd631-9317-47ab-885f-ccbae4a5157d" Aug 12 23:37:16.222880 containerd[1531]: time="2025-08-12T23:37:16.222817290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdf877cd4-rvm5d,Uid:419b28e0-84b9-4d61-aa83-b5377b54bd8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.223304 kubelet[2658]: E0812 23:37:16.223243 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.223376 kubelet[2658]: E0812 23:37:16.223350 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" Aug 12 23:37:16.223409 kubelet[2658]: E0812 23:37:16.223372 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" Aug 12 23:37:16.223452 kubelet[2658]: E0812 23:37:16.223421 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bdf877cd4-rvm5d_calico-system(419b28e0-84b9-4d61-aa83-b5377b54bd8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bdf877cd4-rvm5d_calico-system(419b28e0-84b9-4d61-aa83-b5377b54bd8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10ff2975002c8d2134b0a2c9c1c9c9539a86cd7dc401928e925bea1eaae404d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" podUID="419b28e0-84b9-4d61-aa83-b5377b54bd8b" Aug 12 23:37:16.284904 containerd[1531]: time="2025-08-12T23:37:16.284830056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m5bp,Uid:6f6acd37-c53d-49cd-8abd-4c20e696ec5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.285125 kubelet[2658]: E0812 23:37:16.285068 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:37:16.285171 kubelet[2658]: E0812 23:37:16.285140 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:16.285171 kubelet[2658]: E0812 23:37:16.285164 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9m5bp" Aug 12 23:37:16.285234 kubelet[2658]: E0812 23:37:16.285206 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9m5bp_calico-system(6f6acd37-c53d-49cd-8abd-4c20e696ec5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-9m5bp_calico-system(6f6acd37-c53d-49cd-8abd-4c20e696ec5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d02a5130215b4c1bbb36a990db7e638400698712b380ba8dc6443a7713650523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9m5bp" podUID="6f6acd37-c53d-49cd-8abd-4c20e696ec5d" Aug 12 23:37:16.617639 systemd[1]: run-netns-cni\x2de1154403\x2d7bb9\x2db377\x2dbbf9\x2dc444238a0231.mount: Deactivated successfully. Aug 12 23:37:20.777938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923111129.mount: Deactivated successfully. Aug 12 23:37:21.032466 containerd[1531]: time="2025-08-12T23:37:21.032342860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:21.033257 containerd[1531]: time="2025-08-12T23:37:21.033184001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 12 23:37:21.064487 containerd[1531]: time="2025-08-12T23:37:21.064411672Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:21.065753 containerd[1531]: time="2025-08-12T23:37:21.064983327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.927831693s" Aug 12 23:37:21.065753 containerd[1531]: time="2025-08-12T23:37:21.065015808Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:37:21.065753 containerd[1531]: time="2025-08-12T23:37:21.065377777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:21.089174 containerd[1531]: time="2025-08-12T23:37:21.089129739Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:37:21.102146 containerd[1531]: time="2025-08-12T23:37:21.102078547Z" level=info msg="Container 3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:21.111419 containerd[1531]: time="2025-08-12T23:37:21.111385543Z" level=info msg="CreateContainer within sandbox \"7d91da5de22e059ff544107090aadd535e652cb462205a22f42e5ca53e3dcb84\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\"" Aug 12 23:37:21.114638 containerd[1531]: time="2025-08-12T23:37:21.114417340Z" level=info msg="StartContainer for \"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\"" Aug 12 23:37:21.116101 containerd[1531]: time="2025-08-12T23:37:21.116070342Z" level=info msg="connecting to shim 3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3" address="unix:///run/containerd/s/99c642760023e34b0ea7ad953f1eaae58cd2017d66efb75fa57318ce194b2200" protocol=ttrpc version=3 Aug 12 23:37:21.141467 systemd[1]: Started cri-containerd-3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3.scope - libcontainer container 3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3. 
Aug 12 23:37:21.176030 containerd[1531]: time="2025-08-12T23:37:21.175988060Z" level=info msg="StartContainer for \"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\" returns successfully" Aug 12 23:37:21.394391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:37:21.394506 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 12 23:37:21.573474 kubelet[2658]: I0812 23:37:21.573422 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2wqr\" (UniqueName: \"kubernetes.io/projected/f8ddd631-9317-47ab-885f-ccbae4a5157d-kube-api-access-w2wqr\") pod \"f8ddd631-9317-47ab-885f-ccbae4a5157d\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " Aug 12 23:37:21.573878 kubelet[2658]: I0812 23:37:21.573497 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-backend-key-pair\") pod \"f8ddd631-9317-47ab-885f-ccbae4a5157d\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " Aug 12 23:37:21.573878 kubelet[2658]: I0812 23:37:21.573584 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-ca-bundle\") pod \"f8ddd631-9317-47ab-885f-ccbae4a5157d\" (UID: \"f8ddd631-9317-47ab-885f-ccbae4a5157d\") " Aug 12 23:37:21.574103 kubelet[2658]: I0812 23:37:21.574080 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f8ddd631-9317-47ab-885f-ccbae4a5157d" (UID: "f8ddd631-9317-47ab-885f-ccbae4a5157d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:37:21.577264 kubelet[2658]: I0812 23:37:21.577224 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ddd631-9317-47ab-885f-ccbae4a5157d-kube-api-access-w2wqr" (OuterVolumeSpecName: "kube-api-access-w2wqr") pod "f8ddd631-9317-47ab-885f-ccbae4a5157d" (UID: "f8ddd631-9317-47ab-885f-ccbae4a5157d"). InnerVolumeSpecName "kube-api-access-w2wqr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:37:21.581653 kubelet[2658]: I0812 23:37:21.581601 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f8ddd631-9317-47ab-885f-ccbae4a5157d" (UID: "f8ddd631-9317-47ab-885f-ccbae4a5157d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:37:21.674622 kubelet[2658]: I0812 23:37:21.674580 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 12 23:37:21.674622 kubelet[2658]: I0812 23:37:21.674620 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w2wqr\" (UniqueName: \"kubernetes.io/projected/f8ddd631-9317-47ab-885f-ccbae4a5157d-kube-api-access-w2wqr\") on node \"localhost\" DevicePath \"\"" Aug 12 23:37:21.674622 kubelet[2658]: I0812 23:37:21.674630 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ddd631-9317-47ab-885f-ccbae4a5157d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 12 23:37:21.778813 systemd[1]: 
var-lib-kubelet-pods-f8ddd631\x2d9317\x2d47ab\x2d885f\x2dccbae4a5157d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2wqr.mount: Deactivated successfully. Aug 12 23:37:21.778908 systemd[1]: var-lib-kubelet-pods-f8ddd631\x2d9317\x2d47ab\x2d885f\x2dccbae4a5157d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 12 23:37:22.159711 systemd[1]: Removed slice kubepods-besteffort-podf8ddd631_9317_47ab_885f_ccbae4a5157d.slice - libcontainer container kubepods-besteffort-podf8ddd631_9317_47ab_885f_ccbae4a5157d.slice. Aug 12 23:37:22.192449 kubelet[2658]: I0812 23:37:22.192384 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bj2r9" podStartSLOduration=1.725626246 podStartE2EDuration="16.192363262s" podCreationTimestamp="2025-08-12 23:37:06 +0000 UTC" firstStartedPulling="2025-08-12 23:37:06.60453763 +0000 UTC m=+21.671570642" lastFinishedPulling="2025-08-12 23:37:21.071274606 +0000 UTC m=+36.138307658" observedRunningTime="2025-08-12 23:37:22.191580203 +0000 UTC m=+37.258613255" watchObservedRunningTime="2025-08-12 23:37:22.192363262 +0000 UTC m=+37.259396354" Aug 12 23:37:22.379693 systemd[1]: Created slice kubepods-besteffort-pod05cb6bad_e1c4_40ec_a17e_60735c9f45b8.slice - libcontainer container kubepods-besteffort-pod05cb6bad_e1c4_40ec_a17e_60735c9f45b8.slice. 
Aug 12 23:37:22.480583 kubelet[2658]: I0812 23:37:22.480488 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05cb6bad-e1c4-40ec-a17e-60735c9f45b8-whisker-backend-key-pair\") pod \"whisker-748b9899-brv8p\" (UID: \"05cb6bad-e1c4-40ec-a17e-60735c9f45b8\") " pod="calico-system/whisker-748b9899-brv8p" Aug 12 23:37:22.480583 kubelet[2658]: I0812 23:37:22.480534 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cb6bad-e1c4-40ec-a17e-60735c9f45b8-whisker-ca-bundle\") pod \"whisker-748b9899-brv8p\" (UID: \"05cb6bad-e1c4-40ec-a17e-60735c9f45b8\") " pod="calico-system/whisker-748b9899-brv8p" Aug 12 23:37:22.480583 kubelet[2658]: I0812 23:37:22.480562 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwt9\" (UniqueName: \"kubernetes.io/projected/05cb6bad-e1c4-40ec-a17e-60735c9f45b8-kube-api-access-2rwt9\") pod \"whisker-748b9899-brv8p\" (UID: \"05cb6bad-e1c4-40ec-a17e-60735c9f45b8\") " pod="calico-system/whisker-748b9899-brv8p" Aug 12 23:37:22.683649 containerd[1531]: time="2025-08-12T23:37:22.683607807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b9899-brv8p,Uid:05cb6bad-e1c4-40ec-a17e-60735c9f45b8,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:23.010327 kubelet[2658]: I0812 23:37:23.010279 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ddd631-9317-47ab-885f-ccbae4a5157d" path="/var/lib/kubelet/pods/f8ddd631-9317-47ab-885f-ccbae4a5157d/volumes" Aug 12 23:37:23.032919 systemd-networkd[1439]: cali26deee87f39: Link UP Aug 12 23:37:23.033244 systemd-networkd[1439]: cali26deee87f39: Gained carrier Aug 12 23:37:23.048797 containerd[1531]: 2025-08-12 23:37:22.768 [INFO][3887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does 
not exist Aug 12 23:37:23.048797 containerd[1531]: 2025-08-12 23:37:22.842 [INFO][3887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--748b9899--brv8p-eth0 whisker-748b9899- calico-system 05cb6bad-e1c4-40ec-a17e-60735c9f45b8 929 0 2025-08-12 23:37:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:748b9899 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-748b9899-brv8p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26deee87f39 [] [] }} ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-" Aug 12 23:37:23.048797 containerd[1531]: 2025-08-12 23:37:22.842 [INFO][3887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.048797 containerd[1531]: 2025-08-12 23:37:22.979 [INFO][3908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" HandleID="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Workload="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.980 [INFO][3908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" HandleID="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Workload="localhost-k8s-whisker--748b9899--brv8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000207640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-748b9899-brv8p", "timestamp":"2025-08-12 23:37:22.979955814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.980 [INFO][3908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.980 [INFO][3908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.980 [INFO][3908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.992 [INFO][3908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" host="localhost" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:22.997 [INFO][3908] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:23.002 [INFO][3908] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:23.005 [INFO][3908] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:23.007 [INFO][3908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:23.049097 containerd[1531]: 2025-08-12 23:37:23.007 [INFO][3908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" 
host="localhost" Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.009 [INFO][3908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.013 [INFO][3908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" host="localhost" Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.018 [INFO][3908] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" host="localhost" Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.019 [INFO][3908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" host="localhost" Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.019 [INFO][3908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:37:23.049294 containerd[1531]: 2025-08-12 23:37:23.019 [INFO][3908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" HandleID="k8s-pod-network.e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Workload="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.049430 containerd[1531]: 2025-08-12 23:37:23.022 [INFO][3887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748b9899--brv8p-eth0", GenerateName:"whisker-748b9899-", Namespace:"calico-system", SelfLink:"", UID:"05cb6bad-e1c4-40ec-a17e-60735c9f45b8", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748b9899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-748b9899-brv8p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26deee87f39", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:23.049430 containerd[1531]: 2025-08-12 23:37:23.022 [INFO][3887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.049522 containerd[1531]: 2025-08-12 23:37:23.022 [INFO][3887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26deee87f39 ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.049522 containerd[1531]: 2025-08-12 23:37:23.033 [INFO][3887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.049569 containerd[1531]: 2025-08-12 23:37:23.035 [INFO][3887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748b9899--brv8p-eth0", GenerateName:"whisker-748b9899-", Namespace:"calico-system", SelfLink:"", UID:"05cb6bad-e1c4-40ec-a17e-60735c9f45b8", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748b9899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d", Pod:"whisker-748b9899-brv8p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26deee87f39", MAC:"1e:38:71:50:40:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:23.049617 containerd[1531]: 2025-08-12 23:37:23.046 [INFO][3887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" Namespace="calico-system" Pod="whisker-748b9899-brv8p" WorkloadEndpoint="localhost-k8s-whisker--748b9899--brv8p-eth0" Aug 12 23:37:23.114805 containerd[1531]: time="2025-08-12T23:37:23.114747360Z" level=info msg="connecting to shim e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d" address="unix:///run/containerd/s/468f7a0ca1a4aba669f6edd547dedd2ff81392a543d526a6702d11d8bfaf6453" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:23.143518 systemd[1]: Started cri-containerd-e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d.scope - libcontainer container e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d. 
Aug 12 23:37:23.154614 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:23.159539 kubelet[2658]: I0812 23:37:23.159340 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:37:23.178812 containerd[1531]: time="2025-08-12T23:37:23.178771346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b9899-brv8p,Uid:05cb6bad-e1c4-40ec-a17e-60735c9f45b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d\"" Aug 12 23:37:23.183027 containerd[1531]: time="2025-08-12T23:37:23.182810043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:37:24.390434 containerd[1531]: time="2025-08-12T23:37:24.390389023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:24.391410 containerd[1531]: time="2025-08-12T23:37:24.390794633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:37:24.391679 containerd[1531]: time="2025-08-12T23:37:24.391653253Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:24.393898 containerd[1531]: time="2025-08-12T23:37:24.393867985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:24.394720 containerd[1531]: time="2025-08-12T23:37:24.394392397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.211543473s" Aug 12 23:37:24.395050 containerd[1531]: time="2025-08-12T23:37:24.395031452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:37:24.398901 containerd[1531]: time="2025-08-12T23:37:24.398858703Z" level=info msg="CreateContainer within sandbox \"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:37:24.405020 containerd[1531]: time="2025-08-12T23:37:24.404981487Z" level=info msg="Container e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:24.457832 containerd[1531]: time="2025-08-12T23:37:24.457776413Z" level=info msg="CreateContainer within sandbox \"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9\"" Aug 12 23:37:24.459725 containerd[1531]: time="2025-08-12T23:37:24.459641097Z" level=info msg="StartContainer for \"e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9\"" Aug 12 23:37:24.461436 containerd[1531]: time="2025-08-12T23:37:24.461410779Z" level=info msg="connecting to shim e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9" address="unix:///run/containerd/s/468f7a0ca1a4aba669f6edd547dedd2ff81392a543d526a6702d11d8bfaf6453" protocol=ttrpc version=3 Aug 12 23:37:24.481459 systemd[1]: Started cri-containerd-e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9.scope - libcontainer container e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9. 
Aug 12 23:37:24.514072 containerd[1531]: time="2025-08-12T23:37:24.514035780Z" level=info msg="StartContainer for \"e835b021338f929251f8fd97898dcb6bdd417aaade7de32799963f5ce199dcd9\" returns successfully" Aug 12 23:37:24.516260 containerd[1531]: time="2025-08-12T23:37:24.515969146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:37:24.562432 systemd-networkd[1439]: cali26deee87f39: Gained IPv6LL Aug 12 23:37:26.231101 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:57018.service - OpenSSH per-connection server daemon (10.0.0.1:57018). Aug 12 23:37:26.253552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632689373.mount: Deactivated successfully. Aug 12 23:37:26.319593 containerd[1531]: time="2025-08-12T23:37:26.319539154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:26.320041 containerd[1531]: time="2025-08-12T23:37:26.320008284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 12 23:37:26.320772 containerd[1531]: time="2025-08-12T23:37:26.320739861Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:26.322924 containerd[1531]: time="2025-08-12T23:37:26.322889709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:26.323847 containerd[1531]: time="2025-08-12T23:37:26.323509803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", 
repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.807476776s" Aug 12 23:37:26.323847 containerd[1531]: time="2025-08-12T23:37:26.323540724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:37:26.325895 containerd[1531]: time="2025-08-12T23:37:26.325837176Z" level=info msg="CreateContainer within sandbox \"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:37:26.348246 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 57018 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:37:26.350425 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:37:26.353690 containerd[1531]: time="2025-08-12T23:37:26.352817026Z" level=info msg="Container a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:26.356794 systemd-logind[1515]: New session 8 of user core. 
Aug 12 23:37:26.361608 containerd[1531]: time="2025-08-12T23:37:26.361554223Z" level=info msg="CreateContainer within sandbox \"e73b1d7b8780082c045b4ba595b2c13d02a2888e51d7241d5140d003fa71fe0d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab\"" Aug 12 23:37:26.362135 containerd[1531]: time="2025-08-12T23:37:26.362088635Z" level=info msg="StartContainer for \"a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab\"" Aug 12 23:37:26.363785 containerd[1531]: time="2025-08-12T23:37:26.363747193Z" level=info msg="connecting to shim a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab" address="unix:///run/containerd/s/468f7a0ca1a4aba669f6edd547dedd2ff81392a543d526a6702d11d8bfaf6453" protocol=ttrpc version=3 Aug 12 23:37:26.365510 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:37:26.395538 systemd[1]: Started cri-containerd-a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab.scope - libcontainer container a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab. Aug 12 23:37:26.454160 containerd[1531]: time="2025-08-12T23:37:26.449612214Z" level=info msg="StartContainer for \"a13487e3b0bddb17b288a018f31080335426ea6bb490e0350cc0abb6f440a0ab\" returns successfully" Aug 12 23:37:26.557557 sshd[4096]: Connection closed by 10.0.0.1 port 57018 Aug 12 23:37:26.557822 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Aug 12 23:37:26.561462 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:57018.service: Deactivated successfully. Aug 12 23:37:26.563980 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:37:26.565257 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:37:26.566575 systemd-logind[1515]: Removed session 8. 
Aug 12 23:37:27.008643 kubelet[2658]: E0812 23:37:27.008537 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:27.009922 containerd[1531]: time="2025-08-12T23:37:27.008945374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms66l,Uid:f962c9f9-6696-4a88-914d-4cf57af3f039,Namespace:kube-system,Attempt:0,}" Aug 12 23:37:27.137344 systemd-networkd[1439]: calid4b2e71d0b3: Link UP Aug 12 23:37:27.138037 systemd-networkd[1439]: calid4b2e71d0b3: Gained carrier Aug 12 23:37:27.161405 containerd[1531]: 2025-08-12 23:37:27.035 [INFO][4142] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:27.161405 containerd[1531]: 2025-08-12 23:37:27.049 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ms66l-eth0 coredns-668d6bf9bc- kube-system f962c9f9-6696-4a88-914d-4cf57af3f039 864 0 2025-08-12 23:36:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ms66l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4b2e71d0b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-" Aug 12 23:37:27.161405 containerd[1531]: 2025-08-12 23:37:27.049 [INFO][4142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" 
Aug 12 23:37:27.161405 containerd[1531]: 2025-08-12 23:37:27.085 [INFO][4158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" HandleID="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Workload="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.085 [INFO][4158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" HandleID="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Workload="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ms66l", "timestamp":"2025-08-12 23:37:27.085542551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.085 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.085 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.085 [INFO][4158] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.099 [INFO][4158] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" host="localhost" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.104 [INFO][4158] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.109 [INFO][4158] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.113 [INFO][4158] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.116 [INFO][4158] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:27.161627 containerd[1531]: 2025-08-12 23:37:27.117 [INFO][4158] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" host="localhost" Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.121 [INFO][4158] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.125 [INFO][4158] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" host="localhost" Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.131 [INFO][4158] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" host="localhost" Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.131 [INFO][4158] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" host="localhost" Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.131 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:37:27.161838 containerd[1531]: 2025-08-12 23:37:27.131 [INFO][4158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" HandleID="k8s-pod-network.4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Workload="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.161968 containerd[1531]: 2025-08-12 23:37:27.134 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms66l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f962c9f9-6696-4a88-914d-4cf57af3f039", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ms66l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b2e71d0b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:27.162043 containerd[1531]: 2025-08-12 23:37:27.134 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.162043 containerd[1531]: 2025-08-12 23:37:27.134 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4b2e71d0b3 ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.162043 containerd[1531]: 2025-08-12 23:37:27.139 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.162118 containerd[1531]: 2025-08-12 23:37:27.140 [INFO][4142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms66l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f962c9f9-6696-4a88-914d-4cf57af3f039", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd", Pod:"coredns-668d6bf9bc-ms66l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4b2e71d0b3", MAC:"de:c7:7e:1e:78:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:27.162118 containerd[1531]: 2025-08-12 23:37:27.159 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms66l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms66l-eth0" Aug 12 23:37:27.212496 containerd[1531]: time="2025-08-12T23:37:27.212449162Z" level=info msg="connecting to shim 4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd" address="unix:///run/containerd/s/9ccb08dfb4c3fdc8245cc0558ed5fbc0a585f244dd296610b6d9157f4b3f7f27" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:27.218165 kubelet[2658]: I0812 23:37:27.218098 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-748b9899-brv8p" podStartSLOduration=2.07619478 podStartE2EDuration="5.218082287s" podCreationTimestamp="2025-08-12 23:37:22 +0000 UTC" firstStartedPulling="2025-08-12 23:37:23.182489076 +0000 UTC m=+38.249522128" lastFinishedPulling="2025-08-12 23:37:26.324376583 +0000 UTC m=+41.391409635" observedRunningTime="2025-08-12 23:37:27.217734159 +0000 UTC m=+42.284767211" watchObservedRunningTime="2025-08-12 23:37:27.218082287 +0000 UTC m=+42.285115339" Aug 12 23:37:27.253683 systemd[1]: Started cri-containerd-4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd.scope - libcontainer container 4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd. 
Aug 12 23:37:27.266740 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:27.289124 containerd[1531]: time="2025-08-12T23:37:27.289084420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms66l,Uid:f962c9f9-6696-4a88-914d-4cf57af3f039,Namespace:kube-system,Attempt:0,} returns sandbox id \"4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd\"" Aug 12 23:37:27.289951 kubelet[2658]: E0812 23:37:27.289925 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:27.293360 containerd[1531]: time="2025-08-12T23:37:27.293023147Z" level=info msg="CreateContainer within sandbox \"4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:37:27.306759 containerd[1531]: time="2025-08-12T23:37:27.306699930Z" level=info msg="Container 6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:27.308175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406906845.mount: Deactivated successfully. 
Aug 12 23:37:27.313586 containerd[1531]: time="2025-08-12T23:37:27.313543281Z" level=info msg="CreateContainer within sandbox \"4029c120bfccd08997812bbd6664d35ff2465507ab74de4b472868b852c161bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1\"" Aug 12 23:37:27.314222 containerd[1531]: time="2025-08-12T23:37:27.314194576Z" level=info msg="StartContainer for \"6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1\"" Aug 12 23:37:27.315664 containerd[1531]: time="2025-08-12T23:37:27.315365042Z" level=info msg="connecting to shim 6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1" address="unix:///run/containerd/s/9ccb08dfb4c3fdc8245cc0558ed5fbc0a585f244dd296610b6d9157f4b3f7f27" protocol=ttrpc version=3 Aug 12 23:37:27.344508 systemd[1]: Started cri-containerd-6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1.scope - libcontainer container 6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1. 
Aug 12 23:37:27.371882 containerd[1531]: time="2025-08-12T23:37:27.371844413Z" level=info msg="StartContainer for \"6614120a4706f0b060bc66f7b806c6c8870c6d05d562ac251ae49d5fad835de1\" returns successfully" Aug 12 23:37:28.007969 kubelet[2658]: E0812 23:37:28.007818 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:28.008396 containerd[1531]: time="2025-08-12T23:37:28.008354830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-z6p5k,Uid:c7d69676-2d42-4a97-b0df-97abd0f14cb8,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:37:28.008472 containerd[1531]: time="2025-08-12T23:37:28.008390471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9n5cx,Uid:98c68851-1e5d-47be-8edf-990e370bc5b7,Namespace:kube-system,Attempt:0,}" Aug 12 23:37:28.009022 containerd[1531]: time="2025-08-12T23:37:28.008994404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdf877cd4-rvm5d,Uid:419b28e0-84b9-4d61-aa83-b5377b54bd8b,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:28.155662 systemd-networkd[1439]: cali184b3a92a86: Link UP Aug 12 23:37:28.156392 systemd-networkd[1439]: cali184b3a92a86: Gained carrier Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.043 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.069 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0 calico-kube-controllers-5bdf877cd4- calico-system 419b28e0-84b9-4d61-aa83-b5377b54bd8b 856 0 2025-08-12 23:37:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bdf877cd4 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bdf877cd4-rvm5d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali184b3a92a86 [] [] }} ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.069 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.110 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" HandleID="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Workload="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.110 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" HandleID="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Workload="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bdf877cd4-rvm5d", "timestamp":"2025-08-12 23:37:28.110705054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.110 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.110 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.110 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.121 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.128 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.132 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.133 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.136 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.136 [INFO][4333] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.138 [INFO][4333] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761 Aug 12 23:37:28.171777 
containerd[1531]: 2025-08-12 23:37:28.143 [INFO][4333] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.149 [INFO][4333] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.149 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" host="localhost" Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.149 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:37:28.171777 containerd[1531]: 2025-08-12 23:37:28.150 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" HandleID="k8s-pod-network.0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Workload="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.153 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0", GenerateName:"calico-kube-controllers-5bdf877cd4-", Namespace:"calico-system", SelfLink:"", 
UID:"419b28e0-84b9-4d61-aa83-b5377b54bd8b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bdf877cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bdf877cd4-rvm5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184b3a92a86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.153 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.153 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184b3a92a86 ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 
23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.157 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.157 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0", GenerateName:"calico-kube-controllers-5bdf877cd4-", Namespace:"calico-system", SelfLink:"", UID:"419b28e0-84b9-4d61-aa83-b5377b54bd8b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bdf877cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761", Pod:"calico-kube-controllers-5bdf877cd4-rvm5d", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184b3a92a86", MAC:"82:4b:71:7b:f2:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.172554 containerd[1531]: 2025-08-12 23:37:28.170 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" Namespace="calico-system" Pod="calico-kube-controllers-5bdf877cd4-rvm5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdf877cd4--rvm5d-eth0" Aug 12 23:37:28.197567 kubelet[2658]: E0812 23:37:28.196477 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:28.214648 kubelet[2658]: I0812 23:37:28.214546 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ms66l" podStartSLOduration=36.214525791 podStartE2EDuration="36.214525791s" podCreationTimestamp="2025-08-12 23:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:37:28.21266815 +0000 UTC m=+43.279701202" watchObservedRunningTime="2025-08-12 23:37:28.214525791 +0000 UTC m=+43.281558843" Aug 12 23:37:28.226127 containerd[1531]: time="2025-08-12T23:37:28.226089082Z" level=info msg="connecting to shim 0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761" address="unix:///run/containerd/s/b39d87a28db800ddb46f4476a2455e53cafa684df95b600206c8adf2b3e4528b" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:28.272398 systemd-networkd[1439]: caliad0b08e299f: Link UP Aug 12 23:37:28.272896 
systemd-networkd[1439]: caliad0b08e299f: Gained carrier Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.041 [INFO][4280] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.061 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0 coredns-668d6bf9bc- kube-system 98c68851-1e5d-47be-8edf-990e370bc5b7 860 0 2025-08-12 23:36:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9n5cx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad0b08e299f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.062 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.112 [INFO][4325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" HandleID="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Workload="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.112 [INFO][4325] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" HandleID="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Workload="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000131760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9n5cx", "timestamp":"2025-08-12 23:37:28.112739059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.112 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.149 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.149 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.223 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.229 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.238 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.240 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.243 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.243 [INFO][4325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.247 [INFO][4325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.252 [INFO][4325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.257 [INFO][4325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.257 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" host="localhost" Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.257 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:37:28.292911 containerd[1531]: 2025-08-12 23:37:28.257 [INFO][4325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" HandleID="k8s-pod-network.62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Workload="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.264 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98c68851-1e5d-47be-8edf-990e370bc5b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9n5cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad0b08e299f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.264 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.264 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad0b08e299f ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.274 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.274 [INFO][4280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98c68851-1e5d-47be-8edf-990e370bc5b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f", Pod:"coredns-668d6bf9bc-9n5cx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad0b08e299f", MAC:"c2:74:62:e0:3e:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.293486 containerd[1531]: 2025-08-12 23:37:28.288 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9n5cx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9n5cx-eth0" Aug 12 23:37:28.303519 systemd[1]: Started cri-containerd-0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761.scope - libcontainer container 0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761. Aug 12 23:37:28.322542 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:28.393092 containerd[1531]: time="2025-08-12T23:37:28.393043990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdf877cd4-rvm5d,Uid:419b28e0-84b9-4d61-aa83-b5377b54bd8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761\"" Aug 12 23:37:28.394797 containerd[1531]: time="2025-08-12T23:37:28.394768707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:37:28.400072 systemd-networkd[1439]: cali1e8fcc5b7ed: Link UP Aug 12 23:37:28.401106 systemd-networkd[1439]: cali1e8fcc5b7ed: Gained carrier Aug 12 23:37:28.410592 containerd[1531]: time="2025-08-12T23:37:28.410526410Z" level=info msg="connecting to shim 62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f" address="unix:///run/containerd/s/0dd2bebdd493613875bbb5aa7586f5115e4366ce49d05087c9b476d0b7abcc28" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.045 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.067 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0 calico-apiserver-5b577fd4f9- calico-apiserver 
c7d69676-2d42-4a97-b0df-97abd0f14cb8 867 0 2025-08-12 23:37:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b577fd4f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b577fd4f9-z6p5k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e8fcc5b7ed [] [] }} ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.067 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.116 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" HandleID="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.116 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" HandleID="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-5b577fd4f9-z6p5k", "timestamp":"2025-08-12 23:37:28.116440579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.116 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.258 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.258 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.322 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.333 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.370 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.373 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.376 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.376 [INFO][4332] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.378 [INFO][4332] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26 Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.384 [INFO][4332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.391 [INFO][4332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.391 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" host="localhost" Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.391 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:37:28.414936 containerd[1531]: 2025-08-12 23:37:28.391 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" HandleID="k8s-pod-network.945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.395 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0", GenerateName:"calico-apiserver-5b577fd4f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7d69676-2d42-4a97-b0df-97abd0f14cb8", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b577fd4f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b577fd4f9-z6p5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8fcc5b7ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.395 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.395 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e8fcc5b7ed ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.402 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.403 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0", 
GenerateName:"calico-apiserver-5b577fd4f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7d69676-2d42-4a97-b0df-97abd0f14cb8", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b577fd4f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26", Pod:"calico-apiserver-5b577fd4f9-z6p5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8fcc5b7ed", MAC:"7a:9e:50:80:67:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:28.415459 containerd[1531]: 2025-08-12 23:37:28.412 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-z6p5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--z6p5k-eth0" Aug 12 23:37:28.435727 containerd[1531]: time="2025-08-12T23:37:28.435663636Z" level=info msg="connecting to shim 945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26" 
address="unix:///run/containerd/s/debac11e04b57da77e55afe95840a666a427cce430551d9f2d62607df65ffb28" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:28.452512 systemd[1]: Started cri-containerd-62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f.scope - libcontainer container 62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f. Aug 12 23:37:28.461132 systemd[1]: Started cri-containerd-945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26.scope - libcontainer container 945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26. Aug 12 23:37:28.469833 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:28.475803 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:28.505153 containerd[1531]: time="2025-08-12T23:37:28.505105105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9n5cx,Uid:98c68851-1e5d-47be-8edf-990e370bc5b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f\"" Aug 12 23:37:28.506064 kubelet[2658]: E0812 23:37:28.506036 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:28.510904 containerd[1531]: time="2025-08-12T23:37:28.510837510Z" level=info msg="CreateContainer within sandbox \"62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:37:28.524835 containerd[1531]: time="2025-08-12T23:37:28.524731372Z" level=info msg="Container 7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:28.530296 containerd[1531]: time="2025-08-12T23:37:28.530239171Z" level=info 
msg="CreateContainer within sandbox \"62ba87f1263bed7f6d188783eb74f6eba73390a213b98c5caeb6d123ee16e27f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47\"" Aug 12 23:37:28.531005 containerd[1531]: time="2025-08-12T23:37:28.530976827Z" level=info msg="StartContainer for \"7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47\"" Aug 12 23:37:28.531844 containerd[1531]: time="2025-08-12T23:37:28.531811806Z" level=info msg="connecting to shim 7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47" address="unix:///run/containerd/s/0dd2bebdd493613875bbb5aa7586f5115e4366ce49d05087c9b476d0b7abcc28" protocol=ttrpc version=3 Aug 12 23:37:28.543679 containerd[1531]: time="2025-08-12T23:37:28.543639023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-z6p5k,Uid:c7d69676-2d42-4a97-b0df-97abd0f14cb8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26\"" Aug 12 23:37:28.557546 systemd[1]: Started cri-containerd-7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47.scope - libcontainer container 7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47. 
Aug 12 23:37:28.584207 containerd[1531]: time="2025-08-12T23:37:28.583778695Z" level=info msg="StartContainer for \"7184f505cbbffaa847c3ffb7e9a67466b4475ed4280eaba1f6f017d86649ef47\" returns successfully" Aug 12 23:37:28.721577 systemd-networkd[1439]: calid4b2e71d0b3: Gained IPv6LL Aug 12 23:37:29.200051 kubelet[2658]: E0812 23:37:29.200010 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:29.206780 kubelet[2658]: E0812 23:37:29.206756 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:29.212617 kubelet[2658]: I0812 23:37:29.212529 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9n5cx" podStartSLOduration=37.212512634 podStartE2EDuration="37.212512634s" podCreationTimestamp="2025-08-12 23:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:37:29.212148546 +0000 UTC m=+44.279181598" watchObservedRunningTime="2025-08-12 23:37:29.212512634 +0000 UTC m=+44.279545646" Aug 12 23:37:29.553487 systemd-networkd[1439]: caliad0b08e299f: Gained IPv6LL Aug 12 23:37:29.873470 systemd-networkd[1439]: cali1e8fcc5b7ed: Gained IPv6LL Aug 12 23:37:30.009956 containerd[1531]: time="2025-08-12T23:37:30.009903763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kbx6z,Uid:fb967153-ca96-4ffe-9267-1fe8dfaad512,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:30.010492 containerd[1531]: time="2025-08-12T23:37:30.010367813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m5bp,Uid:6f6acd37-c53d-49cd-8abd-4c20e696ec5d,Namespace:calico-system,Attempt:0,}" Aug 12 23:37:30.129827 
systemd-networkd[1439]: cali184b3a92a86: Gained IPv6LL Aug 12 23:37:30.154764 systemd-networkd[1439]: caliafdc2e7f08f: Link UP Aug 12 23:37:30.155157 systemd-networkd[1439]: caliafdc2e7f08f: Gained carrier Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.046 [INFO][4605] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.066 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0 goldmane-768f4c5c69- calico-system fb967153-ca96-4ffe-9267-1fe8dfaad512 863 0 2025-08-12 23:37:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-kbx6z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliafdc2e7f08f [] [] }} ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.066 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.102 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" HandleID="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Workload="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.172396 containerd[1531]: 
2025-08-12 23:37:30.103 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" HandleID="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Workload="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-kbx6z", "timestamp":"2025-08-12 23:37:30.10274595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.103 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.103 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.103 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.114 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.121 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.125 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.127 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.131 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.131 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.133 [INFO][4631] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771 Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.137 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.145 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.145 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" host="localhost" Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.145 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:37:30.172396 containerd[1531]: 2025-08-12 23:37:30.145 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" HandleID="k8s-pod-network.f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Workload="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.148 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fb967153-ca96-4ffe-9267-1fe8dfaad512", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-kbx6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafdc2e7f08f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.148 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.148 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafdc2e7f08f ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.155 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.157 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fb967153-ca96-4ffe-9267-1fe8dfaad512", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771", Pod:"goldmane-768f4c5c69-kbx6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliafdc2e7f08f", MAC:"96:fa:73:1c:6f:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:30.173048 containerd[1531]: 2025-08-12 23:37:30.170 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" Namespace="calico-system" Pod="goldmane-768f4c5c69-kbx6z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kbx6z-eth0" Aug 12 23:37:30.211395 kubelet[2658]: E0812 23:37:30.210101 2658 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:30.211395 kubelet[2658]: E0812 23:37:30.210189 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:37:30.225263 containerd[1531]: time="2025-08-12T23:37:30.225214557Z" level=info msg="connecting to shim f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771" address="unix:///run/containerd/s/30e8b4eea42808ead3aa291461399a42d73b58c04a2497230c9afa3477156b41" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:30.259526 systemd[1]: Started cri-containerd-f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771.scope - libcontainer container f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771. Aug 12 23:37:30.277430 systemd-networkd[1439]: caliea6b65c4a5b: Link UP Aug 12 23:37:30.277820 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:30.279549 systemd-networkd[1439]: caliea6b65c4a5b: Gained carrier Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.048 [INFO][4604] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.066 [INFO][4604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9m5bp-eth0 csi-node-driver- calico-system 6f6acd37-c53d-49cd-8abd-4c20e696ec5d 759 0 2025-08-12 23:37:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost 
csi-node-driver-9m5bp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea6b65c4a5b [] [] }} ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.066 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.110 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" HandleID="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Workload="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.110 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" HandleID="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Workload="localhost-k8s-csi--node--driver--9m5bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a34f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9m5bp", "timestamp":"2025-08-12 23:37:30.110479552 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.110 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.145 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.146 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.215 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.228 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.238 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.241 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.248 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.249 [INFO][4633] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.252 [INFO][4633] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813 Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.259 [INFO][4633] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.267 [INFO][4633] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.268 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" host="localhost" Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.269 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:37:30.302828 containerd[1531]: 2025-08-12 23:37:30.269 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" HandleID="k8s-pod-network.f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Workload="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.272 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9m5bp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6acd37-c53d-49cd-8abd-4c20e696ec5d", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9m5bp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea6b65c4a5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.272 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.273 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea6b65c4a5b ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.281 [INFO][4604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.281 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9m5bp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6acd37-c53d-49cd-8abd-4c20e696ec5d", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813", Pod:"csi-node-driver-9m5bp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea6b65c4a5b", MAC:"62:70:0a:a9:27:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:30.304884 containerd[1531]: 2025-08-12 23:37:30.294 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" 
Namespace="calico-system" Pod="csi-node-driver-9m5bp" WorkloadEndpoint="localhost-k8s-csi--node--driver--9m5bp-eth0" Aug 12 23:37:30.324148 containerd[1531]: time="2025-08-12T23:37:30.323843345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kbx6z,Uid:fb967153-ca96-4ffe-9267-1fe8dfaad512,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771\"" Aug 12 23:37:30.344838 containerd[1531]: time="2025-08-12T23:37:30.344767264Z" level=info msg="connecting to shim f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813" address="unix:///run/containerd/s/b8b5dd0149cdd67852290f5f9803b36c0a4f363f64661f34a40852ae1e627918" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:37:30.374500 systemd[1]: Started cri-containerd-f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813.scope - libcontainer container f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813. Aug 12 23:37:30.399258 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:37:30.424799 containerd[1531]: time="2025-08-12T23:37:30.424749180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9m5bp,Uid:6f6acd37-c53d-49cd-8abd-4c20e696ec5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813\"" Aug 12 23:37:30.662469 containerd[1531]: time="2025-08-12T23:37:30.662355962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:30.663100 containerd[1531]: time="2025-08-12T23:37:30.663071737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 12 23:37:30.663950 containerd[1531]: time="2025-08-12T23:37:30.663901634Z" level=info msg="ImageCreate 
event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:30.666460 containerd[1531]: time="2025-08-12T23:37:30.666418887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:37:30.666874 containerd[1531]: time="2025-08-12T23:37:30.666841656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.272023588s" Aug 12 23:37:30.666955 containerd[1531]: time="2025-08-12T23:37:30.666941858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:37:30.667915 containerd[1531]: time="2025-08-12T23:37:30.667878438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:37:30.673850 containerd[1531]: time="2025-08-12T23:37:30.673812442Z" level=info msg="CreateContainer within sandbox \"0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:37:30.680768 containerd[1531]: time="2025-08-12T23:37:30.680718707Z" level=info msg="Container 2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:37:30.692436 containerd[1531]: time="2025-08-12T23:37:30.692302670Z" level=info msg="CreateContainer within sandbox 
\"0cbf530b078e590a5c8c94ada4973cf5163a6085980b21b695a016f221406761\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\"" Aug 12 23:37:30.693291 containerd[1531]: time="2025-08-12T23:37:30.693258730Z" level=info msg="StartContainer for \"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\"" Aug 12 23:37:30.694541 containerd[1531]: time="2025-08-12T23:37:30.694515756Z" level=info msg="connecting to shim 2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb" address="unix:///run/containerd/s/b39d87a28db800ddb46f4476a2455e53cafa684df95b600206c8adf2b3e4528b" protocol=ttrpc version=3 Aug 12 23:37:30.712549 systemd[1]: Started cri-containerd-2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb.scope - libcontainer container 2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb. Aug 12 23:37:30.750960 containerd[1531]: time="2025-08-12T23:37:30.750925699Z" level=info msg="StartContainer for \"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\" returns successfully" Aug 12 23:37:31.008823 containerd[1531]: time="2025-08-12T23:37:31.008743581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-gmxv5,Uid:9eb00c3b-1588-41d0-a343-7a8582fb6f38,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:37:31.133703 systemd-networkd[1439]: calidd0aac457a1: Link UP Aug 12 23:37:31.134452 systemd-networkd[1439]: calidd0aac457a1: Gained carrier Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.054 [INFO][4824] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.068 [INFO][4824] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0 calico-apiserver-5b577fd4f9- calico-apiserver 
9eb00c3b-1588-41d0-a343-7a8582fb6f38 866 0 2025-08-12 23:37:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b577fd4f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b577fd4f9-gmxv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd0aac457a1 [] [] }} ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.068 [INFO][4824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.094 [INFO][4837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" HandleID="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.094 [INFO][4837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" HandleID="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ad330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-5b577fd4f9-gmxv5", "timestamp":"2025-08-12 23:37:31.094647792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.094 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.094 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.094 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.104 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.108 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.112 [INFO][4837] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.114 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.116 [INFO][4837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.117 [INFO][4837] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.118 [INFO][4837] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.122 [INFO][4837] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.128 [INFO][4837] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.128 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" host="localhost" Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.129 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:37:31.149749 containerd[1531]: 2025-08-12 23:37:31.129 [INFO][4837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" HandleID="k8s-pod-network.eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Workload="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.131 [INFO][4824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0", GenerateName:"calico-apiserver-5b577fd4f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eb00c3b-1588-41d0-a343-7a8582fb6f38", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b577fd4f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b577fd4f9-gmxv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd0aac457a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.131 [INFO][4824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.131 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd0aac457a1 ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.134 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.135 [INFO][4824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0", 
GenerateName:"calico-apiserver-5b577fd4f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eb00c3b-1588-41d0-a343-7a8582fb6f38", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 37, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b577fd4f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e", Pod:"calico-apiserver-5b577fd4f9-gmxv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd0aac457a1", MAC:"ca:62:e4:ce:36:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:37:31.150498 containerd[1531]: 2025-08-12 23:37:31.147 [INFO][4824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" Namespace="calico-apiserver" Pod="calico-apiserver-5b577fd4f9-gmxv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b577fd4f9--gmxv5-eth0"
Aug 12 23:37:31.171504 containerd[1531]: time="2025-08-12T23:37:31.171444336Z" level=info msg="connecting to shim eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e" address="unix:///run/containerd/s/c19940024ef04aa784b0207907e35ad964a8f8c9e8a60211bd8d3dbc5b4e9299" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:37:31.196534 systemd[1]: Started cri-containerd-eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e.scope - libcontainer container eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e.
Aug 12 23:37:31.213011 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 12 23:37:31.217352 kubelet[2658]: E0812 23:37:31.217317 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:37:31.228659 kubelet[2658]: I0812 23:37:31.228579 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bdf877cd4-rvm5d" podStartSLOduration=22.95531566 podStartE2EDuration="25.228557513s" podCreationTimestamp="2025-08-12 23:37:06 +0000 UTC" firstStartedPulling="2025-08-12 23:37:28.394512822 +0000 UTC m=+43.461545874" lastFinishedPulling="2025-08-12 23:37:30.667754675 +0000 UTC m=+45.734787727" observedRunningTime="2025-08-12 23:37:31.226780356 +0000 UTC m=+46.293813408" watchObservedRunningTime="2025-08-12 23:37:31.228557513 +0000 UTC m=+46.295590565"
Aug 12 23:37:31.238299 containerd[1531]: time="2025-08-12T23:37:31.238249153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b577fd4f9-gmxv5,Uid:9eb00c3b-1588-41d0-a343-7a8582fb6f38,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e\""
Aug 12 23:37:31.310563 containerd[1531]: time="2025-08-12T23:37:31.310121035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\" id:\"c98642e164246d62456f501d7addbc6472adcb52eb9f6f619b1f99c7bcb99de5\" pid:4907 exited_at:{seconds:1755041851 nanos:309679385}"
Aug 12 23:37:31.409441 systemd-networkd[1439]: caliafdc2e7f08f: Gained IPv6LL
Aug 12 23:37:31.572550 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:57020.service - OpenSSH per-connection server daemon (10.0.0.1:57020).
Aug 12 23:37:31.636744 sshd[4941]: Accepted publickey for core from 10.0.0.1 port 57020 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:31.638362 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:31.642671 systemd-logind[1515]: New session 9 of user core.
Aug 12 23:37:31.659489 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 12 23:37:31.793519 systemd-networkd[1439]: caliea6b65c4a5b: Gained IPv6LL
Aug 12 23:37:31.860054 sshd[4943]: Connection closed by 10.0.0.1 port 57020
Aug 12 23:37:31.861210 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:31.864761 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:57020.service: Deactivated successfully.
Aug 12 23:37:31.866771 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:37:31.867561 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:37:31.868933 systemd-logind[1515]: Removed session 9.
Aug 12 23:37:32.242460 systemd-networkd[1439]: calidd0aac457a1: Gained IPv6LL
Aug 12 23:37:32.884330 containerd[1531]: time="2025-08-12T23:37:32.884262681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:32.885326 containerd[1531]: time="2025-08-12T23:37:32.884828653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149"
Aug 12 23:37:32.886211 containerd[1531]: time="2025-08-12T23:37:32.886172640Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:32.887938 containerd[1531]: time="2025-08-12T23:37:32.887888955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:32.888713 containerd[1531]: time="2025-08-12T23:37:32.888458566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.220546288s"
Aug 12 23:37:32.888713 containerd[1531]: time="2025-08-12T23:37:32.888496287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Aug 12 23:37:32.890481 containerd[1531]: time="2025-08-12T23:37:32.890450207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 12 23:37:32.891489 containerd[1531]: time="2025-08-12T23:37:32.891170101Z" level=info msg="CreateContainer within sandbox \"945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 12 23:37:32.897601 containerd[1531]: time="2025-08-12T23:37:32.897557671Z" level=info msg="Container 604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:37:32.905125 containerd[1531]: time="2025-08-12T23:37:32.905078943Z" level=info msg="CreateContainer within sandbox \"945a8cf18837ffc722362df9004808c47d178792619a7a184b03de76791f1a26\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc\""
Aug 12 23:37:32.905878 containerd[1531]: time="2025-08-12T23:37:32.905847679Z" level=info msg="StartContainer for \"604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc\""
Aug 12 23:37:32.907174 containerd[1531]: time="2025-08-12T23:37:32.907144305Z" level=info msg="connecting to shim 604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc" address="unix:///run/containerd/s/debac11e04b57da77e55afe95840a666a427cce430551d9f2d62607df65ffb28" protocol=ttrpc version=3
Aug 12 23:37:32.926656 systemd[1]: Started cri-containerd-604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc.scope - libcontainer container 604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc.
Aug 12 23:37:33.049423 containerd[1531]: time="2025-08-12T23:37:33.049218093Z" level=info msg="StartContainer for \"604515ee32b93c388fbe3cc1e1e7bbe3b3686fbf5a4012f4b42fb3f9037f09dc\" returns successfully"
Aug 12 23:37:33.242635 kubelet[2658]: I0812 23:37:33.242571 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b577fd4f9-z6p5k" podStartSLOduration=26.89843491 podStartE2EDuration="31.242550797s" podCreationTimestamp="2025-08-12 23:37:02 +0000 UTC" firstStartedPulling="2025-08-12 23:37:28.545229537 +0000 UTC m=+43.612262589" lastFinishedPulling="2025-08-12 23:37:32.889345424 +0000 UTC m=+47.956378476" observedRunningTime="2025-08-12 23:37:33.236826043 +0000 UTC m=+48.303859095" watchObservedRunningTime="2025-08-12 23:37:33.242550797 +0000 UTC m=+48.309583849"
Aug 12 23:37:33.482502 kubelet[2658]: I0812 23:37:33.482460 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:37:33.563508 containerd[1531]: time="2025-08-12T23:37:33.563404290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\" id:\"581130783bcf348282d6149df7095789f5d6433850556ac0f49ac1c8e3737973\" pid:5040 exit_status:1 exited_at:{seconds:1755041853 nanos:562644915}"
Aug 12 23:37:33.649660 containerd[1531]: time="2025-08-12T23:37:33.649619693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\" id:\"4b185a5fa11d291839a8d1670dce1bdb90318618559f7df3a5b346db58f26b79\" pid:5066 exit_status:1 exited_at:{seconds:1755041853 nanos:649281566}"
Aug 12 23:37:34.231045 kubelet[2658]: I0812 23:37:34.230947 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:37:34.475585 kubelet[2658]: I0812 23:37:34.475538 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:37:34.475999 kubelet[2658]: E0812 23:37:34.475856 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:37:35.038917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014747444.mount: Deactivated successfully.
Aug 12 23:37:35.235069 kubelet[2658]: E0812 23:37:35.234945 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:37:35.734124 containerd[1531]: time="2025-08-12T23:37:35.733873974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:35.736141 containerd[1531]: time="2025-08-12T23:37:35.734800392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Aug 12 23:37:35.745529 containerd[1531]: time="2025-08-12T23:37:35.745478040Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:35.748337 containerd[1531]: time="2025-08-12T23:37:35.748288774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:35.749974 containerd[1531]: time="2025-08-12T23:37:35.749913086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.859425839s"
Aug 12 23:37:35.750079 containerd[1531]: time="2025-08-12T23:37:35.750000768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Aug 12 23:37:35.751335 containerd[1531]: time="2025-08-12T23:37:35.751161270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Aug 12 23:37:35.754777 containerd[1531]: time="2025-08-12T23:37:35.754745580Z" level=info msg="CreateContainer within sandbox \"f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 12 23:37:35.768323 containerd[1531]: time="2025-08-12T23:37:35.766195402Z" level=info msg="Container 59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:37:35.780256 containerd[1531]: time="2025-08-12T23:37:35.780192874Z" level=info msg="CreateContainer within sandbox \"f5fd84a05ee7b390082a9ba8823f828937ecbd97c065e131c8853b30c2c3e771\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\""
Aug 12 23:37:35.784387 containerd[1531]: time="2025-08-12T23:37:35.784301834Z" level=info msg="StartContainer for \"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\""
Aug 12 23:37:35.786436 containerd[1531]: time="2025-08-12T23:37:35.786386795Z" level=info msg="connecting to shim 59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6" address="unix:///run/containerd/s/30e8b4eea42808ead3aa291461399a42d73b58c04a2497230c9afa3477156b41" protocol=ttrpc version=3
Aug 12 23:37:35.813424 systemd-networkd[1439]: vxlan.calico: Link UP
Aug 12 23:37:35.813431 systemd-networkd[1439]: vxlan.calico: Gained carrier
Aug 12 23:37:35.841521 systemd[1]: Started cri-containerd-59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6.scope - libcontainer container 59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6.
Aug 12 23:37:35.955948 containerd[1531]: time="2025-08-12T23:37:35.955836527Z" level=info msg="StartContainer for \"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\" returns successfully"
Aug 12 23:37:36.259304 kubelet[2658]: I0812 23:37:36.258598 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-kbx6z" podStartSLOduration=25.844913505 podStartE2EDuration="31.258556144s" podCreationTimestamp="2025-08-12 23:37:05 +0000 UTC" firstStartedPulling="2025-08-12 23:37:30.337386029 +0000 UTC m=+45.404419081" lastFinishedPulling="2025-08-12 23:37:35.751028548 +0000 UTC m=+50.818061720" observedRunningTime="2025-08-12 23:37:36.258420582 +0000 UTC m=+51.325453594" watchObservedRunningTime="2025-08-12 23:37:36.258556144 +0000 UTC m=+51.325589356"
Aug 12 23:37:36.358129 containerd[1531]: time="2025-08-12T23:37:36.358082893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\" id:\"e9f216ac7af97d14cf54980c9a8fbed0c0c7b4192d5787fa607fe5fdab5c565a\" pid:5307 exit_status:1 exited_at:{seconds:1755041856 nanos:357672845}"
Aug 12 23:37:36.875481 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:35686.service - OpenSSH per-connection server daemon (10.0.0.1:35686).
Aug 12 23:37:36.965596 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 35686 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:36.967827 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:36.975149 systemd-logind[1515]: New session 10 of user core.
Aug 12 23:37:36.993561 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 12 23:37:37.184106 containerd[1531]: time="2025-08-12T23:37:37.183970929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:37.185529 containerd[1531]: time="2025-08-12T23:37:37.185486118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Aug 12 23:37:37.186664 containerd[1531]: time="2025-08-12T23:37:37.186615820Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:37.191296 containerd[1531]: time="2025-08-12T23:37:37.191225707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:37.192451 containerd[1531]: time="2025-08-12T23:37:37.192335288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.441142257s"
Aug 12 23:37:37.192451 containerd[1531]: time="2025-08-12T23:37:37.192368489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Aug 12 23:37:37.212509 containerd[1531]: time="2025-08-12T23:37:37.212385828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 12 23:37:37.231116 containerd[1531]: time="2025-08-12T23:37:37.231042621Z" level=info msg="CreateContainer within sandbox \"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 12 23:37:37.242699 sshd[5325]: Connection closed by 10.0.0.1 port 35686
Aug 12 23:37:37.243489 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:37.253622 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:35686.service: Deactivated successfully.
Aug 12 23:37:37.256874 containerd[1531]: time="2025-08-12T23:37:37.256821909Z" level=info msg="Container e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:37:37.261856 systemd[1]: session-10.scope: Deactivated successfully.
Aug 12 23:37:37.266424 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit.
Aug 12 23:37:37.272540 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:35722.service - OpenSSH per-connection server daemon (10.0.0.1:35722).
Aug 12 23:37:37.274858 systemd-logind[1515]: Removed session 10.
Aug 12 23:37:37.287820 containerd[1531]: time="2025-08-12T23:37:37.287728695Z" level=info msg="CreateContainer within sandbox \"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1\""
Aug 12 23:37:37.289230 containerd[1531]: time="2025-08-12T23:37:37.289200403Z" level=info msg="StartContainer for \"e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1\""
Aug 12 23:37:37.290860 containerd[1531]: time="2025-08-12T23:37:37.290833434Z" level=info msg="connecting to shim e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1" address="unix:///run/containerd/s/b8b5dd0149cdd67852290f5f9803b36c0a4f363f64661f34a40852ae1e627918" protocol=ttrpc version=3
Aug 12 23:37:37.322566 systemd[1]: Started cri-containerd-e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1.scope - libcontainer container e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1.
Aug 12 23:37:37.346756 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:37.349440 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:37.358309 systemd-logind[1515]: New session 11 of user core.
Aug 12 23:37:37.366518 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 12 23:37:37.376388 containerd[1531]: time="2025-08-12T23:37:37.376290893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\" id:\"bfd42a13b8901c4e8941bc0110b6697153a981bb5711d46bd3ad4abc7e465eb2\" pid:5360 exit_status:1 exited_at:{seconds:1755041857 nanos:375912245}"
Aug 12 23:37:37.390780 containerd[1531]: time="2025-08-12T23:37:37.390747646Z" level=info msg="StartContainer for \"e858a1482568854dd41dc274a6cdeb409138698a33c9a7dcaff7694fe4507bb1\" returns successfully"
Aug 12 23:37:37.515565 containerd[1531]: time="2025-08-12T23:37:37.515518650Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:37.517251 containerd[1531]: time="2025-08-12T23:37:37.517032039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Aug 12 23:37:37.518852 containerd[1531]: time="2025-08-12T23:37:37.518820472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 306.373883ms"
Aug 12 23:37:37.518922 containerd[1531]: time="2025-08-12T23:37:37.518853393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Aug 12 23:37:37.521852 containerd[1531]: time="2025-08-12T23:37:37.520757829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 12 23:37:37.534136 containerd[1531]: time="2025-08-12T23:37:37.534084962Z" level=info msg="CreateContainer within sandbox \"eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 12 23:37:37.544413 containerd[1531]: time="2025-08-12T23:37:37.544339276Z" level=info msg="Container 4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:37:37.548610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565848779.mount: Deactivated successfully.
Aug 12 23:37:37.559908 containerd[1531]: time="2025-08-12T23:37:37.559839809Z" level=info msg="CreateContainer within sandbox \"eb8ab9c8cbf4f01fdf3b3ef7f573ed3efd2116180d3d1a18ee4df1fef0d7796e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432\""
Aug 12 23:37:37.562551 containerd[1531]: time="2025-08-12T23:37:37.562512780Z" level=info msg="StartContainer for \"4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432\""
Aug 12 23:37:37.563981 containerd[1531]: time="2025-08-12T23:37:37.563953247Z" level=info msg="connecting to shim 4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432" address="unix:///run/containerd/s/c19940024ef04aa784b0207907e35ad964a8f8c9e8a60211bd8d3dbc5b4e9299" protocol=ttrpc version=3
Aug 12 23:37:37.590537 systemd[1]: Started cri-containerd-4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432.scope - libcontainer container 4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432.
Aug 12 23:37:37.607015 sshd[5392]: Connection closed by 10.0.0.1 port 35722
Aug 12 23:37:37.608619 sshd-session[5352]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:37.621162 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:35722.service: Deactivated successfully.
Aug 12 23:37:37.625051 systemd[1]: session-11.scope: Deactivated successfully.
Aug 12 23:37:37.632811 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit.
Aug 12 23:37:37.636600 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:35746.service - OpenSSH per-connection server daemon (10.0.0.1:35746).
Aug 12 23:37:37.638847 systemd-logind[1515]: Removed session 11.
Aug 12 23:37:37.666076 containerd[1531]: time="2025-08-12T23:37:37.665963580Z" level=info msg="StartContainer for \"4c0561dfa9a63cce6cb904684fc265ea5d441d5deb2fc33251b230c4fe054432\" returns successfully"
Aug 12 23:37:37.682467 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL
Aug 12 23:37:37.697962 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 35746 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:37.699844 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:37.704567 systemd-logind[1515]: New session 12 of user core.
Aug 12 23:37:37.713536 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 12 23:37:37.882955 sshd[5451]: Connection closed by 10.0.0.1 port 35746
Aug 12 23:37:37.883303 sshd-session[5433]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:37.887588 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:35746.service: Deactivated successfully.
Aug 12 23:37:37.889715 systemd[1]: session-12.scope: Deactivated successfully.
Aug 12 23:37:37.890924 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit.
Aug 12 23:37:37.892303 systemd-logind[1515]: Removed session 12.
Aug 12 23:37:38.275642 kubelet[2658]: I0812 23:37:38.275567 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b577fd4f9-gmxv5" podStartSLOduration=29.995793325 podStartE2EDuration="36.275544666s" podCreationTimestamp="2025-08-12 23:37:02 +0000 UTC" firstStartedPulling="2025-08-12 23:37:31.24005011 +0000 UTC m=+46.307083162" lastFinishedPulling="2025-08-12 23:37:37.519801451 +0000 UTC m=+52.586834503" observedRunningTime="2025-08-12 23:37:38.273537108 +0000 UTC m=+53.340570160" watchObservedRunningTime="2025-08-12 23:37:38.275544666 +0000 UTC m=+53.342577718"
Aug 12 23:37:39.269235 containerd[1531]: time="2025-08-12T23:37:39.269008209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:39.270400 containerd[1531]: time="2025-08-12T23:37:39.270367275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 12 23:37:39.271872 containerd[1531]: time="2025-08-12T23:37:39.271828702Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:39.274369 containerd[1531]: time="2025-08-12T23:37:39.274078223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:37:39.275869 containerd[1531]: time="2025-08-12T23:37:39.275836616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.755045826s"
Aug 12 23:37:39.275945 containerd[1531]: time="2025-08-12T23:37:39.275872857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 12 23:37:39.279457 containerd[1531]: time="2025-08-12T23:37:39.279424922Z" level=info msg="CreateContainer within sandbox \"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 12 23:37:39.288517 containerd[1531]: time="2025-08-12T23:37:39.288308647Z" level=info msg="Container 8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:37:39.299673 containerd[1531]: time="2025-08-12T23:37:39.299636216Z" level=info msg="CreateContainer within sandbox \"f3897f0a019921bbc555505e93c8c7bad34c004fcb029a06f4781d84e8206813\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0\""
Aug 12 23:37:39.300934 containerd[1531]: time="2025-08-12T23:37:39.300908640Z" level=info msg="StartContainer for \"8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0\""
Aug 12 23:37:39.303439 containerd[1531]: time="2025-08-12T23:37:39.303364045Z" level=info msg="connecting to shim 8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0" address="unix:///run/containerd/s/b8b5dd0149cdd67852290f5f9803b36c0a4f363f64661f34a40852ae1e627918" protocol=ttrpc version=3
Aug 12 23:37:39.336914 systemd[1]: Started cri-containerd-8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0.scope - libcontainer container 8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0.
Aug 12 23:37:39.387595 containerd[1531]: time="2025-08-12T23:37:39.387554204Z" level=info msg="StartContainer for \"8a452dff7ac8d4cad3016757d3967733b041df56a6d2fa40fc00bf6f0ecb60c0\" returns successfully"
Aug 12 23:37:40.100379 kubelet[2658]: I0812 23:37:40.100327 2658 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 12 23:37:40.117194 kubelet[2658]: I0812 23:37:40.117151 2658 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 12 23:37:40.312176 kubelet[2658]: I0812 23:37:40.312095 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9m5bp" podStartSLOduration=25.463494399 podStartE2EDuration="34.31207314s" podCreationTimestamp="2025-08-12 23:37:06 +0000 UTC" firstStartedPulling="2025-08-12 23:37:30.428010329 +0000 UTC m=+45.495043381" lastFinishedPulling="2025-08-12 23:37:39.27658907 +0000 UTC m=+54.343622122" observedRunningTime="2025-08-12 23:37:40.301438545 +0000 UTC m=+55.368471637" watchObservedRunningTime="2025-08-12 23:37:40.31207314 +0000 UTC m=+55.379106192"
Aug 12 23:37:42.898918 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:35894.service - OpenSSH per-connection server daemon (10.0.0.1:35894).
Aug 12 23:37:42.968725 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 35894 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:42.970361 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:42.975123 systemd-logind[1515]: New session 13 of user core.
Aug 12 23:37:42.982505 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 12 23:37:43.256085 sshd[5523]: Connection closed by 10.0.0.1 port 35894
Aug 12 23:37:43.257029 sshd-session[5521]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:43.262615 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:35894.service: Deactivated successfully.
Aug 12 23:37:43.265577 systemd[1]: session-13.scope: Deactivated successfully.
Aug 12 23:37:43.267027 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit.
Aug 12 23:37:43.271962 systemd-logind[1515]: Removed session 13.
Aug 12 23:37:44.474244 kubelet[2658]: I0812 23:37:44.474121 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:37:48.282651 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:35910.service - OpenSSH per-connection server daemon (10.0.0.1:35910).
Aug 12 23:37:48.357926 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 35910 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:48.359411 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:48.363619 systemd-logind[1515]: New session 14 of user core.
Aug 12 23:37:48.373528 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 12 23:37:48.540861 sshd[5553]: Connection closed by 10.0.0.1 port 35910
Aug 12 23:37:48.541761 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:48.545156 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:35910.service: Deactivated successfully.
Aug 12 23:37:48.547018 systemd[1]: session-14.scope: Deactivated successfully.
Aug 12 23:37:48.548970 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit.
Aug 12 23:37:48.550270 systemd-logind[1515]: Removed session 14.
Aug 12 23:37:53.557495 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:34180.service - OpenSSH per-connection server daemon (10.0.0.1:34180).
Aug 12 23:37:53.634102 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 34180 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:53.634932 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:53.638830 systemd-logind[1515]: New session 15 of user core.
Aug 12 23:37:53.645539 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 12 23:37:53.790432 sshd[5575]: Connection closed by 10.0.0.1 port 34180
Aug 12 23:37:53.790907 sshd-session[5573]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:53.794429 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:34180.service: Deactivated successfully.
Aug 12 23:37:53.796864 systemd[1]: session-15.scope: Deactivated successfully.
Aug 12 23:37:53.798967 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit.
Aug 12 23:37:53.800185 systemd-logind[1515]: Removed session 15.
Aug 12 23:37:55.008050 kubelet[2658]: E0812 23:37:55.007895 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:37:58.801713 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:34186.service - OpenSSH per-connection server daemon (10.0.0.1:34186).
Aug 12 23:37:58.854037 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 34186 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:58.856693 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:58.862054 systemd-logind[1515]: New session 16 of user core.
Aug 12 23:37:58.879626 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 12 23:37:59.055962 sshd[5600]: Connection closed by 10.0.0.1 port 34186
Aug 12 23:37:59.054365 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:59.069698 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:34186.service: Deactivated successfully.
Aug 12 23:37:59.076643 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:37:59.080794 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:37:59.084843 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:34200.service - OpenSSH per-connection server daemon (10.0.0.1:34200).
Aug 12 23:37:59.087581 systemd-logind[1515]: Removed session 16.
Aug 12 23:37:59.148434 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 34200 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:59.149795 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:59.155241 systemd-logind[1515]: New session 17 of user core.
Aug 12 23:37:59.166525 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:37:59.578798 sshd[5616]: Connection closed by 10.0.0.1 port 34200
Aug 12 23:37:59.580146 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Aug 12 23:37:59.594708 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:34200.service: Deactivated successfully.
Aug 12 23:37:59.596611 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:37:59.598224 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:37:59.600688 systemd-logind[1515]: Removed session 17.
Aug 12 23:37:59.602431 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:34206.service - OpenSSH per-connection server daemon (10.0.0.1:34206).
Aug 12 23:37:59.658016 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 34206 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:37:59.659430 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:37:59.663783 systemd-logind[1515]: New session 18 of user core.
Aug 12 23:37:59.673510 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:38:00.001796 containerd[1531]: time="2025-08-12T23:38:00.001535804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\" id:\"7d6f90b2780e6472e5b8d31ce3331feb116bab95cdd0b84a321c0bc7f0179424\" pid:5649 exited_at:{seconds:1755041880 nanos:1188085}"
Aug 12 23:38:00.327438 sshd[5630]: Connection closed by 10.0.0.1 port 34206
Aug 12 23:38:00.327801 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:00.336759 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:34206.service: Deactivated successfully.
Aug 12 23:38:00.339921 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:38:00.342405 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:38:00.348897 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:34208.service - OpenSSH per-connection server daemon (10.0.0.1:34208).
Aug 12 23:38:00.350746 systemd-logind[1515]: Removed session 18.
Aug 12 23:38:00.416894 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 34208 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:38:00.418520 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:38:00.422892 systemd-logind[1515]: New session 19 of user core.
Aug 12 23:38:00.430880 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:38:00.524858 kernel: hrtimer: interrupt took 917277 ns
Aug 12 23:38:00.722417 sshd[5675]: Connection closed by 10.0.0.1 port 34208
Aug 12 23:38:00.722661 sshd-session[5672]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:00.734045 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:34208.service: Deactivated successfully.
Aug 12 23:38:00.736913 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:38:00.737837 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:38:00.742714 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:34210.service - OpenSSH per-connection server daemon (10.0.0.1:34210).
Aug 12 23:38:00.743719 systemd-logind[1515]: Removed session 19.
Aug 12 23:38:00.795330 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 34210 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:38:00.796073 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:38:00.802095 systemd-logind[1515]: New session 20 of user core.
Aug 12 23:38:00.813532 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 12 23:38:00.944461 sshd[5689]: Connection closed by 10.0.0.1 port 34210
Aug 12 23:38:00.944778 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:00.948669 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:34210.service: Deactivated successfully.
Aug 12 23:38:00.950839 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:38:00.952209 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:38:00.953988 systemd-logind[1515]: Removed session 20.
Aug 12 23:38:01.247436 containerd[1531]: time="2025-08-12T23:38:01.247396349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2141ca0a477dfb6d916ab5399830c14583af5624ad0528e29f0c07e4e10a6dbb\" id:\"52ff2dfbedb6b353998dbb2bcc2b79978fa1fac1583bf58c938dca63b6edd92f\" pid:5714 exited_at:{seconds:1755041881 nanos:247104510}"
Aug 12 23:38:03.646165 containerd[1531]: time="2025-08-12T23:38:03.646054485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b237d272a3ad8cc7b648377a753676699a6c95ec837889e54c8443b6b2031d3\" id:\"6a43e3221d532d62286f002010e94081703141d6797357bc06db6be0403f0d3f\" pid:5735 exited_at:{seconds:1755041883 nanos:645606526}"
Aug 12 23:38:05.957794 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:36858.service - OpenSSH per-connection server daemon (10.0.0.1:36858).
Aug 12 23:38:06.003873 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:38:06.005337 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:38:06.007863 kubelet[2658]: E0812 23:38:06.007771 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:38:06.010928 systemd-logind[1515]: New session 21 of user core.
Aug 12 23:38:06.019481 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 12 23:38:06.147254 sshd[5753]: Connection closed by 10.0.0.1 port 36858
Aug 12 23:38:06.147566 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:06.151492 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:36858.service: Deactivated successfully.
Aug 12 23:38:06.153379 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:38:06.154260 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:38:06.155389 systemd-logind[1515]: Removed session 21.
Aug 12 23:38:07.316983 containerd[1531]: time="2025-08-12T23:38:07.316945733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\" id:\"e86f42ed2b753423ee28e96cfce20491d751b957edc7963dff06677b3b095dda\" pid:5780 exited_at:{seconds:1755041887 nanos:316664253}"
Aug 12 23:38:07.609881 containerd[1531]: time="2025-08-12T23:38:07.609639789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59ca33436323fe4b288758ffa431e00b6f1097ab0e28d2a88a6ac38b68e2d8a6\" id:\"68500b04caa41cd3062205363f6c2ffba14e12032fb147d85e83bc9c506b4de4\" pid:5805 exited_at:{seconds:1755041887 nanos:609245029}"
Aug 12 23:38:11.161859 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:36864.service - OpenSSH per-connection server daemon (10.0.0.1:36864).
Aug 12 23:38:11.231523 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 36864 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:38:11.233179 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:38:11.238576 systemd-logind[1515]: New session 22 of user core.
Aug 12 23:38:11.253455 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 12 23:38:11.433520 sshd[5820]: Connection closed by 10.0.0.1 port 36864
Aug 12 23:38:11.433784 sshd-session[5818]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:11.442475 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:36864.service: Deactivated successfully.
Aug 12 23:38:11.445114 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:38:11.447706 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:38:11.448693 systemd-logind[1515]: Removed session 22.
Aug 12 23:38:13.008366 kubelet[2658]: E0812 23:38:13.008186 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:38:16.454403 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864).
Aug 12 23:38:16.506008 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:38:16.507156 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:38:16.512015 systemd-logind[1515]: New session 23 of user core.
Aug 12 23:38:16.520499 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 12 23:38:16.714449 sshd[5843]: Connection closed by 10.0.0.1 port 34864
Aug 12 23:38:16.714149 sshd-session[5841]: pam_unix(sshd:session): session closed for user core
Aug 12 23:38:16.719115 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:34864.service: Deactivated successfully.
Aug 12 23:38:16.721567 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:38:16.722819 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:38:16.724894 systemd-logind[1515]: Removed session 23.