Aug 12 23:58:24.884350 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:58:24.884372 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Aug 12 21:51:24 -00 2025
Aug 12 23:58:24.884381 kernel: KASLR enabled
Aug 12 23:58:24.884387 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:58:24.884393 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Aug 12 23:58:24.884398 kernel: random: crng init done
Aug 12 23:58:24.884405 kernel: secureboot: Secure boot disabled
Aug 12 23:58:24.884411 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:58:24.884416 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Aug 12 23:58:24.884423 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:58:24.884429 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884435 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884441 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884447 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884454 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884461 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884468 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884474 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884480 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:24.884486 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:58:24.884495 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 12 23:58:24.884502 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:58:24.884508 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Aug 12 23:58:24.884514 kernel: Zone ranges:
Aug 12 23:58:24.884520 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:58:24.884528 kernel: DMA32 empty
Aug 12 23:58:24.884534 kernel: Normal empty
Aug 12 23:58:24.884539 kernel: Device empty
Aug 12 23:58:24.884545 kernel: Movable zone start for each node
Aug 12 23:58:24.884551 kernel: Early memory node ranges
Aug 12 23:58:24.884560 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Aug 12 23:58:24.884578 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Aug 12 23:58:24.884587 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Aug 12 23:58:24.884597 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Aug 12 23:58:24.884607 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Aug 12 23:58:24.884617 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Aug 12 23:58:24.884626 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Aug 12 23:58:24.884640 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Aug 12 23:58:24.884727 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Aug 12 23:58:24.884737 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 12 23:58:24.884747 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 12 23:58:24.884754 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 12 23:58:24.884760 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 12 23:58:24.884768 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:58:24.884775 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:58:24.884781 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Aug 12 23:58:24.884788 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:58:24.884794 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:58:24.884800 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:58:24.884807 kernel: psci: Trusted OS migration not required
Aug 12 23:58:24.884814 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:58:24.884820 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:58:24.884827 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 12 23:58:24.884835 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 12 23:58:24.884841 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:58:24.884848 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:58:24.884854 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:58:24.884860 kernel: CPU features: detected: Spectre-v4
Aug 12 23:58:24.884867 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:58:24.884874 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:58:24.884880 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:58:24.884886 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:58:24.884893 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:58:24.884900 kernel: alternatives: applying boot alternatives
Aug 12 23:58:24.884907 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:58:24.884916 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:58:24.884923 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:58:24.884929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:58:24.884936 kernel: Fallback order for Node 0: 0
Aug 12 23:58:24.884942 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 12 23:58:24.884949 kernel: Policy zone: DMA
Aug 12 23:58:24.884955 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:58:24.884962 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 12 23:58:24.884969 kernel: software IO TLB: area num 4.
Aug 12 23:58:24.884975 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 12 23:58:24.884981 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Aug 12 23:58:24.884989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:58:24.884996 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:58:24.885003 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:58:24.885010 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:58:24.885017 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:58:24.885023 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:58:24.885030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:58:24.885037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:58:24.885043 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:58:24.885050 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:58:24.885057 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:58:24.885065 kernel: GICv3: 256 SPIs implemented
Aug 12 23:58:24.885071 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:58:24.885078 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:58:24.885085 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:58:24.885091 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 12 23:58:24.885098 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:58:24.885105 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:58:24.885112 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:58:24.885119 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:58:24.885126 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 12 23:58:24.885133 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 12 23:58:24.885139 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:58:24.885147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:58:24.885154 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:58:24.885160 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:58:24.885167 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:58:24.885194 kernel: arm-pv: using stolen time PV
Aug 12 23:58:24.885203 kernel: Console: colour dummy device 80x25
Aug 12 23:58:24.885210 kernel: ACPI: Core revision 20240827
Aug 12 23:58:24.885217 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:58:24.885224 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:58:24.885231 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 12 23:58:24.885241 kernel: landlock: Up and running.
Aug 12 23:58:24.885248 kernel: SELinux: Initializing.
Aug 12 23:58:24.885255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:58:24.885262 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:58:24.885269 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:58:24.885276 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:58:24.885283 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 12 23:58:24.885289 kernel: Remapping and enabling EFI services.
Aug 12 23:58:24.885296 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:58:24.885309 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:58:24.885316 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:58:24.885323 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 12 23:58:24.885332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:58:24.885339 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:58:24.885346 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:58:24.885353 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:58:24.885361 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 12 23:58:24.885369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:58:24.885376 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:58:24.885384 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:58:24.885391 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:58:24.885399 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 12 23:58:24.885406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:58:24.885414 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:58:24.885421 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:58:24.885428 kernel: SMP: Total of 4 processors activated.
Aug 12 23:58:24.885437 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:58:24.885444 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:58:24.885451 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:58:24.885458 kernel: CPU features: detected: Common not Private translations
Aug 12 23:58:24.885465 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:58:24.885473 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:58:24.885480 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:58:24.885487 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:58:24.885494 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:58:24.885502 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:58:24.885509 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:58:24.885516 kernel: alternatives: applying system-wide alternatives
Aug 12 23:58:24.885523 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 12 23:58:24.885531 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9080K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Aug 12 23:58:24.885538 kernel: devtmpfs: initialized
Aug 12 23:58:24.885546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:58:24.885553 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:58:24.885560 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:58:24.885574 kernel: 0 pages in range for non-PLT usage
Aug 12 23:58:24.885582 kernel: 508432 pages in range for PLT usage
Aug 12 23:58:24.885589 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:58:24.885598 kernel: SMBIOS 3.0.0 present.
Aug 12 23:58:24.885606 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 12 23:58:24.885613 kernel: DMI: Memory slots populated: 1/1
Aug 12 23:58:24.885620 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:58:24.885627 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:58:24.885634 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:58:24.885643 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:58:24.885655 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:58:24.885664 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Aug 12 23:58:24.885671 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:58:24.885679 kernel: cpuidle: using governor menu
Aug 12 23:58:24.885686 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:58:24.885693 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:58:24.885700 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:58:24.885707 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:58:24.885716 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:58:24.885723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:58:24.885730 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:58:24.885737 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:58:24.885744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:58:24.885751 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:58:24.885758 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:58:24.885765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:58:24.885772 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:58:24.885779 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:58:24.885787 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:58:24.885794 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:58:24.885802 kernel: ACPI: Interpreter enabled
Aug 12 23:58:24.885809 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:58:24.885815 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:58:24.885822 kernel: ACPI: CPU0 has been hot-added
Aug 12 23:58:24.885829 kernel: ACPI: CPU1 has been hot-added
Aug 12 23:58:24.885836 kernel: ACPI: CPU2 has been hot-added
Aug 12 23:58:24.885843 kernel: ACPI: CPU3 has been hot-added
Aug 12 23:58:24.885851 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:58:24.885858 kernel: printk: legacy console [ttyAMA0] enabled
Aug 12 23:58:24.885866 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:58:24.886047 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:58:24.886119 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:58:24.886183 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:58:24.886243 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:58:24.886316 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:58:24.886326 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:58:24.886333 kernel: PCI host bridge to bus 0000:00
Aug 12 23:58:24.886401 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:58:24.886460 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:58:24.886516 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:58:24.886580 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:58:24.886702 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 12 23:58:24.886779 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 12 23:58:24.886843 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 12 23:58:24.886905 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 12 23:58:24.886965 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:58:24.887026 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 12 23:58:24.887087 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 12 23:58:24.887153 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 12 23:58:24.887211 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:58:24.887265 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:58:24.887319 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:58:24.887328 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:58:24.887336 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:58:24.887343 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:58:24.887352 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:58:24.887359 kernel: iommu: Default domain type: Translated
Aug 12 23:58:24.887366 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:58:24.887373 kernel: efivars: Registered efivars operations
Aug 12 23:58:24.887380 kernel: vgaarb: loaded
Aug 12 23:58:24.887387 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:58:24.887394 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:58:24.887401 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:58:24.887409 kernel: pnp: PnP ACPI init
Aug 12 23:58:24.887480 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:58:24.887490 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:58:24.887497 kernel: NET: Registered PF_INET protocol family
Aug 12 23:58:24.887504 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:58:24.887511 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:58:24.887519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:58:24.887526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:58:24.887533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:58:24.887542 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:58:24.887549 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:58:24.887556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:58:24.887569 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:58:24.887577 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:58:24.887584 kernel: kvm [1]: HYP mode not available
Aug 12 23:58:24.887591 kernel: Initialise system trusted keyrings
Aug 12 23:58:24.887598 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:58:24.887605 kernel: Key type asymmetric registered
Aug 12 23:58:24.887613 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:58:24.887622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:58:24.887629 kernel: io scheduler mq-deadline registered
Aug 12 23:58:24.887636 kernel: io scheduler kyber registered
Aug 12 23:58:24.887643 kernel: io scheduler bfq registered
Aug 12 23:58:24.887661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:58:24.887669 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:58:24.887677 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:58:24.887747 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:58:24.887757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:58:24.887766 kernel: thunder_xcv, ver 1.0
Aug 12 23:58:24.887773 kernel: thunder_bgx, ver 1.0
Aug 12 23:58:24.887780 kernel: nicpf, ver 1.0
Aug 12 23:58:24.887787 kernel: nicvf, ver 1.0
Aug 12 23:58:24.887863 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:58:24.887922 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:58:24 UTC (1755043104)
Aug 12 23:58:24.887932 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:58:24.887939 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 12 23:58:24.887948 kernel: watchdog: NMI not fully supported
Aug 12 23:58:24.887955 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:58:24.887962 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:58:24.887969 kernel: Segment Routing with IPv6
Aug 12 23:58:24.887976 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:58:24.887984 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:58:24.887991 kernel: Key type dns_resolver registered
Aug 12 23:58:24.887998 kernel: registered taskstats version 1
Aug 12 23:58:24.888005 kernel: Loading compiled-in X.509 certificates
Aug 12 23:58:24.888014 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: e74bfacfa68399ed7282bf533dd5901fdb84b882'
Aug 12 23:58:24.888021 kernel: Demotion targets for Node 0: null
Aug 12 23:58:24.888028 kernel: Key type .fscrypt registered
Aug 12 23:58:24.888036 kernel: Key type fscrypt-provisioning registered
Aug 12 23:58:24.888044 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:58:24.888051 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:58:24.888058 kernel: ima: No architecture policies found
Aug 12 23:58:24.888065 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:58:24.888073 kernel: clk: Disabling unused clocks
Aug 12 23:58:24.888081 kernel: PM: genpd: Disabling unused power domains
Aug 12 23:58:24.888088 kernel: Warning: unable to open an initial console.
Aug 12 23:58:24.888095 kernel: Freeing unused kernel memory: 39488K
Aug 12 23:58:24.888102 kernel: Run /init as init process
Aug 12 23:58:24.888109 kernel: with arguments:
Aug 12 23:58:24.888116 kernel: /init
Aug 12 23:58:24.888123 kernel: with environment:
Aug 12 23:58:24.888130 kernel: HOME=/
Aug 12 23:58:24.888137 kernel: TERM=linux
Aug 12 23:58:24.888146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:58:24.888155 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:58:24.888166 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:58:24.888174 systemd[1]: Detected virtualization kvm.
Aug 12 23:58:24.888182 systemd[1]: Detected architecture arm64.
Aug 12 23:58:24.888189 systemd[1]: Running in initrd.
Aug 12 23:58:24.888196 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:58:24.888205 systemd[1]: Hostname set to .
Aug 12 23:58:24.888213 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:58:24.888221 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:58:24.888228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:58:24.888236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:58:24.888245 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:58:24.888255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:58:24.888265 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:58:24.888276 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:58:24.888285 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:58:24.888293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:58:24.888301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:58:24.888309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:58:24.888317 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:58:24.888325 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:58:24.888334 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:58:24.888342 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:58:24.888350 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:58:24.888358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:58:24.888366 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:58:24.888373 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:58:24.888381 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:58:24.888389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:58:24.888398 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:58:24.888406 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:58:24.888413 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:58:24.888421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:58:24.888429 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:58:24.888437 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 12 23:58:24.888445 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:58:24.888452 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:58:24.888460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:58:24.888469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:24.888477 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:58:24.888485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:58:24.888493 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:58:24.888521 systemd-journald[245]: Collecting audit messages is disabled.
Aug 12 23:58:24.888541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:58:24.888551 systemd-journald[245]: Journal started
Aug 12 23:58:24.888577 systemd-journald[245]: Runtime Journal (/run/log/journal/6af1d74f8fc64cdda297f336876cf2a6) is 6M, max 48.5M, 42.4M free.
Aug 12 23:58:24.882068 systemd-modules-load[246]: Inserted module 'overlay'
Aug 12 23:58:24.890383 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:58:24.903382 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:58:24.903440 kernel: Bridge firewalling registered
Aug 12 23:58:24.901728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:58:24.903335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:24.903474 systemd-modules-load[246]: Inserted module 'br_netfilter'
Aug 12 23:58:24.906072 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:58:24.909715 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:58:24.911389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:58:24.911900 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 12 23:58:24.924881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:58:24.926144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:58:24.930370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:58:24.934768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:58:24.937957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:58:24.939905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:58:24.959848 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:58:24.962004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:58:24.988713 systemd-resolved[285]: Positive Trust Anchors:
Aug 12 23:58:24.988733 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:58:24.988764 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:58:24.993761 systemd-resolved[285]: Defaulting to hostname 'linux'.
Aug 12 23:58:24.998802 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:58:24.995089 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:58:24.996929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:58:25.083696 kernel: SCSI subsystem initialized
Aug 12 23:58:25.088680 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:58:25.099690 kernel: iscsi: registered transport (tcp)
Aug 12 23:58:25.112875 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:58:25.112934 kernel: QLogic iSCSI HBA Driver
Aug 12 23:58:25.131014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:58:25.146714 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:58:25.148048 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:58:25.201387 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:58:25.204845 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:58:25.272692 kernel: raid6: neonx8 gen() 15244 MB/s
Aug 12 23:58:25.289680 kernel: raid6: neonx4 gen() 15540 MB/s
Aug 12 23:58:25.306695 kernel: raid6: neonx2 gen() 13125 MB/s
Aug 12 23:58:25.323690 kernel: raid6: neonx1 gen() 10454 MB/s
Aug 12 23:58:25.340688 kernel: raid6: int64x8 gen() 6812 MB/s
Aug 12 23:58:25.357675 kernel: raid6: int64x4 gen() 7261 MB/s
Aug 12 23:58:25.374686 kernel: raid6: int64x2 gen() 6052 MB/s
Aug 12 23:58:25.391702 kernel: raid6: int64x1 gen() 5041 MB/s
Aug 12 23:58:25.391771 kernel: raid6: using algorithm neonx4 gen() 15540 MB/s
Aug 12 23:58:25.408678 kernel: raid6: .... xor() 12317 MB/s, rmw enabled
Aug 12 23:58:25.408700 kernel: raid6: using neon recovery algorithm
Aug 12 23:58:25.413676 kernel: xor: measuring software checksum speed
Aug 12 23:58:25.414874 kernel: 8regs : 19074 MB/sec
Aug 12 23:58:25.414894 kernel: 32regs : 21699 MB/sec
Aug 12 23:58:25.415839 kernel: arm64_neon : 28099 MB/sec
Aug 12 23:58:25.415856 kernel: xor: using function: arm64_neon (28099 MB/sec)
Aug 12 23:58:25.476705 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:58:25.485028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:58:25.488605 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:58:25.517327 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Aug 12 23:58:25.521768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:58:25.523492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:58:25.551926 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Aug 12 23:58:25.579337 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:58:25.581887 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:58:25.643452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:58:25.646163 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:58:25.695420 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 12 23:58:25.695609 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:58:25.698912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:58:25.698963 kernel: GPT:9289727 != 19775487
Aug 12 23:58:25.698973 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:58:25.699734 kernel: GPT:9289727 != 19775487
Aug 12 23:58:25.701004 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:58:25.701048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:58:25.708863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:58:25.709015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:25.712431 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:25.718225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:25.748837 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:58:25.750095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:25.758702 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:58:25.768244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:58:25.775219 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:58:25.776213 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:58:25.785178 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:58:25.786219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:58:25.787859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:58:25.789540 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:58:25.791946 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:58:25.793607 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:58:25.811937 disk-uuid[590]: Primary Header is updated. Aug 12 23:58:25.811937 disk-uuid[590]: Secondary Entries is updated. Aug 12 23:58:25.811937 disk-uuid[590]: Secondary Header is updated. Aug 12 23:58:25.815689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:58:25.819031 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:58:26.838467 disk-uuid[595]: The operation has completed successfully. Aug 12 23:58:26.839647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:58:26.869921 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:58:26.870022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:58:26.900673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:58:26.922920 sh[611]: Success Aug 12 23:58:26.936432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:58:26.936487 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:58:26.936498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 12 23:58:26.945692 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 12 23:58:26.975948 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:58:26.978671 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 12 23:58:26.992255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 12 23:58:27.000283 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 12 23:58:27.000341 kernel: BTRFS: device fsid 7658cdd8-2ee4-4f84-82be-1f808605c89c devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (623) Aug 12 23:58:27.002779 kernel: BTRFS info (device dm-0): first mount of filesystem 7658cdd8-2ee4-4f84-82be-1f808605c89c Aug 12 23:58:27.002813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:58:27.002823 kernel: BTRFS info (device dm-0): using free-space-tree Aug 12 23:58:27.007361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:58:27.008517 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:58:27.009705 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:58:27.010555 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:58:27.013484 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 12 23:58:27.039685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (654) Aug 12 23:58:27.043391 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:58:27.043440 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:58:27.043450 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:58:27.050852 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:58:27.051310 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:58:27.053619 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 12 23:58:27.144086 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:58:27.151140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:58:27.211857 systemd-networkd[796]: lo: Link UP Aug 12 23:58:27.211868 systemd-networkd[796]: lo: Gained carrier Aug 12 23:58:27.212593 systemd-networkd[796]: Enumeration completed Aug 12 23:58:27.212705 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:58:27.213138 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:58:27.213141 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:58:27.213595 systemd-networkd[796]: eth0: Link UP Aug 12 23:58:27.213712 systemd[1]: Reached target network.target - Network. Aug 12 23:58:27.214388 systemd-networkd[796]: eth0: Gained carrier Aug 12 23:58:27.214400 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 12 23:58:27.237740 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:58:27.247488 ignition[703]: Ignition 2.21.0 Aug 12 23:58:27.247504 ignition[703]: Stage: fetch-offline Aug 12 23:58:27.247544 ignition[703]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:27.247552 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:27.247771 ignition[703]: parsed url from cmdline: "" Aug 12 23:58:27.247774 ignition[703]: no config URL provided Aug 12 23:58:27.247779 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:58:27.247786 ignition[703]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:58:27.247807 ignition[703]: op(1): [started] loading QEMU firmware config module Aug 12 23:58:27.247811 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 12 23:58:27.261038 ignition[703]: op(1): [finished] loading QEMU firmware config module Aug 12 23:58:27.301447 ignition[703]: parsing config with SHA512: f6860898d4fc43ee552b47386e8bddc7454ba271e913c6e00d234c1c00625b1dd4c7ae15a8e0345605a2916e7f3a5ef2c00ff1627dbadb15a819ff0ebd460e1d Aug 12 23:58:27.307892 unknown[703]: fetched base config from "system" Aug 12 23:58:27.307906 unknown[703]: fetched user config from "qemu" Aug 12 23:58:27.308342 ignition[703]: fetch-offline: fetch-offline passed Aug 12 23:58:27.308402 ignition[703]: Ignition finished successfully Aug 12 23:58:27.310474 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:58:27.312105 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 12 23:58:27.312908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 12 23:58:27.346571 ignition[809]: Ignition 2.21.0 Aug 12 23:58:27.346589 ignition[809]: Stage: kargs Aug 12 23:58:27.346812 ignition[809]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:27.346827 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:27.348168 ignition[809]: kargs: kargs passed Aug 12 23:58:27.348226 ignition[809]: Ignition finished successfully Aug 12 23:58:27.350936 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:58:27.353319 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 12 23:58:27.381279 ignition[817]: Ignition 2.21.0 Aug 12 23:58:27.381295 ignition[817]: Stage: disks Aug 12 23:58:27.381456 ignition[817]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:27.381466 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:27.384672 ignition[817]: disks: disks passed Aug 12 23:58:27.384732 ignition[817]: Ignition finished successfully Aug 12 23:58:27.387294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:58:27.388325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:58:27.389467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:58:27.391018 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:58:27.392403 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:58:27.393733 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:58:27.395845 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 12 23:58:27.427303 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 12 23:58:27.432628 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:58:27.435450 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 12 23:58:27.507675 kernel: EXT4-fs (vda9): mounted filesystem d634334e-91a3-4b77-89ab-775bdd78a572 r/w with ordered data mode. Quota mode: none. Aug 12 23:58:27.507814 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:58:27.508880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:58:27.511182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:58:27.512712 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:58:27.513514 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 12 23:58:27.513561 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:58:27.513590 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:58:27.524296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 12 23:58:27.526346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:58:27.533675 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (835) Aug 12 23:58:27.536056 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:58:27.536098 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:58:27.536110 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:58:27.540565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:58:27.584966 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:58:27.588240 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:58:27.592315 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:58:27.596399 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:58:27.675737 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:58:27.677535 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:58:27.679059 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:58:27.705692 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:58:27.721800 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 12 23:58:27.734449 ignition[950]: INFO : Ignition 2.21.0 Aug 12 23:58:27.734449 ignition[950]: INFO : Stage: mount Aug 12 23:58:27.736832 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:27.736832 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:27.736832 ignition[950]: INFO : mount: mount passed Aug 12 23:58:27.736832 ignition[950]: INFO : Ignition finished successfully Aug 12 23:58:27.738178 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:58:27.740524 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:58:27.999680 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:58:28.001177 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 12 23:58:28.023677 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (962) Aug 12 23:58:28.025376 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:58:28.025415 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:58:28.025426 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:58:28.029802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 12 23:58:28.067627 ignition[979]: INFO : Ignition 2.21.0 Aug 12 23:58:28.069744 ignition[979]: INFO : Stage: files Aug 12 23:58:28.069744 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:28.069744 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:28.071958 ignition[979]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:58:28.071958 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:58:28.071958 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:58:28.075192 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:58:28.076332 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:58:28.076332 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:58:28.075743 unknown[979]: wrote ssh authorized keys file for user: core Aug 12 23:58:28.079249 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:58:28.079249 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 12 23:58:28.323540 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:58:28.875913 systemd-networkd[796]: eth0: Gained IPv6LL Aug 12 23:58:29.928116 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:58:29.931781 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:58:29.933423 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:58:29.942820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:58:29.942820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:58:29.942820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:58:29.947434 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:58:29.947434 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:58:29.947434 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 12 23:58:30.260574 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 12 23:58:30.663390 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:58:30.663390 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 12 23:58:30.666534 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:58:30.668584 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:58:30.668584 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 12 23:58:30.668584 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 12 23:58:30.672644 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:58:30.672644 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:58:30.672644 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 12 23:58:30.672644 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:58:30.692903 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:58:30.697527 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:58:30.697527 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:58:30.697527 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:58:30.697527 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:58:30.702813 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:58:30.702813 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:58:30.702813 ignition[979]: INFO : files: files passed Aug 12 23:58:30.702813 ignition[979]: INFO : Ignition finished successfully Aug 12 23:58:30.706699 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:58:30.709221 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:58:30.711420 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:58:30.731183 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:58:30.731363 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:58:30.734758 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:58:30.738296 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.738296 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.740940 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.740444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:58:30.742336 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:58:30.744868 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:58:30.782179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:58:30.782508 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:58:30.784496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:58:30.785958 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:58:30.787513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:58:30.788474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:58:30.805502 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:58:30.808232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:58:30.837748 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:58:30.838793 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:58:30.840564 systemd[1]: Stopped target timers.target - Timer Units. 
Aug 12 23:58:30.842024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:58:30.842166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:58:30.844278 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:58:30.845904 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:58:30.847250 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:58:30.848629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:58:30.850261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:58:30.851876 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:58:30.853475 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:58:30.855103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:58:30.856860 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:58:30.858464 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:58:30.859979 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:58:30.861237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:58:30.861391 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:58:30.863704 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:58:30.865370 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:58:30.867040 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:58:30.868774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:58:30.870833 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:58:30.870983 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Aug 12 23:58:30.873479 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:58:30.873641 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:58:30.875865 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:58:30.877332 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:58:30.880774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:58:30.883435 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:58:30.884630 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:58:30.885905 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:58:30.886013 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:58:30.887290 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:58:30.887427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:58:30.888647 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:58:30.888787 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:58:30.890294 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:58:30.890456 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:58:30.892584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:58:30.894608 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:58:30.895352 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:58:30.895477 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:58:30.897004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:58:30.897113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 12 23:58:30.906641 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:58:30.917873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:58:30.933819 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:58:30.934833 ignition[1035]: INFO : Ignition 2.21.0 Aug 12 23:58:30.934833 ignition[1035]: INFO : Stage: umount Aug 12 23:58:30.934833 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:30.934833 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:30.940750 ignition[1035]: INFO : umount: umount passed Aug 12 23:58:30.940750 ignition[1035]: INFO : Ignition finished successfully Aug 12 23:58:30.939000 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:58:30.939114 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:58:30.943505 systemd[1]: Stopped target network.target - Network. Aug 12 23:58:30.945141 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:58:30.945254 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:58:30.946861 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:58:30.946909 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:58:30.948108 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:58:30.948149 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:58:30.951395 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:58:30.951449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:58:30.953359 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:58:30.956597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:58:30.959831 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Aug 12 23:58:30.959960 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:58:30.966838 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:58:30.967140 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:58:30.967188 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:58:30.971241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:58:30.971506 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:58:30.971646 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:58:30.975111 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:58:30.975788 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 12 23:58:30.977053 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:58:30.977108 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:58:30.979508 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:58:30.980977 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:58:30.981049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:58:30.982667 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:58:30.982711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:58:30.985266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:58:30.985311 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:58:30.987096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:58:30.991596 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Aug 12 23:58:31.001692 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:58:31.001842 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 12 23:58:31.004636 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:58:31.004835 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:58:31.006835 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:58:31.006884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 12 23:58:31.008216 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:58:31.008248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:58:31.009624 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:58:31.009689 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:58:31.011867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:58:31.011917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 12 23:58:31.014264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:58:31.014323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:58:31.017779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 12 23:58:31.019019 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 12 23:58:31.019091 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:58:31.021929 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 12 23:58:31.021987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:58:31.024616 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Aug 12 23:58:31.024677 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:58:31.027265 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:58:31.027317 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:58:31.029079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:58:31.029127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:58:31.050539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:58:31.050706 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 12 23:58:31.067484 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:58:31.067628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:58:31.069208 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 12 23:58:31.070255 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:58:31.070320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:58:31.074541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 12 23:58:31.094993 systemd[1]: Switching root. Aug 12 23:58:31.124897 systemd-journald[245]: Journal stopped Aug 12 23:58:31.964371 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Aug 12 23:58:31.964419 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:58:31.964434 kernel: SELinux: policy capability open_perms=1 Aug 12 23:58:31.964447 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:58:31.964456 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:58:31.964471 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:58:31.964485 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:58:31.964495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:58:31.964503 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:58:31.964512 kernel: SELinux: policy capability userspace_initial_context=0 Aug 12 23:58:31.964529 kernel: audit: type=1403 audit(1755043111.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:58:31.964541 systemd[1]: Successfully loaded SELinux policy in 49.334ms. Aug 12 23:58:31.964572 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.666ms. Aug 12 23:58:31.964585 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 12 23:58:31.964597 systemd[1]: Detected virtualization kvm. Aug 12 23:58:31.964607 systemd[1]: Detected architecture arm64. Aug 12 23:58:31.964617 systemd[1]: Detected first boot. Aug 12 23:58:31.964627 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:58:31.964638 zram_generator::config[1082]: No configuration found. Aug 12 23:58:31.964649 kernel: NET: Registered PF_VSOCK protocol family Aug 12 23:58:31.964674 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:58:31.964686 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Aug 12 23:58:31.964699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:58:31.964709 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 12 23:58:31.964719 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:58:31.964729 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 12 23:58:31.964739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 12 23:58:31.964750 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 12 23:58:31.964761 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 12 23:58:31.964771 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 12 23:58:31.964783 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 12 23:58:31.964793 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 12 23:58:31.964803 systemd[1]: Created slice user.slice - User and Session Slice. Aug 12 23:58:31.964813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:58:31.964823 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:58:31.964833 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 12 23:58:31.964843 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 12 23:58:31.964853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 12 23:58:31.964863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:58:31.964875 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Aug 12 23:58:31.964885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:58:31.964896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:58:31.964905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 12 23:58:31.964915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 12 23:58:31.964925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 12 23:58:31.964935 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 12 23:58:31.964947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:58:31.964957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:58:31.964967 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:58:31.964978 systemd[1]: Reached target swap.target - Swaps. Aug 12 23:58:31.964987 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 12 23:58:31.964997 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 12 23:58:31.965007 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 12 23:58:31.965017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:58:31.965028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:58:31.965038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:58:31.965049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 12 23:58:31.965060 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 12 23:58:31.965070 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 12 23:58:31.965079 systemd[1]: Mounting media.mount - External Media Directory... 
Aug 12 23:58:31.965089 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 12 23:58:31.965112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 12 23:58:31.965122 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 12 23:58:31.965133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:58:31.965145 systemd[1]: Reached target machines.target - Containers. Aug 12 23:58:31.965155 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 12 23:58:31.965165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:58:31.965176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:58:31.965186 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 12 23:58:31.965196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:58:31.965206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:58:31.965217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:58:31.965226 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 12 23:58:31.965247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:58:31.965259 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:58:31.965269 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:58:31.965279 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 12 23:58:31.965290 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Aug 12 23:58:31.965299 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:58:31.965310 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:58:31.965320 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:58:31.965332 kernel: fuse: init (API version 7.41) Aug 12 23:58:31.965342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:58:31.965353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 12 23:58:31.965363 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 12 23:58:31.965373 kernel: loop: module loaded Aug 12 23:58:31.965382 kernel: ACPI: bus type drm_connector registered Aug 12 23:58:31.965392 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 12 23:58:31.965402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:58:31.965412 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:58:31.965423 systemd[1]: Stopped verity-setup.service. Aug 12 23:58:31.965434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 12 23:58:31.965445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 12 23:58:31.965455 systemd[1]: Mounted media.mount - External Media Directory. Aug 12 23:58:31.965465 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 12 23:58:31.965476 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 12 23:58:31.965487 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 12 23:58:31.965497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 12 23:58:31.965508 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:58:31.965518 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 12 23:58:31.965529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:58:31.965541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:58:31.965557 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:58:31.965567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:58:31.965579 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 12 23:58:31.965589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:58:31.965600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:58:31.965633 systemd-journald[1150]: Collecting audit messages is disabled. Aug 12 23:58:31.965760 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:58:31.965772 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 12 23:58:31.965783 systemd-journald[1150]: Journal started Aug 12 23:58:31.965805 systemd-journald[1150]: Runtime Journal (/run/log/journal/6af1d74f8fc64cdda297f336876cf2a6) is 6M, max 48.5M, 42.4M free. Aug 12 23:58:31.696294 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:58:31.718771 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 12 23:58:31.719183 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:58:31.968683 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:58:31.969387 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:58:31.969589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:58:31.970874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Aug 12 23:58:31.972112 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:58:31.973609 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 12 23:58:31.975028 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 12 23:58:31.990344 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 12 23:58:31.992998 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 12 23:58:31.995136 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 12 23:58:31.996104 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:58:31.996139 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:58:31.998042 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 12 23:58:32.007096 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 12 23:58:32.008191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:58:32.009578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 12 23:58:32.013720 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 12 23:58:32.014758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:58:32.023308 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 12 23:58:32.024431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 12 23:58:32.025856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:58:32.029821 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 12 23:58:32.037339 systemd-journald[1150]: Time spent on flushing to /var/log/journal/6af1d74f8fc64cdda297f336876cf2a6 is 23.418ms for 883 entries. Aug 12 23:58:32.037339 systemd-journald[1150]: System Journal (/var/log/journal/6af1d74f8fc64cdda297f336876cf2a6) is 8M, max 195.6M, 187.6M free. Aug 12 23:58:32.072688 systemd-journald[1150]: Received client request to flush runtime journal. Aug 12 23:58:32.072803 kernel: loop0: detected capacity change from 0 to 138376 Aug 12 23:58:32.033072 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 12 23:58:32.041725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:58:32.043082 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 12 23:58:32.044440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 12 23:58:32.045909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 12 23:58:32.049335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 12 23:58:32.052406 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 12 23:58:32.080730 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 12 23:58:32.085034 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:58:32.095690 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 12 23:58:32.104449 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Aug 12 23:58:32.104467 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. 
Aug 12 23:58:32.111744 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:58:32.113891 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 12 23:58:32.117755 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 12 23:58:32.124766 kernel: loop1: detected capacity change from 0 to 203944 Aug 12 23:58:32.153771 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 12 23:58:32.156278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:58:32.160012 kernel: loop2: detected capacity change from 0 to 107312 Aug 12 23:58:32.183930 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 12 23:58:32.183953 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 12 23:58:32.188672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:58:32.193702 kernel: loop3: detected capacity change from 0 to 138376 Aug 12 23:58:32.205680 kernel: loop4: detected capacity change from 0 to 203944 Aug 12 23:58:32.215684 kernel: loop5: detected capacity change from 0 to 107312 Aug 12 23:58:32.224392 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 12 23:58:32.224837 (sd-merge)[1225]: Merged extensions into '/usr'. Aug 12 23:58:32.228878 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Aug 12 23:58:32.228899 systemd[1]: Reloading... Aug 12 23:58:32.283935 zram_generator::config[1250]: No configuration found. Aug 12 23:58:32.379631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 12 23:58:32.411242 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:58:32.447632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:58:32.447724 systemd[1]: Reloading finished in 217 ms. Aug 12 23:58:32.479700 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 12 23:58:32.480979 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 12 23:58:32.495988 systemd[1]: Starting ensure-sysext.service... Aug 12 23:58:32.497853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:58:32.513417 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Aug 12 23:58:32.513439 systemd[1]: Reloading... Aug 12 23:58:32.518033 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 12 23:58:32.518485 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 12 23:58:32.518800 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 12 23:58:32.518988 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 12 23:58:32.519618 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 12 23:58:32.519848 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Aug 12 23:58:32.519891 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Aug 12 23:58:32.522787 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:58:32.522906 systemd-tmpfiles[1286]: Skipping /boot Aug 12 23:58:32.533372 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 12 23:58:32.533530 systemd-tmpfiles[1286]: Skipping /boot Aug 12 23:58:32.565696 zram_generator::config[1313]: No configuration found. Aug 12 23:58:32.637400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:58:32.701579 systemd[1]: Reloading finished in 187 ms. Aug 12 23:58:32.724355 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 12 23:58:32.730462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:58:32.741411 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:58:32.744173 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 12 23:58:32.746488 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 12 23:58:32.752245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 12 23:58:32.759891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:58:32.762223 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 12 23:58:32.778159 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 12 23:58:32.779816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 12 23:58:32.784457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:58:32.794493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:58:32.797353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:58:32.810075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 12 23:58:32.811491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:58:32.811686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:58:32.813877 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 12 23:58:32.814861 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Aug 12 23:58:32.818771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:58:32.819390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:58:32.821371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:58:32.822037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:58:32.827458 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:58:32.827735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:58:32.839334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 12 23:58:32.841458 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 12 23:58:32.846967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:58:32.849369 systemd[1]: Finished ensure-sysext.service. Aug 12 23:58:32.851788 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 12 23:58:32.857267 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 12 23:58:32.864082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:58:32.865501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 12 23:58:32.867634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:58:32.869904 augenrules[1411]: No rules Aug 12 23:58:32.870445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:58:32.872454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:58:32.873480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:58:32.873535 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:58:32.896910 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:58:32.902940 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 12 23:58:32.904082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:58:32.904639 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:58:32.905746 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:58:32.909257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:58:32.909477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:58:32.920550 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:58:32.921045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:58:32.925605 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 12 23:58:32.928957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 12 23:58:32.944967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:58:32.949848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:58:32.967224 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:58:32.967724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:58:32.969890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:58:33.046373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:58:33.047510 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 12 23:58:33.049059 systemd[1]: Reached target time-set.target - System Time Set. Aug 12 23:58:33.052177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 12 23:58:33.082588 systemd-networkd[1427]: lo: Link UP Aug 12 23:58:33.082938 systemd-networkd[1427]: lo: Gained carrier Aug 12 23:58:33.089357 systemd-resolved[1353]: Positive Trust Anchors: Aug 12 23:58:33.089378 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:58:33.089410 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:58:33.092196 systemd-networkd[1427]: Enumeration completed Aug 12 23:58:33.092670 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:58:33.093302 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:58:33.093401 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:58:33.093970 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 12 23:58:33.094199 systemd-networkd[1427]: eth0: Link UP Aug 12 23:58:33.094459 systemd-networkd[1427]: eth0: Gained carrier Aug 12 23:58:33.094588 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:58:33.098905 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 12 23:58:33.099566 systemd-resolved[1353]: Defaulting to hostname 'linux'. Aug 12 23:58:33.103673 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 12 23:58:33.104723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 12 23:58:33.105745 systemd[1]: Reached target network.target - Network. Aug 12 23:58:33.107369 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:58:33.108409 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:58:33.109340 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 12 23:58:33.110299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 12 23:58:33.111456 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 12 23:58:33.112424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 12 23:58:33.113455 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 12 23:58:33.114716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 12 23:58:33.114754 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:58:33.115464 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:58:33.117642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 12 23:58:33.122770 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:58:33.127331 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 12 23:58:33.128352 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 12 23:58:33.128402 systemd-timesyncd[1429]: Initial clock synchronization to Tue 2025-08-12 23:58:33.348269 UTC. Aug 12 23:58:33.131777 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 12 23:58:33.136169 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Aug 12 23:58:33.137641 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 12 23:58:33.138970 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 12 23:58:33.142490 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 12 23:58:33.145267 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 12 23:58:33.147606 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 12 23:58:33.148933 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:58:33.158433 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:58:33.159300 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:58:33.160198 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:58:33.160229 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:58:33.161666 systemd[1]: Starting containerd.service - containerd container runtime... Aug 12 23:58:33.163524 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:58:33.165422 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:58:33.173622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:58:33.175588 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:58:33.176459 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:58:33.177622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:58:33.179761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Aug 12 23:58:33.181886 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 12 23:58:33.185900 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 12 23:58:33.189826 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:58:33.195674 jq[1470]: false Aug 12 23:58:33.191931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:58:33.193917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:58:33.194453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:58:33.195882 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:58:33.197781 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:58:33.202678 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:58:33.203946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:58:33.204153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:58:33.204476 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:58:33.204895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 12 23:58:33.213744 jq[1485]: true Aug 12 23:58:33.215430 extend-filesystems[1471]: Found /dev/vda6 Aug 12 23:58:33.222527 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:58:33.223858 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 12 23:58:33.237033 extend-filesystems[1471]: Found /dev/vda9 Aug 12 23:58:33.242205 jq[1499]: true Aug 12 23:58:33.243219 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:58:33.245930 extend-filesystems[1471]: Checking size of /dev/vda9 Aug 12 23:58:33.276559 extend-filesystems[1471]: Resized partition /dev/vda9 Aug 12 23:58:33.297778 extend-filesystems[1522]: resize2fs 1.47.2 (1-Jan-2025) Aug 12 23:58:33.305685 tar[1500]: linux-arm64/helm Aug 12 23:58:33.310729 dbus-daemon[1468]: [system] SELinux support is enabled Aug 12 23:58:33.310930 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:58:33.313952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:58:33.313995 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 12 23:58:33.314784 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 12 23:58:33.315809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:58:33.315837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 12 23:58:33.353245 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 12 23:58:33.376970 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:58:33.376970 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 12 23:58:33.376970 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 12 23:58:33.379757 systemd[1]: extend-filesystems.service: Deactivated successfully. 
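The EXT4 messages above record an online grow of `/dev/vda9` from 553472 to 1864699 blocks. With the 4 KiB block size that `resize2fs` reports ("1864699 (4k) blocks"), that works out as follows (numbers taken from the log; this is just a unit conversion, not part of the boot process):

```python
# Convert the block counts from the resize2fs/EXT4 log lines into GiB.
BLOCK = 4096  # "(4k) blocks" per the extend-filesystems output above
old_blocks, new_blocks = 553_472, 1_864_699
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # -> 2.11 GiB -> 7.11 GiB
```

So the root filesystem grew from roughly 2.1 GiB to 7.1 GiB while mounted, which is why the log shows "on-line resizing required" rather than an unmount.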
Aug 12 23:58:33.381593 update_engine[1484]: I20250812 23:58:33.378585 1484 main.cc:92] Flatcar Update Engine starting Aug 12 23:58:33.381808 extend-filesystems[1471]: Resized filesystem in /dev/vda9 Aug 12 23:58:33.380015 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:58:33.383450 update_engine[1484]: I20250812 23:58:33.383369 1484 update_check_scheduler.cc:74] Next update check in 2m11s Aug 12 23:58:33.384078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:58:33.387149 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:58:33.390091 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 12 23:58:33.391058 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:58:33.395754 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:58:33.395807 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:58:33.396072 systemd-logind[1481]: New seat seat0. Aug 12 23:58:33.397275 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 12 23:58:33.397605 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 12 23:58:33.486034 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:58:33.541349 containerd[1504]: time="2025-08-12T23:58:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 12 23:58:33.543006 containerd[1504]: time="2025-08-12T23:58:33.542953800Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553473680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.84µs" Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553530600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553569160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553773520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553793960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553825160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 12 23:58:33.553925 containerd[1504]: time="2025-08-12T23:58:33.553888560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554145 containerd[1504]: time="2025-08-12T23:58:33.554106040Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554464 containerd[1504]: time="2025-08-12T23:58:33.554437120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554488 containerd[1504]: time="2025-08-12T23:58:33.554461840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554506 containerd[1504]: time="2025-08-12T23:58:33.554480280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554506 containerd[1504]: time="2025-08-12T23:58:33.554493760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554885 containerd[1504]: time="2025-08-12T23:58:33.554696840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554957 containerd[1504]: time="2025-08-12T23:58:33.554930120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554993 containerd[1504]: time="2025-08-12T23:58:33.554969600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 12 23:58:33.554993 containerd[1504]: time="2025-08-12T23:58:33.554980840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 12 23:58:33.555037 containerd[1504]: time="2025-08-12T23:58:33.555016160Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 12 23:58:33.555253 containerd[1504]: time="2025-08-12T23:58:33.555236840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 12 23:58:33.555322 containerd[1504]: time="2025-08-12T23:58:33.555306000Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:58:33.560740 containerd[1504]: time="2025-08-12T23:58:33.560692840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 12 23:58:33.560889 containerd[1504]: time="2025-08-12T23:58:33.560769720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 12 23:58:33.560889 containerd[1504]: time="2025-08-12T23:58:33.560785840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 12 23:58:33.560889 containerd[1504]: time="2025-08-12T23:58:33.560801960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 12 23:58:33.560889 containerd[1504]: time="2025-08-12T23:58:33.560866600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 12 23:58:33.560889 containerd[1504]: time="2025-08-12T23:58:33.560884600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560899640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560912560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560924240Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560936840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560947560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 12 23:58:33.560984 containerd[1504]: time="2025-08-12T23:58:33.560960880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 12 23:58:33.561136 containerd[1504]: time="2025-08-12T23:58:33.561114920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 12 23:58:33.561159 containerd[1504]: time="2025-08-12T23:58:33.561142640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 12 23:58:33.561176 containerd[1504]: time="2025-08-12T23:58:33.561160960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 12 23:58:33.561176 containerd[1504]: time="2025-08-12T23:58:33.561172760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 12 23:58:33.561207 containerd[1504]: time="2025-08-12T23:58:33.561184360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 12 23:58:33.561207 containerd[1504]: time="2025-08-12T23:58:33.561195360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 12 23:58:33.561243 containerd[1504]: time="2025-08-12T23:58:33.561207440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 12 23:58:33.561243 containerd[1504]: time="2025-08-12T23:58:33.561218120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 12 
23:58:33.561243 containerd[1504]: time="2025-08-12T23:58:33.561229240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 12 23:58:33.561243 containerd[1504]: time="2025-08-12T23:58:33.561239760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 12 23:58:33.561312 containerd[1504]: time="2025-08-12T23:58:33.561250360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 12 23:58:33.561472 containerd[1504]: time="2025-08-12T23:58:33.561442920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 12 23:58:33.561472 containerd[1504]: time="2025-08-12T23:58:33.561467080Z" level=info msg="Start snapshots syncer" Aug 12 23:58:33.561531 containerd[1504]: time="2025-08-12T23:58:33.561496080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 12 23:58:33.561800 containerd[1504]: time="2025-08-12T23:58:33.561760840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 12 23:58:33.561920 containerd[1504]: time="2025-08-12T23:58:33.561820680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 12 23:58:33.561920 containerd[1504]: time="2025-08-12T23:58:33.561897640Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 12 23:58:33.562055 containerd[1504]: time="2025-08-12T23:58:33.562031800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 12 23:58:33.562082 containerd[1504]: time="2025-08-12T23:58:33.562061720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 12 23:58:33.562082 containerd[1504]: time="2025-08-12T23:58:33.562073560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 12 23:58:33.562115 containerd[1504]: time="2025-08-12T23:58:33.562087360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 12 23:58:33.562115 containerd[1504]: time="2025-08-12T23:58:33.562107080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 12 23:58:33.562155 containerd[1504]: time="2025-08-12T23:58:33.562117960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 12 23:58:33.562155 containerd[1504]: time="2025-08-12T23:58:33.562128840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 12 23:58:33.562190 containerd[1504]: time="2025-08-12T23:58:33.562162440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 12 23:58:33.562190 containerd[1504]: time="2025-08-12T23:58:33.562174520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 12 23:58:33.562190 containerd[1504]: time="2025-08-12T23:58:33.562185200Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 12 23:58:33.562237 containerd[1504]: time="2025-08-12T23:58:33.562229200Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 12 23:58:33.562259 containerd[1504]: time="2025-08-12T23:58:33.562246120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 12 23:58:33.562278 containerd[1504]: time="2025-08-12T23:58:33.562256240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 12 23:58:33.562278 containerd[1504]: time="2025-08-12T23:58:33.562266760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 12 23:58:33.562341 containerd[1504]: time="2025-08-12T23:58:33.562325280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 12 23:58:33.562364 containerd[1504]: time="2025-08-12T23:58:33.562342000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 12 23:58:33.562364 containerd[1504]: time="2025-08-12T23:58:33.562354920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 12 23:58:33.562633 containerd[1504]: time="2025-08-12T23:58:33.562606360Z" level=info msg="runtime interface created" Aug 12 23:58:33.562633 containerd[1504]: time="2025-08-12T23:58:33.562619880Z" level=info msg="created NRI interface" Aug 12 23:58:33.562633 containerd[1504]: time="2025-08-12T23:58:33.562631320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 12 23:58:33.562717 containerd[1504]: time="2025-08-12T23:58:33.562644480Z" level=info msg="Connect containerd service" Aug 12 23:58:33.562717 containerd[1504]: time="2025-08-12T23:58:33.562691520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:58:33.563695 
containerd[1504]: time="2025-08-12T23:58:33.563646480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:58:33.701617 containerd[1504]: time="2025-08-12T23:58:33.701549160Z" level=info msg="Start subscribing containerd event" Aug 12 23:58:33.701907 containerd[1504]: time="2025-08-12T23:58:33.701888680Z" level=info msg="Start recovering state" Aug 12 23:58:33.702128 containerd[1504]: time="2025-08-12T23:58:33.702101120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:58:33.702161 containerd[1504]: time="2025-08-12T23:58:33.702150640Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:58:33.702238 containerd[1504]: time="2025-08-12T23:58:33.702221800Z" level=info msg="Start event monitor" Aug 12 23:58:33.702304 containerd[1504]: time="2025-08-12T23:58:33.702292480Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:58:33.702470 containerd[1504]: time="2025-08-12T23:58:33.702454920Z" level=info msg="Start streaming server" Aug 12 23:58:33.702545 containerd[1504]: time="2025-08-12T23:58:33.702525600Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 12 23:58:33.703153 containerd[1504]: time="2025-08-12T23:58:33.703117240Z" level=info msg="runtime interface starting up..." Aug 12 23:58:33.703153 containerd[1504]: time="2025-08-12T23:58:33.703149360Z" level=info msg="starting plugins..." Aug 12 23:58:33.703203 containerd[1504]: time="2025-08-12T23:58:33.703177840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 12 23:58:33.705173 systemd[1]: Started containerd.service - containerd container runtime. 
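The large `config="{...}"` blob that the cri plugin printed earlier in this log is plain JSON, so details like the cgroup driver can be pulled out programmatically. A minimal sketch using an abbreviated fragment of that blob (only the keys shown in the log; the full config has many more fields):

```python
import json

# Abbreviated fragment of the cri plugin config dumped in the log above.
cfg = json.loads(
    '{"containerd":{"defaultRuntimeName":"runc","runtimes":'
    '{"runc":{"runtimeType":"io.containerd.runc.v2",'
    '"options":{"SystemdCgroup":true}}}}}'
)
runc = cfg["containerd"]["runtimes"]["runc"]
# SystemdCgroup:true means the runc shim delegates cgroups to systemd,
# matching the systemd-managed host this log comes from.
print(runc["runtimeType"], runc["options"]["SystemdCgroup"])
```

The `"cni config load failed"` error that follows is consistent with this: `confDir` is `/etc/cni/net.d`, which is still empty at this point in boot, so pod networking only comes up once a CNI plugin writes a config there.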
Aug 12 23:58:33.706267 containerd[1504]: time="2025-08-12T23:58:33.706225600Z" level=info msg="containerd successfully booted in 0.165232s" Aug 12 23:58:33.717262 tar[1500]: linux-arm64/LICENSE Aug 12 23:58:33.717383 tar[1500]: linux-arm64/README.md Aug 12 23:58:33.734075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:58:34.209611 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:58:34.231749 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:58:34.234581 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:58:34.260060 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:58:34.260336 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:58:34.263406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:58:34.286822 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:58:34.289469 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:58:34.291416 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 12 23:58:34.292486 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:58:34.380281 systemd-networkd[1427]: eth0: Gained IPv6LL Aug 12 23:58:34.382748 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:58:34.384165 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:58:34.386440 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:58:34.390090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:58:34.397079 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:58:34.418668 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:58:34.419348 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Aug 12 23:58:34.420896 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:58:34.422617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:58:35.016342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:58:35.017823 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:58:35.020229 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:58:35.020763 systemd[1]: Startup finished in 2.131s (kernel) + 6.661s (initrd) + 3.784s (userspace) = 12.577s. Aug 12 23:58:35.549083 kubelet[1609]: E0812 23:58:35.549014 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:58:35.551916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:58:35.552274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:58:35.552964 systemd[1]: kubelet.service: Consumed 904ms CPU time, 258.5M memory peak. Aug 12 23:58:37.921033 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:58:37.922350 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:34446.service - OpenSSH per-connection server daemon (10.0.0.1:34446). Aug 12 23:58:37.998067 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 34446 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:58:38.000126 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:38.013178 systemd-logind[1481]: New session 1 of user core. 
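The kubelet exit logged above (missing `/var/lib/kubelet/config.yaml`) is the expected first-boot state of a node that has not yet been joined to a cluster; `kubeadm join`/`kubeadm init` normally writes that file. For orientation only, a minimal hand-written sketch of such a file (the `kind`/`apiVersion` are the real KubeletConfiguration schema identifiers; the field values below are hypothetical, not recovered from this log):

```yaml
# /var/lib/kubelet/config.yaml -- hypothetical minimal KubeletConfiguration
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd        # matches SystemdCgroup=true in the containerd config above
failSwapOn: true
authentication:
  anonymous:
    enabled: false
```

Until such a file exists, systemd will keep restarting `kubelet.service` and logging the same `run.go:72` failure.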
Aug 12 23:58:38.014243 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:58:38.015374 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:58:38.043775 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:58:38.047663 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:58:38.063083 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:58:38.065752 systemd-logind[1481]: New session c1 of user core. Aug 12 23:58:38.189393 systemd[1627]: Queued start job for default target default.target. Aug 12 23:58:38.207700 systemd[1627]: Created slice app.slice - User Application Slice. Aug 12 23:58:38.207730 systemd[1627]: Reached target paths.target - Paths. Aug 12 23:58:38.207770 systemd[1627]: Reached target timers.target - Timers. Aug 12 23:58:38.209074 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:58:38.219498 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:58:38.219569 systemd[1627]: Reached target sockets.target - Sockets. Aug 12 23:58:38.219611 systemd[1627]: Reached target basic.target - Basic System. Aug 12 23:58:38.219640 systemd[1627]: Reached target default.target - Main User Target. Aug 12 23:58:38.219697 systemd[1627]: Startup finished in 147ms. Aug 12 23:58:38.219894 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:58:38.222426 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:58:38.291312 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:34456.service - OpenSSH per-connection server daemon (10.0.0.1:34456). 
Aug 12 23:58:38.347402 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 34456 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:38.349051 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:38.353921 systemd-logind[1481]: New session 2 of user core.
Aug 12 23:58:38.372413 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 12 23:58:38.437776 sshd[1640]: Connection closed by 10.0.0.1 port 34456
Aug 12 23:58:38.440426 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:38.454631 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:34456.service: Deactivated successfully.
Aug 12 23:58:38.459235 systemd[1]: session-2.scope: Deactivated successfully.
Aug 12 23:58:38.463278 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit.
Aug 12 23:58:38.465360 systemd-logind[1481]: Removed session 2.
Aug 12 23:58:38.467536 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:34470.service - OpenSSH per-connection server daemon (10.0.0.1:34470).
Aug 12 23:58:38.540327 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 34470 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:38.541843 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:38.546489 systemd-logind[1481]: New session 3 of user core.
Aug 12 23:58:38.553844 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 12 23:58:38.602805 sshd[1648]: Connection closed by 10.0.0.1 port 34470
Aug 12 23:58:38.603514 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:38.625270 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:34470.service: Deactivated successfully.
Aug 12 23:58:38.627046 systemd[1]: session-3.scope: Deactivated successfully.
Aug 12 23:58:38.629309 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit.
Aug 12 23:58:38.632388 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:34482.service - OpenSSH per-connection server daemon (10.0.0.1:34482).
Aug 12 23:58:38.633152 systemd-logind[1481]: Removed session 3.
Aug 12 23:58:38.696090 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 34482 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:38.697784 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:38.702807 systemd-logind[1481]: New session 4 of user core.
Aug 12 23:58:38.712847 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 12 23:58:38.766585 sshd[1656]: Connection closed by 10.0.0.1 port 34482
Aug 12 23:58:38.767189 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:38.777893 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:34482.service: Deactivated successfully.
Aug 12 23:58:38.779549 systemd[1]: session-4.scope: Deactivated successfully.
Aug 12 23:58:38.781550 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit.
Aug 12 23:58:38.784403 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:34486.service - OpenSSH per-connection server daemon (10.0.0.1:34486).
Aug 12 23:58:38.785506 systemd-logind[1481]: Removed session 4.
Aug 12 23:58:38.833822 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34486 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:38.835169 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:38.839740 systemd-logind[1481]: New session 5 of user core.
Aug 12 23:58:38.853882 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 12 23:58:38.944323 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 12 23:58:38.944609 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:58:38.965507 sudo[1666]: pam_unix(sudo:session): session closed for user root
Aug 12 23:58:38.967432 sshd[1665]: Connection closed by 10.0.0.1 port 34486
Aug 12 23:58:38.968229 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:38.977515 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:34486.service: Deactivated successfully.
Aug 12 23:58:38.979366 systemd[1]: session-5.scope: Deactivated successfully.
Aug 12 23:58:38.980203 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit.
Aug 12 23:58:38.984136 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:34500.service - OpenSSH per-connection server daemon (10.0.0.1:34500).
Aug 12 23:58:38.985185 systemd-logind[1481]: Removed session 5.
Aug 12 23:58:39.048254 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 34500 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:39.049702 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:39.053796 systemd-logind[1481]: New session 6 of user core.
Aug 12 23:58:39.071894 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 12 23:58:39.123787 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 12 23:58:39.124059 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:58:39.161089 sudo[1676]: pam_unix(sudo:session): session closed for user root
Aug 12 23:58:39.166461 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 12 23:58:39.166762 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:58:39.176600 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:58:39.216997 augenrules[1698]: No rules
Aug 12 23:58:39.218798 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:58:39.219007 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:58:39.220905 sudo[1675]: pam_unix(sudo:session): session closed for user root
Aug 12 23:58:39.222172 sshd[1674]: Connection closed by 10.0.0.1 port 34500
Aug 12 23:58:39.222606 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:39.234883 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:34500.service: Deactivated successfully.
Aug 12 23:58:39.236560 systemd[1]: session-6.scope: Deactivated successfully.
Aug 12 23:58:39.237311 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit.
Aug 12 23:58:39.240077 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:34510.service - OpenSSH per-connection server daemon (10.0.0.1:34510).
Aug 12 23:58:39.240974 systemd-logind[1481]: Removed session 6.
Aug 12 23:58:39.297112 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 34510 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 12 23:58:39.298455 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:39.303097 systemd-logind[1481]: New session 7 of user core.
Aug 12 23:58:39.313880 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 12 23:58:39.365217 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 12 23:58:39.365478 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:58:39.816609 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 12 23:58:39.835092 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 12 23:58:40.156754 dockerd[1731]: time="2025-08-12T23:58:40.156624001Z" level=info msg="Starting up"
Aug 12 23:58:40.157764 dockerd[1731]: time="2025-08-12T23:58:40.157726036Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 12 23:58:40.250936 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2150007928-merged.mount: Deactivated successfully.
Aug 12 23:58:40.267876 dockerd[1731]: time="2025-08-12T23:58:40.267829851Z" level=info msg="Loading containers: start."
Aug 12 23:58:40.275711 kernel: Initializing XFRM netlink socket
Aug 12 23:58:40.489754 systemd-networkd[1427]: docker0: Link UP
Aug 12 23:58:40.494097 dockerd[1731]: time="2025-08-12T23:58:40.493959514Z" level=info msg="Loading containers: done."
Aug 12 23:58:40.508259 dockerd[1731]: time="2025-08-12T23:58:40.508183840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 12 23:58:40.508417 dockerd[1731]: time="2025-08-12T23:58:40.508291472Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 12 23:58:40.508417 dockerd[1731]: time="2025-08-12T23:58:40.508409147Z" level=info msg="Initializing buildkit"
Aug 12 23:58:40.534000 dockerd[1731]: time="2025-08-12T23:58:40.533946427Z" level=info msg="Completed buildkit initialization"
Aug 12 23:58:40.540194 dockerd[1731]: time="2025-08-12T23:58:40.540144790Z" level=info msg="Daemon has completed initialization"
Aug 12 23:58:40.540342 dockerd[1731]: time="2025-08-12T23:58:40.540245943Z" level=info msg="API listen on /run/docker.sock"
Aug 12 23:58:40.540351 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 12 23:58:41.106578 containerd[1504]: time="2025-08-12T23:58:41.106528678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 12 23:58:41.896091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555576840.mount: Deactivated successfully.
Aug 12 23:58:42.755257 containerd[1504]: time="2025-08-12T23:58:42.755205032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:42.756351 containerd[1504]: time="2025-08-12T23:58:42.756310903Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815"
Aug 12 23:58:42.758592 containerd[1504]: time="2025-08-12T23:58:42.758538958Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:42.762924 containerd[1504]: time="2025-08-12T23:58:42.762862670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:42.764064 containerd[1504]: time="2025-08-12T23:58:42.764007344Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.657435984s"
Aug 12 23:58:42.764064 containerd[1504]: time="2025-08-12T23:58:42.764059230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\""
Aug 12 23:58:42.769631 containerd[1504]: time="2025-08-12T23:58:42.769585155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 12 23:58:43.824065 containerd[1504]: time="2025-08-12T23:58:43.823994300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:43.824916 containerd[1504]: time="2025-08-12T23:58:43.824888835Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285"
Aug 12 23:58:43.831645 containerd[1504]: time="2025-08-12T23:58:43.831611036Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:43.847187 containerd[1504]: time="2025-08-12T23:58:43.847129566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:43.848286 containerd[1504]: time="2025-08-12T23:58:43.848239306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.078613583s"
Aug 12 23:58:43.848324 containerd[1504]: time="2025-08-12T23:58:43.848278951Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\""
Aug 12 23:58:43.848875 containerd[1504]: time="2025-08-12T23:58:43.848842936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 12 23:58:45.020042 containerd[1504]: time="2025-08-12T23:58:45.019980175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:45.021295 containerd[1504]: time="2025-08-12T23:58:45.021047489Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091"
Aug 12 23:58:45.022141 containerd[1504]: time="2025-08-12T23:58:45.022112749Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:45.025551 containerd[1504]: time="2025-08-12T23:58:45.025511890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:45.026760 containerd[1504]: time="2025-08-12T23:58:45.026726329Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.177848325s"
Aug 12 23:58:45.026822 containerd[1504]: time="2025-08-12T23:58:45.026759659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\""
Aug 12 23:58:45.027271 containerd[1504]: time="2025-08-12T23:58:45.027250507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 12 23:58:45.674518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:58:45.677885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:58:45.852069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:58:45.882107 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:58:45.944358 kubelet[2012]: E0812 23:58:45.944063 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:58:45.948501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:58:45.948641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:58:45.951773 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107.5M memory peak.
Aug 12 23:58:46.219479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257889431.mount: Deactivated successfully.
Aug 12 23:58:46.521714 containerd[1504]: time="2025-08-12T23:58:46.521500443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:46.522791 containerd[1504]: time="2025-08-12T23:58:46.522741077Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995"
Aug 12 23:58:46.525250 containerd[1504]: time="2025-08-12T23:58:46.525213254Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:46.528673 containerd[1504]: time="2025-08-12T23:58:46.528626575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:46.529180 containerd[1504]: time="2025-08-12T23:58:46.529144267Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.501864706s"
Aug 12 23:58:46.529213 containerd[1504]: time="2025-08-12T23:58:46.529182477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\""
Aug 12 23:58:46.529648 containerd[1504]: time="2025-08-12T23:58:46.529620490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 12 23:58:47.101227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064058949.mount: Deactivated successfully.
Aug 12 23:58:47.770700 containerd[1504]: time="2025-08-12T23:58:47.770588423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:47.771786 containerd[1504]: time="2025-08-12T23:58:47.771747574Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Aug 12 23:58:47.772721 containerd[1504]: time="2025-08-12T23:58:47.772669983Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:47.775646 containerd[1504]: time="2025-08-12T23:58:47.775605621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:47.776976 containerd[1504]: time="2025-08-12T23:58:47.776814693Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.247158976s"
Aug 12 23:58:47.776976 containerd[1504]: time="2025-08-12T23:58:47.776850144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 12 23:58:47.777455 containerd[1504]: time="2025-08-12T23:58:47.777385485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 12 23:58:48.394080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561487393.mount: Deactivated successfully.
Aug 12 23:58:48.439320 containerd[1504]: time="2025-08-12T23:58:48.438798684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:58:48.439450 containerd[1504]: time="2025-08-12T23:58:48.439345714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Aug 12 23:58:48.440592 containerd[1504]: time="2025-08-12T23:58:48.440553213Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:58:48.442558 containerd[1504]: time="2025-08-12T23:58:48.442525140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:58:48.443199 containerd[1504]: time="2025-08-12T23:58:48.443175406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 665.757087ms"
Aug 12 23:58:48.443228 containerd[1504]: time="2025-08-12T23:58:48.443205654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 12 23:58:48.443822 containerd[1504]: time="2025-08-12T23:58:48.443695562Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 12 23:58:49.056433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365804006.mount: Deactivated successfully.
Aug 12 23:58:50.943322 containerd[1504]: time="2025-08-12T23:58:50.943269650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:50.944489 containerd[1504]: time="2025-08-12T23:58:50.944019717Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Aug 12 23:58:50.945601 containerd[1504]: time="2025-08-12T23:58:50.945012329Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:50.948679 containerd[1504]: time="2025-08-12T23:58:50.948619523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:58:50.949799 containerd[1504]: time="2025-08-12T23:58:50.949756522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.505979826s"
Aug 12 23:58:50.949799 containerd[1504]: time="2025-08-12T23:58:50.949799060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Aug 12 23:58:55.524459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:58:55.524875 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107.5M memory peak.
Aug 12 23:58:55.527914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:58:55.549523 systemd[1]: Reload requested from client PID 2170 ('systemctl') (unit session-7.scope)...
Aug 12 23:58:55.549542 systemd[1]: Reloading...
Aug 12 23:58:55.625750 zram_generator::config[2215]: No configuration found.
Aug 12 23:58:55.763769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:58:55.855740 systemd[1]: Reloading finished in 305 ms.
Aug 12 23:58:55.905252 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 12 23:58:55.905335 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 12 23:58:55.905574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:58:55.905624 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95M memory peak.
Aug 12 23:58:55.907309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:58:56.119424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:58:56.140043 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:58:56.182675 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:58:56.182675 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:58:56.182675 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:58:56.183102 kubelet[2257]: I0812 23:58:56.182742 2257 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:58:56.588251 kubelet[2257]: I0812 23:58:56.588199 2257 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 12 23:58:56.588251 kubelet[2257]: I0812 23:58:56.588237 2257 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:58:56.588500 kubelet[2257]: I0812 23:58:56.588476 2257 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 12 23:58:56.622446 kubelet[2257]: E0812 23:58:56.622394 2257 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:56.624332 kubelet[2257]: I0812 23:58:56.624241 2257 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:58:56.637956 kubelet[2257]: I0812 23:58:56.637927 2257 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 12 23:58:56.641820 kubelet[2257]: I0812 23:58:56.641785 2257 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:58:56.643146 kubelet[2257]: I0812 23:58:56.643048 2257 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 12 23:58:56.643247 kubelet[2257]: I0812 23:58:56.643213 2257 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:58:56.643432 kubelet[2257]: I0812 23:58:56.643244 2257 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:58:56.643522 kubelet[2257]: I0812 23:58:56.643490 2257 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:58:56.643522 kubelet[2257]: I0812 23:58:56.643500 2257 container_manager_linux.go:300] "Creating device plugin manager"
Aug 12 23:58:56.643790 kubelet[2257]: I0812 23:58:56.643765 2257 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:58:56.646109 kubelet[2257]: I0812 23:58:56.645933 2257 kubelet.go:408] "Attempting to sync node with API server"
Aug 12 23:58:56.646109 kubelet[2257]: I0812 23:58:56.645968 2257 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:58:56.646109 kubelet[2257]: I0812 23:58:56.645991 2257 kubelet.go:314] "Adding apiserver pod source"
Aug 12 23:58:56.646109 kubelet[2257]: I0812 23:58:56.646067 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:58:56.650610 kubelet[2257]: W0812 23:58:56.650546 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Aug 12 23:58:56.650826 kubelet[2257]: E0812 23:58:56.650805 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:56.650909 kubelet[2257]: W0812 23:58:56.650564 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Aug 12 23:58:56.650982 kubelet[2257]: E0812 23:58:56.650967 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:56.653923 kubelet[2257]: I0812 23:58:56.653888 2257 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 12 23:58:56.654636 kubelet[2257]: I0812 23:58:56.654605 2257 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:58:56.654831 kubelet[2257]: W0812 23:58:56.654815 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:58:56.655913 kubelet[2257]: I0812 23:58:56.655890 2257 server.go:1274] "Started kubelet"
Aug 12 23:58:56.656142 kubelet[2257]: I0812 23:58:56.656095 2257 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:58:56.656585 kubelet[2257]: I0812 23:58:56.656533 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:58:56.657782 kubelet[2257]: I0812 23:58:56.657423 2257 server.go:449] "Adding debug handlers to kubelet server"
Aug 12 23:58:56.662027 kubelet[2257]: I0812 23:58:56.661979 2257 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:58:56.662705 kubelet[2257]: I0812 23:58:56.662621 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:58:56.664335 kubelet[2257]: I0812 23:58:56.664240 2257 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:58:56.665392 kubelet[2257]: I0812 23:58:56.665290 2257 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 12 23:58:56.665669 kubelet[2257]: E0812 23:58:56.665632 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:56.665901 kubelet[2257]: I0812 23:58:56.665889 2257 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 12 23:58:56.667179 kubelet[2257]: I0812 23:58:56.667119 2257 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:58:56.668604 kubelet[2257]: E0812 23:58:56.668430 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms"
Aug 12 23:58:56.669687 kubelet[2257]: W0812 23:58:56.668592 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Aug 12 23:58:56.670460 kubelet[2257]: E0812 23:58:56.668156 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a6f25dbf269 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:58:56.655848041 +0000 UTC m=+0.512664716,LastTimestamp:2025-08-12 23:58:56.655848041 +0000 UTC m=+0.512664716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:58:56.670460 kubelet[2257]: E0812 23:58:56.669986 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:56.670460 kubelet[2257]: I0812 23:58:56.670101 2257 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:58:56.670460 kubelet[2257]: I0812 23:58:56.670215 2257 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:58:56.670643 kubelet[2257]: E0812 23:58:56.670593 2257 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:58:56.671365 kubelet[2257]: I0812 23:58:56.671342 2257 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:58:56.687611 kubelet[2257]: I0812 23:58:56.687584 2257 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 12 23:58:56.687611 kubelet[2257]: I0812 23:58:56.687602 2257 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 12 23:58:56.687611 kubelet[2257]: I0812 23:58:56.687620 2257 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:58:56.691124 kubelet[2257]: I0812 23:58:56.691042 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:58:56.692359 kubelet[2257]: I0812 23:58:56.692310 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:58:56.692359 kubelet[2257]: I0812 23:58:56.692337 2257 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 12 23:58:56.692359 kubelet[2257]: I0812 23:58:56.692365 2257 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 12 23:58:56.692513 kubelet[2257]: E0812 23:58:56.692414 2257 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:58:56.766446 kubelet[2257]: E0812 23:58:56.766236 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:56.792838 kubelet[2257]: E0812 23:58:56.792790 2257 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 12 23:58:56.867247 kubelet[2257]: E0812 23:58:56.867128 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:56.869682 kubelet[2257]: E0812 23:58:56.869620 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms"
Aug 12 23:58:56.908503 kubelet[2257]: W0812 23:58:56.908408 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Aug 12 23:58:56.908503 kubelet[2257]: E0812 23:58:56.908475 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:58:56.955116 kubelet[2257]: I0812 23:58:56.955036 2257 policy_none.go:49] "None policy: Start"
Aug 12 23:58:56.955820 kubelet[2257]: I0812 23:58:56.955783 2257 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 12 23:58:56.955820 kubelet[2257]: I0812 23:58:56.955812 2257 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:58:56.968179 kubelet[2257]: E0812 23:58:56.968101 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:58:56.981004 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:58:56.993443 kubelet[2257]: E0812 23:58:56.993330 2257 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 12 23:58:56.997372 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 12 23:58:57.000573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 12 23:58:57.009042 kubelet[2257]: I0812 23:58:57.008536 2257 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:58:57.009042 kubelet[2257]: I0812 23:58:57.008803 2257 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:58:57.009042 kubelet[2257]: I0812 23:58:57.008815 2257 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:58:57.009256 kubelet[2257]: I0812 23:58:57.009242 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:58:57.010806 kubelet[2257]: E0812 23:58:57.010550 2257 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:58:57.110193 kubelet[2257]: I0812 23:58:57.110135 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:57.110833 kubelet[2257]: E0812 23:58:57.110803 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Aug 12 23:58:57.270797 kubelet[2257]: E0812 23:58:57.270751 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Aug 12 23:58:57.313158 kubelet[2257]: I0812 23:58:57.312943 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:57.313318 kubelet[2257]: E0812 23:58:57.313281 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Aug 12 23:58:57.401917 systemd[1]: Created slice 
kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. Aug 12 23:58:57.420871 systemd[1]: Created slice kubepods-burstable-pod0ea919629311fabb19135713f7ffc308.slice - libcontainer container kubepods-burstable-pod0ea919629311fabb19135713f7ffc308.slice. Aug 12 23:58:57.424109 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 12 23:58:57.472253 kubelet[2257]: I0812 23:58:57.471555 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:58:57.472253 kubelet[2257]: I0812 23:58:57.471595 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:57.472253 kubelet[2257]: I0812 23:58:57.471617 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:57.472253 kubelet[2257]: I0812 23:58:57.471634 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:57.472253 kubelet[2257]: I0812 23:58:57.471662 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:57.472495 kubelet[2257]: I0812 23:58:57.471680 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:57.472495 kubelet[2257]: I0812 23:58:57.471696 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:58:57.472495 kubelet[2257]: I0812 23:58:57.471710 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:57.472495 kubelet[2257]: I0812 23:58:57.471728 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:58:57.596705 kubelet[2257]: W0812 23:58:57.596495 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 12 23:58:57.596705 kubelet[2257]: E0812 23:58:57.596575 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:57.676199 kubelet[2257]: W0812 23:58:57.676120 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 12 23:58:57.676347 kubelet[2257]: E0812 23:58:57.676202 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:57.714821 kubelet[2257]: I0812 23:58:57.714787 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:57.715214 kubelet[2257]: E0812 23:58:57.715171 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 
10.0.0.137:6443: connect: connection refused" node="localhost" Aug 12 23:58:57.718469 kubelet[2257]: E0812 23:58:57.718431 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.719038 containerd[1504]: time="2025-08-12T23:58:57.719005569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:57.723300 kubelet[2257]: E0812 23:58:57.723259 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.723759 containerd[1504]: time="2025-08-12T23:58:57.723704673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea919629311fabb19135713f7ffc308,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:57.726417 kubelet[2257]: E0812 23:58:57.726376 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.726880 containerd[1504]: time="2025-08-12T23:58:57.726838063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:57.754379 containerd[1504]: time="2025-08-12T23:58:57.754271154Z" level=info msg="connecting to shim 46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea" address="unix:///run/containerd/s/3517045bde955f84d119df5ababd881371252d12e14f34691e243539e21463f1" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:58:57.781103 containerd[1504]: time="2025-08-12T23:58:57.781055700Z" level=info msg="connecting to shim 21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7" 
address="unix:///run/containerd/s/bb55bcaba676d535b5d407fc5ff58d4d6b41fcc523147bcb200cd5ae3ea4757f" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:58:57.783893 systemd[1]: Started cri-containerd-46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea.scope - libcontainer container 46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea. Aug 12 23:58:57.801761 containerd[1504]: time="2025-08-12T23:58:57.801709199Z" level=info msg="connecting to shim 410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed" address="unix:///run/containerd/s/d9df62531abe8ac31dc3b3231d991d98f31faf177f0c1aa6a565b2514e45f7da" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:58:57.813884 systemd[1]: Started cri-containerd-21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7.scope - libcontainer container 21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7. Aug 12 23:58:57.838888 systemd[1]: Started cri-containerd-410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed.scope - libcontainer container 410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed. 
Aug 12 23:58:57.845710 containerd[1504]: time="2025-08-12T23:58:57.845644503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea\"" Aug 12 23:58:57.847948 kubelet[2257]: E0812 23:58:57.847847 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.854155 containerd[1504]: time="2025-08-12T23:58:57.852298576Z" level=info msg="CreateContainer within sandbox \"46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:58:57.866669 containerd[1504]: time="2025-08-12T23:58:57.865627949Z" level=info msg="Container 0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:58:57.873873 containerd[1504]: time="2025-08-12T23:58:57.873825787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7\"" Aug 12 23:58:57.874861 kubelet[2257]: E0812 23:58:57.874815 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.877142 containerd[1504]: time="2025-08-12T23:58:57.877083536Z" level=info msg="CreateContainer within sandbox \"46c9e6959feb8c49c1fbc4ac8cf75e62f1a449a9b56b3394290bf685c78887ea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa\"" Aug 12 23:58:57.878193 containerd[1504]: 
time="2025-08-12T23:58:57.878142604Z" level=info msg="StartContainer for \"0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa\"" Aug 12 23:58:57.878272 containerd[1504]: time="2025-08-12T23:58:57.878229555Z" level=info msg="CreateContainer within sandbox \"21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:58:57.879339 containerd[1504]: time="2025-08-12T23:58:57.879272683Z" level=info msg="connecting to shim 0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa" address="unix:///run/containerd/s/3517045bde955f84d119df5ababd881371252d12e14f34691e243539e21463f1" protocol=ttrpc version=3 Aug 12 23:58:57.891963 containerd[1504]: time="2025-08-12T23:58:57.891834478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea919629311fabb19135713f7ffc308,Namespace:kube-system,Attempt:0,} returns sandbox id \"410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed\"" Aug 12 23:58:57.892762 kubelet[2257]: E0812 23:58:57.892742 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:57.894334 containerd[1504]: time="2025-08-12T23:58:57.894283757Z" level=info msg="CreateContainer within sandbox \"410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:58:57.897983 containerd[1504]: time="2025-08-12T23:58:57.897762907Z" level=info msg="Container 63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:58:57.900933 systemd[1]: Started cri-containerd-0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa.scope - libcontainer container 0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa. 
Aug 12 23:58:57.905452 containerd[1504]: time="2025-08-12T23:58:57.905235623Z" level=info msg="Container 22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:58:57.911000 containerd[1504]: time="2025-08-12T23:58:57.910940727Z" level=info msg="CreateContainer within sandbox \"21a6fe66da40f4e921ef94cf0f7fb1ff7e80254fe4ddf5c4c60797ca3bbe1bb7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5\"" Aug 12 23:58:57.911666 containerd[1504]: time="2025-08-12T23:58:57.911621274Z" level=info msg="StartContainer for \"63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5\"" Aug 12 23:58:57.913005 containerd[1504]: time="2025-08-12T23:58:57.912973115Z" level=info msg="connecting to shim 63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5" address="unix:///run/containerd/s/bb55bcaba676d535b5d407fc5ff58d4d6b41fcc523147bcb200cd5ae3ea4757f" protocol=ttrpc version=3 Aug 12 23:58:57.916909 containerd[1504]: time="2025-08-12T23:58:57.916858262Z" level=info msg="CreateContainer within sandbox \"410bd8861bafd299ae88f40d7cad5392c2eea0f5997cf325b6c3d9011d1f44ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5\"" Aug 12 23:58:57.917486 containerd[1504]: time="2025-08-12T23:58:57.917452619Z" level=info msg="StartContainer for \"22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5\"" Aug 12 23:58:57.919118 containerd[1504]: time="2025-08-12T23:58:57.919082614Z" level=info msg="connecting to shim 22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5" address="unix:///run/containerd/s/d9df62531abe8ac31dc3b3231d991d98f31faf177f0c1aa6a565b2514e45f7da" protocol=ttrpc version=3 Aug 12 23:58:57.935906 systemd[1]: Started 
cri-containerd-63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5.scope - libcontainer container 63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5. Aug 12 23:58:57.949884 systemd[1]: Started cri-containerd-22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5.scope - libcontainer container 22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5. Aug 12 23:58:57.979703 containerd[1504]: time="2025-08-12T23:58:57.979667720Z" level=info msg="StartContainer for \"0c1e8ce708730a837538c2ecf7b7e742c28a8a978ef0109b84fcd7ec20e06faa\" returns successfully" Aug 12 23:58:57.984589 kubelet[2257]: W0812 23:58:57.984529 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 12 23:58:57.984930 kubelet[2257]: E0812 23:58:57.984599 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:58:58.008972 containerd[1504]: time="2025-08-12T23:58:58.008886176Z" level=info msg="StartContainer for \"63e2c96effaf7886c9285fbae745972dab85d5cf8d7a499724235b7fc0624ef5\" returns successfully" Aug 12 23:58:58.031476 containerd[1504]: time="2025-08-12T23:58:58.031420607Z" level=info msg="StartContainer for \"22cc50fc462c04d49f8f28fcd18338c05cff5aeca005525fb6005634a947c7d5\" returns successfully" Aug 12 23:58:58.071780 kubelet[2257]: E0812 23:58:58.071716 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection 
refused" interval="1.6s" Aug 12 23:58:58.516995 kubelet[2257]: I0812 23:58:58.516947 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:58:58.704367 kubelet[2257]: E0812 23:58:58.704330 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:58.705171 kubelet[2257]: E0812 23:58:58.705147 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:58.708239 kubelet[2257]: E0812 23:58:58.708218 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:59.710404 kubelet[2257]: E0812 23:58:59.710371 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:58:59.911664 kubelet[2257]: E0812 23:58:59.911086 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:59:00.027022 kubelet[2257]: I0812 23:59:00.026913 2257 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:59:00.027022 kubelet[2257]: E0812 23:59:00.026958 2257 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 12 23:59:00.049404 kubelet[2257]: E0812 23:59:00.049360 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:00.150332 kubelet[2257]: E0812 23:59:00.150282 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Aug 12 23:59:00.251038 kubelet[2257]: E0812 23:59:00.250996 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:00.351566 kubelet[2257]: E0812 23:59:00.351462 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:00.452014 kubelet[2257]: E0812 23:59:00.451940 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:00.552972 kubelet[2257]: E0812 23:59:00.552928 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:00.648800 kubelet[2257]: I0812 23:59:00.648215 2257 apiserver.go:52] "Watching apiserver" Aug 12 23:59:00.668026 kubelet[2257]: I0812 23:59:00.667991 2257 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:59:02.619203 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)... Aug 12 23:59:02.619224 systemd[1]: Reloading... Aug 12 23:59:02.636191 kubelet[2257]: E0812 23:59:02.636108 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:02.696780 zram_generator::config[2579]: No configuration found. Aug 12 23:59:02.714507 kubelet[2257]: E0812 23:59:02.714431 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:02.788352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:59:02.895044 systemd[1]: Reloading finished in 275 ms. 
Aug 12 23:59:02.925276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:02.926071 kubelet[2257]: I0812 23:59:02.925312 2257 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:59:02.945490 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:59:02.945754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:02.945828 systemd[1]: kubelet.service: Consumed 911ms CPU time, 126.9M memory peak. Aug 12 23:59:02.949945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:03.111436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:03.124075 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:59:03.170091 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:59:03.170091 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:59:03.170091 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:59:03.170091 kubelet[2620]: I0812 23:59:03.167880 2620 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:59:03.175163 kubelet[2620]: I0812 23:59:03.175113 2620 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:59:03.175163 kubelet[2620]: I0812 23:59:03.175149 2620 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:59:03.175432 kubelet[2620]: I0812 23:59:03.175411 2620 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:59:03.176898 kubelet[2620]: I0812 23:59:03.176867 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:59:03.178923 kubelet[2620]: I0812 23:59:03.178882 2620 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:59:03.185365 kubelet[2620]: I0812 23:59:03.185332 2620 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:59:03.188191 kubelet[2620]: I0812 23:59:03.188139 2620 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:59:03.188322 kubelet[2620]: I0812 23:59:03.188261 2620 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:59:03.188418 kubelet[2620]: I0812 23:59:03.188357 2620 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:59:03.188585 kubelet[2620]: I0812 23:59:03.188391 2620 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 12 23:59:03.188585 kubelet[2620]: I0812 23:59:03.188582 2620 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:59:03.188730 kubelet[2620]: I0812 23:59:03.188592 2620 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:59:03.188730 kubelet[2620]: I0812 23:59:03.188628 2620 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:03.188832 kubelet[2620]: I0812 23:59:03.188817 2620 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:59:03.188862 kubelet[2620]: I0812 23:59:03.188838 2620 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:59:03.188862 kubelet[2620]: I0812 23:59:03.188859 2620 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:59:03.188915 kubelet[2620]: I0812 23:59:03.188873 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:59:03.190359 kubelet[2620]: I0812 23:59:03.190113 2620 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 12 23:59:03.190731 kubelet[2620]: I0812 23:59:03.190666 2620 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:59:03.193663 kubelet[2620]: I0812 23:59:03.191247 2620 server.go:1274] "Started kubelet" Aug 12 23:59:03.193802 kubelet[2620]: I0812 23:59:03.193773 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:59:03.195737 kubelet[2620]: I0812 23:59:03.195692 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:59:03.196597 kubelet[2620]: I0812 23:59:03.196554 2620 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:59:03.199462 kubelet[2620]: I0812 23:59:03.199400 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:59:03.199627 kubelet[2620]: I0812 23:59:03.199610 2620 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:59:03.199870 kubelet[2620]: I0812 23:59:03.199848 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:59:03.200644 kubelet[2620]: I0812 23:59:03.200561 2620 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:59:03.204018 kubelet[2620]: I0812 23:59:03.203701 2620 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:59:03.204099 kubelet[2620]: E0812 23:59:03.204053 2620 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:03.205591 kubelet[2620]: I0812 23:59:03.205562 2620 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:59:03.206862 kubelet[2620]: E0812 23:59:03.206368 2620 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:59:03.213406 kubelet[2620]: I0812 23:59:03.212765 2620 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:59:03.213406 kubelet[2620]: I0812 23:59:03.212790 2620 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:59:03.213406 kubelet[2620]: I0812 23:59:03.212874 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:59:03.217163 kubelet[2620]: I0812 23:59:03.217115 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:59:03.219671 kubelet[2620]: I0812 23:59:03.219616 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:59:03.220500 kubelet[2620]: I0812 23:59:03.220256 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:59:03.220500 kubelet[2620]: I0812 23:59:03.220299 2620 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:59:03.220500 kubelet[2620]: E0812 23:59:03.220355 2620 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:59:03.257637 kubelet[2620]: I0812 23:59:03.257602 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:59:03.257637 kubelet[2620]: I0812 23:59:03.257628 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:59:03.257801 kubelet[2620]: I0812 23:59:03.257681 2620 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:03.257955 kubelet[2620]: I0812 23:59:03.257919 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:59:03.257955 kubelet[2620]: I0812 23:59:03.257939 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:59:03.258025 kubelet[2620]: I0812 23:59:03.257960 2620 policy_none.go:49] "None policy: Start" Aug 12 23:59:03.258770 kubelet[2620]: I0812 23:59:03.258744 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:59:03.258770 kubelet[2620]: I0812 23:59:03.258776 2620 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:59:03.258960 kubelet[2620]: I0812 23:59:03.258928 2620 state_mem.go:75] "Updated machine memory state" Aug 12 23:59:03.266105 kubelet[2620]: I0812 23:59:03.266067 2620 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:59:03.266338 kubelet[2620]: I0812 23:59:03.266266 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:59:03.266397 kubelet[2620]: I0812 23:59:03.266281 2620 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:59:03.266618 kubelet[2620]: I0812 23:59:03.266593 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:59:03.331924 kubelet[2620]: E0812 23:59:03.331889 2620 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:03.370606 kubelet[2620]: I0812 23:59:03.370579 2620 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:59:03.382910 kubelet[2620]: I0812 23:59:03.382877 2620 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 12 23:59:03.383040 kubelet[2620]: I0812 23:59:03.382973 2620 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:59:03.407091 kubelet[2620]: I0812 23:59:03.407044 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:03.407091 kubelet[2620]: I0812 23:59:03.407094 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:03.407266 kubelet[2620]: I0812 23:59:03.407120 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:03.407266 kubelet[2620]: I0812 23:59:03.407140 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:03.407266 kubelet[2620]: I0812 23:59:03.407158 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:03.407266 kubelet[2620]: I0812 23:59:03.407173 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:03.407266 kubelet[2620]: I0812 23:59:03.407190 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:03.407366 kubelet[2620]: I0812 23:59:03.407208 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") 
" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:03.407366 kubelet[2620]: I0812 23:59:03.407225 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:03.629103 kubelet[2620]: E0812 23:59:03.629057 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:03.632558 kubelet[2620]: E0812 23:59:03.632377 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:03.632558 kubelet[2620]: E0812 23:59:03.632511 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:04.191131 kubelet[2620]: I0812 23:59:04.190107 2620 apiserver.go:52] "Watching apiserver" Aug 12 23:59:04.205929 kubelet[2620]: I0812 23:59:04.204811 2620 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:59:04.242339 kubelet[2620]: E0812 23:59:04.241593 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:04.242339 kubelet[2620]: E0812 23:59:04.242309 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:04.404028 kubelet[2620]: E0812 23:59:04.403962 2620 kubelet.go:1915] "Failed creating a mirror 
pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:04.404265 kubelet[2620]: E0812 23:59:04.404166 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:04.411857 kubelet[2620]: I0812 23:59:04.411760 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.411742461 podStartE2EDuration="1.411742461s" podCreationTimestamp="2025-08-12 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:04.411644852 +0000 UTC m=+1.284221914" watchObservedRunningTime="2025-08-12 23:59:04.411742461 +0000 UTC m=+1.284319523" Aug 12 23:59:04.428262 kubelet[2620]: I0812 23:59:04.428075 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.428058186 podStartE2EDuration="2.428058186s" podCreationTimestamp="2025-08-12 23:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:04.427951572 +0000 UTC m=+1.300528634" watchObservedRunningTime="2025-08-12 23:59:04.428058186 +0000 UTC m=+1.300635208" Aug 12 23:59:05.245755 kubelet[2620]: E0812 23:59:05.245643 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:05.245755 kubelet[2620]: E0812 23:59:05.245726 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:09.260839 kubelet[2620]: I0812 23:59:09.260566 
2620 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:59:09.262626 kubelet[2620]: I0812 23:59:09.261238 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:59:09.262687 containerd[1504]: time="2025-08-12T23:59:09.261013677Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:59:09.858933 kubelet[2620]: I0812 23:59:09.857844 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.857825534 podStartE2EDuration="6.857825534s" podCreationTimestamp="2025-08-12 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:04.449295413 +0000 UTC m=+1.321872475" watchObservedRunningTime="2025-08-12 23:59:09.857825534 +0000 UTC m=+6.730402596" Aug 12 23:59:09.869583 systemd[1]: Created slice kubepods-besteffort-pod3069a554_e738_4e24_8a85_6d0942c9a158.slice - libcontainer container kubepods-besteffort-pod3069a554_e738_4e24_8a85_6d0942c9a158.slice. 
Aug 12 23:59:09.955571 kubelet[2620]: I0812 23:59:09.955496 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3069a554-e738-4e24-8a85-6d0942c9a158-lib-modules\") pod \"kube-proxy-dcv57\" (UID: \"3069a554-e738-4e24-8a85-6d0942c9a158\") " pod="kube-system/kube-proxy-dcv57" Aug 12 23:59:09.955571 kubelet[2620]: I0812 23:59:09.955555 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3069a554-e738-4e24-8a85-6d0942c9a158-kube-proxy\") pod \"kube-proxy-dcv57\" (UID: \"3069a554-e738-4e24-8a85-6d0942c9a158\") " pod="kube-system/kube-proxy-dcv57" Aug 12 23:59:09.955763 kubelet[2620]: I0812 23:59:09.955606 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3069a554-e738-4e24-8a85-6d0942c9a158-xtables-lock\") pod \"kube-proxy-dcv57\" (UID: \"3069a554-e738-4e24-8a85-6d0942c9a158\") " pod="kube-system/kube-proxy-dcv57" Aug 12 23:59:09.955763 kubelet[2620]: I0812 23:59:09.955671 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntst\" (UniqueName: \"kubernetes.io/projected/3069a554-e738-4e24-8a85-6d0942c9a158-kube-api-access-lntst\") pod \"kube-proxy-dcv57\" (UID: \"3069a554-e738-4e24-8a85-6d0942c9a158\") " pod="kube-system/kube-proxy-dcv57" Aug 12 23:59:10.065544 kubelet[2620]: E0812 23:59:10.065485 2620 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 12 23:59:10.065544 kubelet[2620]: E0812 23:59:10.065530 2620 projected.go:194] Error preparing data for projected volume kube-api-access-lntst for pod kube-system/kube-proxy-dcv57: configmap "kube-root-ca.crt" not found Aug 12 23:59:10.065747 kubelet[2620]: E0812 23:59:10.065601 2620 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3069a554-e738-4e24-8a85-6d0942c9a158-kube-api-access-lntst podName:3069a554-e738-4e24-8a85-6d0942c9a158 nodeName:}" failed. No retries permitted until 2025-08-12 23:59:10.565576736 +0000 UTC m=+7.438153798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lntst" (UniqueName: "kubernetes.io/projected/3069a554-e738-4e24-8a85-6d0942c9a158-kube-api-access-lntst") pod "kube-proxy-dcv57" (UID: "3069a554-e738-4e24-8a85-6d0942c9a158") : configmap "kube-root-ca.crt" not found Aug 12 23:59:10.441790 systemd[1]: Created slice kubepods-besteffort-podddcccf1f_dfbc_495a_a979_59763bed41bf.slice - libcontainer container kubepods-besteffort-podddcccf1f_dfbc_495a_a979_59763bed41bf.slice. Aug 12 23:59:10.459329 kubelet[2620]: I0812 23:59:10.459286 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgf9p\" (UniqueName: \"kubernetes.io/projected/ddcccf1f-dfbc-495a-a979-59763bed41bf-kube-api-access-lgf9p\") pod \"tigera-operator-5bf8dfcb4-xgzv4\" (UID: \"ddcccf1f-dfbc-495a-a979-59763bed41bf\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xgzv4" Aug 12 23:59:10.459827 kubelet[2620]: I0812 23:59:10.459774 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ddcccf1f-dfbc-495a-a979-59763bed41bf-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-xgzv4\" (UID: \"ddcccf1f-dfbc-495a-a979-59763bed41bf\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xgzv4" Aug 12 23:59:10.745782 containerd[1504]: time="2025-08-12T23:59:10.745497816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xgzv4,Uid:ddcccf1f-dfbc-495a-a979-59763bed41bf,Namespace:tigera-operator,Attempt:0,}" Aug 12 23:59:10.769446 containerd[1504]: time="2025-08-12T23:59:10.769389625Z" level=info 
msg="connecting to shim 4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd" address="unix:///run/containerd/s/28ebc6cad68bcc485fb784cbbf113d3126136d68ef66660010be9692c24e917c" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:10.781823 kubelet[2620]: E0812 23:59:10.781780 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:10.783454 containerd[1504]: time="2025-08-12T23:59:10.783379950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcv57,Uid:3069a554-e738-4e24-8a85-6d0942c9a158,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:10.797874 systemd[1]: Started cri-containerd-4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd.scope - libcontainer container 4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd. Aug 12 23:59:10.815706 containerd[1504]: time="2025-08-12T23:59:10.815098911Z" level=info msg="connecting to shim d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee" address="unix:///run/containerd/s/78dbed90bdd561f400e122299d3b3e5ca792189c00bb25d14714024482bb027b" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:10.846882 systemd[1]: Started cri-containerd-d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee.scope - libcontainer container d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee. 
Aug 12 23:59:10.863140 containerd[1504]: time="2025-08-12T23:59:10.863096208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xgzv4,Uid:ddcccf1f-dfbc-495a-a979-59763bed41bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd\"" Aug 12 23:59:10.865498 containerd[1504]: time="2025-08-12T23:59:10.865460688Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 12 23:59:10.938583 containerd[1504]: time="2025-08-12T23:59:10.938534234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcv57,Uid:3069a554-e738-4e24-8a85-6d0942c9a158,Namespace:kube-system,Attempt:0,} returns sandbox id \"d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee\"" Aug 12 23:59:10.939280 kubelet[2620]: E0812 23:59:10.939260 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:10.943187 containerd[1504]: time="2025-08-12T23:59:10.943148591Z" level=info msg="CreateContainer within sandbox \"d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:59:10.997555 containerd[1504]: time="2025-08-12T23:59:10.997388331Z" level=info msg="Container ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:11.005180 containerd[1504]: time="2025-08-12T23:59:11.005132562Z" level=info msg="CreateContainer within sandbox \"d88684035b8399c0bf5815c3709ead731aabf21f724ccec7c9f1a190f82d06ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3\"" Aug 12 23:59:11.005801 containerd[1504]: time="2025-08-12T23:59:11.005774669Z" level=info msg="StartContainer for 
\"ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3\"" Aug 12 23:59:11.007345 containerd[1504]: time="2025-08-12T23:59:11.007302246Z" level=info msg="connecting to shim ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3" address="unix:///run/containerd/s/78dbed90bdd561f400e122299d3b3e5ca792189c00bb25d14714024482bb027b" protocol=ttrpc version=3 Aug 12 23:59:11.027882 systemd[1]: Started cri-containerd-ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3.scope - libcontainer container ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3. Aug 12 23:59:11.068415 containerd[1504]: time="2025-08-12T23:59:11.068376787Z" level=info msg="StartContainer for \"ae714effa4d2729846d15e5a695258ed35af3cd65128d0f54b7b79b5cb11c8d3\" returns successfully" Aug 12 23:59:11.262107 kubelet[2620]: E0812 23:59:11.262005 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.868059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872079533.mount: Deactivated successfully. 
Aug 12 23:59:11.989695 kubelet[2620]: E0812 23:59:11.989225 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:12.034367 kubelet[2620]: I0812 23:59:12.034225 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dcv57" podStartSLOduration=3.034204856 podStartE2EDuration="3.034204856s" podCreationTimestamp="2025-08-12 23:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:11.273341142 +0000 UTC m=+8.145918204" watchObservedRunningTime="2025-08-12 23:59:12.034204856 +0000 UTC m=+8.906781918" Aug 12 23:59:12.262846 kubelet[2620]: E0812 23:59:12.262801 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:12.839067 kubelet[2620]: E0812 23:59:12.839032 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:13.264569 kubelet[2620]: E0812 23:59:13.264491 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:13.627200 containerd[1504]: time="2025-08-12T23:59:13.627082252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:13.628133 containerd[1504]: time="2025-08-12T23:59:13.628084088Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 12 23:59:13.629736 containerd[1504]: time="2025-08-12T23:59:13.629695237Z" level=info 
msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:13.632268 containerd[1504]: time="2025-08-12T23:59:13.631967514Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:13.632807 containerd[1504]: time="2025-08-12T23:59:13.632742079Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.767231693s" Aug 12 23:59:13.632807 containerd[1504]: time="2025-08-12T23:59:13.632784652Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 12 23:59:13.635190 containerd[1504]: time="2025-08-12T23:59:13.635155961Z" level=info msg="CreateContainer within sandbox \"4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 12 23:59:13.645720 containerd[1504]: time="2025-08-12T23:59:13.645553403Z" level=info msg="Container a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:13.649595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725861317.mount: Deactivated successfully. 
Aug 12 23:59:13.655259 containerd[1504]: time="2025-08-12T23:59:13.655202249Z" level=info msg="CreateContainer within sandbox \"4402c2f861b8dacd16e0f0906d4793540fb4ce94596349a185996320d753b2dd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce\"" Aug 12 23:59:13.656318 containerd[1504]: time="2025-08-12T23:59:13.656282550Z" level=info msg="StartContainer for \"a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce\"" Aug 12 23:59:13.657376 containerd[1504]: time="2025-08-12T23:59:13.657341325Z" level=info msg="connecting to shim a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce" address="unix:///run/containerd/s/28ebc6cad68bcc485fb784cbbf113d3126136d68ef66660010be9692c24e917c" protocol=ttrpc version=3 Aug 12 23:59:13.682954 systemd[1]: Started cri-containerd-a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce.scope - libcontainer container a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce. 
Aug 12 23:59:13.711475 containerd[1504]: time="2025-08-12T23:59:13.711440003Z" level=info msg="StartContainer for \"a3ba6933bd18e050bb7fe36c0d0694b9f806967318a023ca6c422403f640d8ce\" returns successfully" Aug 12 23:59:13.845308 kubelet[2620]: E0812 23:59:13.838743 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:14.268362 kubelet[2620]: E0812 23:59:14.268309 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:14.284434 kubelet[2620]: I0812 23:59:14.284327 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgzv4" podStartSLOduration=1.5152299930000002 podStartE2EDuration="4.284309258s" podCreationTimestamp="2025-08-12 23:59:10 +0000 UTC" firstStartedPulling="2025-08-12 23:59:10.864990553 +0000 UTC m=+7.737567615" lastFinishedPulling="2025-08-12 23:59:13.634069858 +0000 UTC m=+10.506646880" observedRunningTime="2025-08-12 23:59:14.283448241 +0000 UTC m=+11.156025303" watchObservedRunningTime="2025-08-12 23:59:14.284309258 +0000 UTC m=+11.156886280" Aug 12 23:59:18.449006 update_engine[1484]: I20250812 23:59:18.448439 1484 update_attempter.cc:509] Updating boot flags... Aug 12 23:59:19.530547 sudo[1711]: pam_unix(sudo:session): session closed for user root Aug 12 23:59:19.532141 sshd[1710]: Connection closed by 10.0.0.1 port 34510 Aug 12 23:59:19.537483 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:19.543099 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:34510.service: Deactivated successfully. Aug 12 23:59:19.548445 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:59:19.551779 systemd[1]: session-7.scope: Consumed 6.880s CPU time, 229.5M memory peak. 
Aug 12 23:59:19.555598 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:59:19.561172 systemd-logind[1481]: Removed session 7. Aug 12 23:59:23.479283 systemd[1]: Created slice kubepods-besteffort-pod450edd61_fdaa_4ec8_84c8_e0bd7897ca9a.slice - libcontainer container kubepods-besteffort-pod450edd61_fdaa_4ec8_84c8_e0bd7897ca9a.slice. Aug 12 23:59:23.549338 kubelet[2620]: I0812 23:59:23.549296 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/450edd61-fdaa-4ec8-84c8-e0bd7897ca9a-typha-certs\") pod \"calico-typha-7d9bd785b6-8mxhl\" (UID: \"450edd61-fdaa-4ec8-84c8-e0bd7897ca9a\") " pod="calico-system/calico-typha-7d9bd785b6-8mxhl" Aug 12 23:59:23.549828 kubelet[2620]: I0812 23:59:23.549343 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/450edd61-fdaa-4ec8-84c8-e0bd7897ca9a-tigera-ca-bundle\") pod \"calico-typha-7d9bd785b6-8mxhl\" (UID: \"450edd61-fdaa-4ec8-84c8-e0bd7897ca9a\") " pod="calico-system/calico-typha-7d9bd785b6-8mxhl" Aug 12 23:59:23.549828 kubelet[2620]: I0812 23:59:23.549382 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcdht\" (UniqueName: \"kubernetes.io/projected/450edd61-fdaa-4ec8-84c8-e0bd7897ca9a-kube-api-access-hcdht\") pod \"calico-typha-7d9bd785b6-8mxhl\" (UID: \"450edd61-fdaa-4ec8-84c8-e0bd7897ca9a\") " pod="calico-system/calico-typha-7d9bd785b6-8mxhl" Aug 12 23:59:23.669084 systemd[1]: Created slice kubepods-besteffort-pod5e8eca26_db0c_40c8_968b_abe1d2942b48.slice - libcontainer container kubepods-besteffort-pod5e8eca26_db0c_40c8_968b_abe1d2942b48.slice. 
Aug 12 23:59:23.750064 kubelet[2620]: I0812 23:59:23.749889 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-var-run-calico\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750064 kubelet[2620]: I0812 23:59:23.749946 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-policysync\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750064 kubelet[2620]: I0812 23:59:23.749976 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-flexvol-driver-host\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750064 kubelet[2620]: I0812 23:59:23.750000 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e8eca26-db0c-40c8-968b-abe1d2942b48-tigera-ca-bundle\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750064 kubelet[2620]: I0812 23:59:23.750016 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-xtables-lock\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750255 kubelet[2620]: I0812 23:59:23.750036 
2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-cni-log-dir\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750255 kubelet[2620]: I0812 23:59:23.750055 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-var-lib-calico\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750255 kubelet[2620]: I0812 23:59:23.750074 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e8eca26-db0c-40c8-968b-abe1d2942b48-node-certs\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750255 kubelet[2620]: I0812 23:59:23.750090 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dht6\" (UniqueName: \"kubernetes.io/projected/5e8eca26-db0c-40c8-968b-abe1d2942b48-kube-api-access-2dht6\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750255 kubelet[2620]: I0812 23:59:23.750107 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-cni-net-dir\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750363 kubelet[2620]: I0812 23:59:23.750123 2620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-cni-bin-dir\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.750363 kubelet[2620]: I0812 23:59:23.750139 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e8eca26-db0c-40c8-968b-abe1d2942b48-lib-modules\") pod \"calico-node-9dwkq\" (UID: \"5e8eca26-db0c-40c8-968b-abe1d2942b48\") " pod="calico-system/calico-node-9dwkq" Aug 12 23:59:23.789052 kubelet[2620]: E0812 23:59:23.789018 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:23.789812 containerd[1504]: time="2025-08-12T23:59:23.789761922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9bd785b6-8mxhl,Uid:450edd61-fdaa-4ec8-84c8-e0bd7897ca9a,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:23.869003 containerd[1504]: time="2025-08-12T23:59:23.868761311Z" level=info msg="connecting to shim ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37" address="unix:///run/containerd/s/f797dabeb041df10a0da9edd3ca967df9be1a316ce7f1f93d77ebc0473f90a71" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:23.901524 kubelet[2620]: E0812 23:59:23.900904 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460" Aug 12 23:59:23.935649 kubelet[2620]: E0812 23:59:23.935583 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input Aug 12 23:59:23.935649 kubelet[2620]: W0812 23:59:23.935607 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.935649 kubelet[2620]: E0812 23:59:23.935629 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.936109 kubelet[2620]: E0812 23:59:23.936081 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.936109 kubelet[2620]: W0812 23:59:23.936096 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.936109 kubelet[2620]: E0812 23:59:23.936108 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.936775 kubelet[2620]: E0812 23:59:23.936742 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.936775 kubelet[2620]: W0812 23:59:23.936758 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.936775 kubelet[2620]: E0812 23:59:23.936769 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.937139 kubelet[2620]: E0812 23:59:23.937109 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.937139 kubelet[2620]: W0812 23:59:23.937125 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.937139 kubelet[2620]: E0812 23:59:23.937136 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.938742 kubelet[2620]: E0812 23:59:23.938708 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.938742 kubelet[2620]: W0812 23:59:23.938732 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.938863 kubelet[2620]: E0812 23:59:23.938747 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.939576 kubelet[2620]: E0812 23:59:23.939024 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.939576 kubelet[2620]: W0812 23:59:23.939036 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.939576 kubelet[2620]: E0812 23:59:23.939046 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.939576 kubelet[2620]: E0812 23:59:23.939215 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.939576 kubelet[2620]: W0812 23:59:23.939231 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.939576 kubelet[2620]: E0812 23:59:23.939239 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.939783 kubelet[2620]: E0812 23:59:23.939705 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.939783 kubelet[2620]: W0812 23:59:23.939718 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.939783 kubelet[2620]: E0812 23:59:23.939732 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.940940 kubelet[2620]: E0812 23:59:23.940586 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.940940 kubelet[2620]: W0812 23:59:23.940604 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.940940 kubelet[2620]: E0812 23:59:23.940617 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.941799 kubelet[2620]: E0812 23:59:23.941766 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.941799 kubelet[2620]: W0812 23:59:23.941791 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.941871 kubelet[2620]: E0812 23:59:23.941808 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.942151 kubelet[2620]: E0812 23:59:23.942119 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.942151 kubelet[2620]: W0812 23:59:23.942134 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.942151 kubelet[2620]: E0812 23:59:23.942146 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.942573 kubelet[2620]: E0812 23:59:23.942546 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.942606 kubelet[2620]: W0812 23:59:23.942576 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.942606 kubelet[2620]: E0812 23:59:23.942592 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.942849 kubelet[2620]: E0812 23:59:23.942800 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.942849 kubelet[2620]: W0812 23:59:23.942836 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.942849 kubelet[2620]: E0812 23:59:23.942849 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.943066 kubelet[2620]: E0812 23:59:23.943015 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.943066 kubelet[2620]: W0812 23:59:23.943039 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.943121 kubelet[2620]: E0812 23:59:23.943078 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.943323 kubelet[2620]: E0812 23:59:23.943306 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.943357 kubelet[2620]: W0812 23:59:23.943322 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.943357 kubelet[2620]: E0812 23:59:23.943333 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.943636 kubelet[2620]: E0812 23:59:23.943604 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.943636 kubelet[2620]: W0812 23:59:23.943623 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.943636 kubelet[2620]: E0812 23:59:23.943636 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.944144 kubelet[2620]: E0812 23:59:23.944120 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.944144 kubelet[2620]: W0812 23:59:23.944138 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.944759 kubelet[2620]: E0812 23:59:23.944732 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.945693 kubelet[2620]: E0812 23:59:23.945553 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.945951 systemd[1]: Started cri-containerd-ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37.scope - libcontainer container ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37. 
Aug 12 23:59:23.946125 kubelet[2620]: W0812 23:59:23.946099 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.946157 kubelet[2620]: E0812 23:59:23.946131 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.947615 kubelet[2620]: E0812 23:59:23.947574 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.948015 kubelet[2620]: W0812 23:59:23.947974 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.948015 kubelet[2620]: E0812 23:59:23.948006 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.948881 kubelet[2620]: E0812 23:59:23.948851 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.948881 kubelet[2620]: W0812 23:59:23.948874 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.948978 kubelet[2620]: E0812 23:59:23.948889 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.952613 kubelet[2620]: E0812 23:59:23.952582 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.952613 kubelet[2620]: W0812 23:59:23.952602 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.952613 kubelet[2620]: E0812 23:59:23.952621 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.952813 kubelet[2620]: I0812 23:59:23.952669 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dee9ceaa-e81b-402b-8ee3-70bd7c437460-registration-dir\") pod \"csi-node-driver-8g89f\" (UID: \"dee9ceaa-e81b-402b-8ee3-70bd7c437460\") " pod="calico-system/csi-node-driver-8g89f" Aug 12 23:59:23.952926 kubelet[2620]: E0812 23:59:23.952872 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.952926 kubelet[2620]: W0812 23:59:23.952890 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.952926 kubelet[2620]: E0812 23:59:23.952905 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.953078 kubelet[2620]: E0812 23:59:23.953064 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.953104 kubelet[2620]: W0812 23:59:23.953093 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.953126 kubelet[2620]: E0812 23:59:23.953110 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.953357 kubelet[2620]: E0812 23:59:23.953337 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.953393 kubelet[2620]: W0812 23:59:23.953356 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.953393 kubelet[2620]: E0812 23:59:23.953369 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.953728 kubelet[2620]: I0812 23:59:23.953708 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dee9ceaa-e81b-402b-8ee3-70bd7c437460-socket-dir\") pod \"csi-node-driver-8g89f\" (UID: \"dee9ceaa-e81b-402b-8ee3-70bd7c437460\") " pod="calico-system/csi-node-driver-8g89f" Aug 12 23:59:23.954637 kubelet[2620]: E0812 23:59:23.954597 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.954637 kubelet[2620]: W0812 23:59:23.954618 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.955724 kubelet[2620]: E0812 23:59:23.955687 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.955791 kubelet[2620]: I0812 23:59:23.955726 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dee9ceaa-e81b-402b-8ee3-70bd7c437460-kubelet-dir\") pod \"csi-node-driver-8g89f\" (UID: \"dee9ceaa-e81b-402b-8ee3-70bd7c437460\") " pod="calico-system/csi-node-driver-8g89f" Aug 12 23:59:23.955990 kubelet[2620]: E0812 23:59:23.955954 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.955990 kubelet[2620]: W0812 23:59:23.955970 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.956073 kubelet[2620]: E0812 23:59:23.956044 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.956098 kubelet[2620]: I0812 23:59:23.956088 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7vc\" (UniqueName: \"kubernetes.io/projected/dee9ceaa-e81b-402b-8ee3-70bd7c437460-kube-api-access-9f7vc\") pod \"csi-node-driver-8g89f\" (UID: \"dee9ceaa-e81b-402b-8ee3-70bd7c437460\") " pod="calico-system/csi-node-driver-8g89f" Aug 12 23:59:23.956290 kubelet[2620]: E0812 23:59:23.956261 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.956290 kubelet[2620]: W0812 23:59:23.956276 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.956361 kubelet[2620]: E0812 23:59:23.956317 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.956454 kubelet[2620]: E0812 23:59:23.956431 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.956454 kubelet[2620]: W0812 23:59:23.956445 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.956508 kubelet[2620]: E0812 23:59:23.956456 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.956722 kubelet[2620]: E0812 23:59:23.956678 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.956722 kubelet[2620]: W0812 23:59:23.956693 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.956722 kubelet[2620]: E0812 23:59:23.956710 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.956933 kubelet[2620]: I0812 23:59:23.956729 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dee9ceaa-e81b-402b-8ee3-70bd7c437460-varrun\") pod \"csi-node-driver-8g89f\" (UID: \"dee9ceaa-e81b-402b-8ee3-70bd7c437460\") " pod="calico-system/csi-node-driver-8g89f" Aug 12 23:59:23.957815 kubelet[2620]: E0812 23:59:23.957785 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.957815 kubelet[2620]: W0812 23:59:23.957809 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.957907 kubelet[2620]: E0812 23:59:23.957844 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.958467 kubelet[2620]: E0812 23:59:23.958436 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.958467 kubelet[2620]: W0812 23:59:23.958456 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.958523 kubelet[2620]: E0812 23:59:23.958474 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.958842 kubelet[2620]: E0812 23:59:23.958817 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.958842 kubelet[2620]: W0812 23:59:23.958842 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.958897 kubelet[2620]: E0812 23:59:23.958859 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.959005 kubelet[2620]: E0812 23:59:23.958991 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.959005 kubelet[2620]: W0812 23:59:23.959003 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.959057 kubelet[2620]: E0812 23:59:23.959012 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.959172 kubelet[2620]: E0812 23:59:23.959160 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.959202 kubelet[2620]: W0812 23:59:23.959172 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.959202 kubelet[2620]: E0812 23:59:23.959181 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:23.959351 kubelet[2620]: E0812 23:59:23.959339 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:23.959381 kubelet[2620]: W0812 23:59:23.959351 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:23.959381 kubelet[2620]: E0812 23:59:23.959360 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:23.974692 containerd[1504]: time="2025-08-12T23:59:23.974518429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dwkq,Uid:5e8eca26-db0c-40c8-968b-abe1d2942b48,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:23.998533 containerd[1504]: time="2025-08-12T23:59:23.998477787Z" level=info msg="connecting to shim 076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd" address="unix:///run/containerd/s/a9fcfa5056c25776c8e6aba0a8a9bdcb8adcfec8991d6ec4437b69e80ce990f6" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:24.034394 containerd[1504]: time="2025-08-12T23:59:24.033361220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9bd785b6-8mxhl,Uid:450edd61-fdaa-4ec8-84c8-e0bd7897ca9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37\"" Aug 12 23:59:24.035422 kubelet[2620]: E0812 23:59:24.035322 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:24.037583 containerd[1504]: time="2025-08-12T23:59:24.037533018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 12 
23:59:24.048105 systemd[1]: Started cri-containerd-076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd.scope - libcontainer container 076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd. Aug 12 23:59:24.060848 kubelet[2620]: E0812 23:59:24.060795 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.060848 kubelet[2620]: W0812 23:59:24.060821 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.060991 kubelet[2620]: E0812 23:59:24.060865 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.061168 kubelet[2620]: E0812 23:59:24.061141 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.061168 kubelet[2620]: W0812 23:59:24.061156 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.061232 kubelet[2620]: E0812 23:59:24.061173 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.061440 kubelet[2620]: E0812 23:59:24.061417 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.061440 kubelet[2620]: W0812 23:59:24.061434 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.061508 kubelet[2620]: E0812 23:59:24.061449 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.061716 kubelet[2620]: E0812 23:59:24.061691 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.061716 kubelet[2620]: W0812 23:59:24.061708 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.061790 kubelet[2620]: E0812 23:59:24.061727 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.061977 kubelet[2620]: E0812 23:59:24.061951 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.061977 kubelet[2620]: W0812 23:59:24.061965 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.062033 kubelet[2620]: E0812 23:59:24.061983 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.062177 kubelet[2620]: E0812 23:59:24.062155 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.062217 kubelet[2620]: W0812 23:59:24.062181 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.062217 kubelet[2620]: E0812 23:59:24.062198 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.062396 kubelet[2620]: E0812 23:59:24.062372 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.062396 kubelet[2620]: W0812 23:59:24.062384 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.062396 kubelet[2620]: E0812 23:59:24.062397 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.062623 kubelet[2620]: E0812 23:59:24.062596 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.062623 kubelet[2620]: W0812 23:59:24.062611 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.062698 kubelet[2620]: E0812 23:59:24.062638 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.062838 kubelet[2620]: E0812 23:59:24.062803 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.062838 kubelet[2620]: W0812 23:59:24.062829 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.062908 kubelet[2620]: E0812 23:59:24.062859 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.063031 kubelet[2620]: E0812 23:59:24.063007 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.063031 kubelet[2620]: W0812 23:59:24.063020 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.063083 kubelet[2620]: E0812 23:59:24.063040 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.064194 kubelet[2620]: E0812 23:59:24.064027 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.064194 kubelet[2620]: W0812 23:59:24.064046 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.064194 kubelet[2620]: E0812 23:59:24.064098 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.064468 kubelet[2620]: E0812 23:59:24.064449 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.064468 kubelet[2620]: W0812 23:59:24.064466 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.064708 kubelet[2620]: E0812 23:59:24.064687 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.065036 kubelet[2620]: E0812 23:59:24.065014 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.065036 kubelet[2620]: W0812 23:59:24.065034 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.065128 kubelet[2620]: E0812 23:59:24.065110 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.065337 kubelet[2620]: E0812 23:59:24.065292 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.065366 kubelet[2620]: W0812 23:59:24.065338 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.065510 kubelet[2620]: E0812 23:59:24.065470 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.065556 kubelet[2620]: E0812 23:59:24.065544 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.065703 kubelet[2620]: W0812 23:59:24.065583 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.066088 kubelet[2620]: E0812 23:59:24.066062 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.067043 kubelet[2620]: E0812 23:59:24.067016 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.067043 kubelet[2620]: W0812 23:59:24.067037 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.067132 kubelet[2620]: E0812 23:59:24.067095 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.067246 kubelet[2620]: E0812 23:59:24.067230 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.067278 kubelet[2620]: W0812 23:59:24.067247 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.067326 kubelet[2620]: E0812 23:59:24.067306 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.067429 kubelet[2620]: E0812 23:59:24.067415 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.067429 kubelet[2620]: W0812 23:59:24.067427 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.067479 kubelet[2620]: E0812 23:59:24.067455 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.067737 kubelet[2620]: E0812 23:59:24.067683 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.067737 kubelet[2620]: W0812 23:59:24.067699 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.067901 kubelet[2620]: E0812 23:59:24.067757 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.067901 kubelet[2620]: E0812 23:59:24.067868 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.067901 kubelet[2620]: W0812 23:59:24.067877 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.067901 kubelet[2620]: E0812 23:59:24.067894 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.068132 kubelet[2620]: E0812 23:59:24.068116 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.068132 kubelet[2620]: W0812 23:59:24.068129 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.068179 kubelet[2620]: E0812 23:59:24.068145 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.069470 kubelet[2620]: E0812 23:59:24.068310 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.069470 kubelet[2620]: W0812 23:59:24.068323 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.069470 kubelet[2620]: E0812 23:59:24.068332 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.069470 kubelet[2620]: E0812 23:59:24.069381 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.069470 kubelet[2620]: W0812 23:59:24.069404 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.069470 kubelet[2620]: E0812 23:59:24.069445 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.069708 kubelet[2620]: E0812 23:59:24.069681 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.069708 kubelet[2620]: W0812 23:59:24.069692 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.070025 kubelet[2620]: E0812 23:59:24.069763 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:24.070025 kubelet[2620]: E0812 23:59:24.069853 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.070025 kubelet[2620]: W0812 23:59:24.069863 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.070025 kubelet[2620]: E0812 23:59:24.069873 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.089315 kubelet[2620]: E0812 23:59:24.089283 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:24.089315 kubelet[2620]: W0812 23:59:24.089303 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:24.089315 kubelet[2620]: E0812 23:59:24.089325 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:24.104185 containerd[1504]: time="2025-08-12T23:59:24.104135312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dwkq,Uid:5e8eca26-db0c-40c8-968b-abe1d2942b48,Namespace:calico-system,Attempt:0,} returns sandbox id \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\"" Aug 12 23:59:24.960112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959249233.mount: Deactivated successfully. 
Aug 12 23:59:25.221399 kubelet[2620]: E0812 23:59:25.221196 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460" Aug 12 23:59:25.431114 containerd[1504]: time="2025-08-12T23:59:25.431053773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:25.431674 containerd[1504]: time="2025-08-12T23:59:25.431596828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 12 23:59:25.432309 containerd[1504]: time="2025-08-12T23:59:25.432258542Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:25.434462 containerd[1504]: time="2025-08-12T23:59:25.434060975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:25.434896 containerd[1504]: time="2025-08-12T23:59:25.434868915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.397282568s" Aug 12 23:59:25.434935 containerd[1504]: time="2025-08-12T23:59:25.434903401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 12 23:59:25.436887 containerd[1504]: time="2025-08-12T23:59:25.436567050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:59:25.451176 containerd[1504]: time="2025-08-12T23:59:25.451121295Z" level=info msg="CreateContainer within sandbox \"ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 12 23:59:25.458909 containerd[1504]: time="2025-08-12T23:59:25.458851876Z" level=info msg="Container a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:25.474489 containerd[1504]: time="2025-08-12T23:59:25.474380570Z" level=info msg="CreateContainer within sandbox \"ece4e7dde634b519e5707f2550698057465b383181965c03998c038f2dc52b37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe\"" Aug 12 23:59:25.475432 containerd[1504]: time="2025-08-12T23:59:25.475398426Z" level=info msg="StartContainer for \"a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe\"" Aug 12 23:59:25.478110 containerd[1504]: time="2025-08-12T23:59:25.478066249Z" level=info msg="connecting to shim a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe" address="unix:///run/containerd/s/f797dabeb041df10a0da9edd3ca967df9be1a316ce7f1f93d77ebc0473f90a71" protocol=ttrpc version=3 Aug 12 23:59:25.501902 systemd[1]: Started cri-containerd-a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe.scope - libcontainer container a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe. 
Aug 12 23:59:25.628555 containerd[1504]: time="2025-08-12T23:59:25.628480663Z" level=info msg="StartContainer for \"a88812f9b0ae3c962bc8008dec0fbe12a2fd4dc5337d9731999d920ca9e80fbe\" returns successfully" Aug 12 23:59:26.297531 kubelet[2620]: E0812 23:59:26.297051 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.369008 kubelet[2620]: E0812 23:59:26.368973 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.369287 kubelet[2620]: W0812 23:59:26.369154 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.369287 kubelet[2620]: E0812 23:59:26.369182 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.369516 kubelet[2620]: E0812 23:59:26.369355 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.369516 kubelet[2620]: W0812 23:59:26.369365 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.369516 kubelet[2620]: E0812 23:59:26.369374 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.369686 kubelet[2620]: E0812 23:59:26.369674 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.369843 kubelet[2620]: W0812 23:59:26.369740 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.369843 kubelet[2620]: E0812 23:59:26.369756 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.369970 kubelet[2620]: E0812 23:59:26.369958 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.370108 kubelet[2620]: W0812 23:59:26.370016 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.373468 kubelet[2620]: E0812 23:59:26.373438 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.373912 kubelet[2620]: E0812 23:59:26.373809 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.373912 kubelet[2620]: W0812 23:59:26.373824 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.373912 kubelet[2620]: E0812 23:59:26.373836 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.374082 kubelet[2620]: E0812 23:59:26.374071 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.374151 kubelet[2620]: W0812 23:59:26.374138 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.374205 kubelet[2620]: E0812 23:59:26.374194 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.374515 kubelet[2620]: E0812 23:59:26.374412 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.374515 kubelet[2620]: W0812 23:59:26.374423 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.374515 kubelet[2620]: E0812 23:59:26.374436 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.374684 kubelet[2620]: E0812 23:59:26.374672 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.374755 kubelet[2620]: W0812 23:59:26.374743 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.374827 kubelet[2620]: E0812 23:59:26.374817 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.375049 kubelet[2620]: E0812 23:59:26.375035 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.375121 kubelet[2620]: W0812 23:59:26.375109 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.375175 kubelet[2620]: E0812 23:59:26.375164 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.375467 kubelet[2620]: E0812 23:59:26.375376 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.375467 kubelet[2620]: W0812 23:59:26.375388 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.375467 kubelet[2620]: E0812 23:59:26.375397 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.375625 kubelet[2620]: E0812 23:59:26.375613 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.375724 kubelet[2620]: W0812 23:59:26.375711 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.375788 kubelet[2620]: E0812 23:59:26.375776 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.375982 kubelet[2620]: E0812 23:59:26.375970 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.376157 kubelet[2620]: W0812 23:59:26.376041 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.376157 kubelet[2620]: E0812 23:59:26.376056 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.376298 kubelet[2620]: E0812 23:59:26.376285 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.376353 kubelet[2620]: W0812 23:59:26.376342 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.376504 kubelet[2620]: E0812 23:59:26.376402 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.376638 kubelet[2620]: E0812 23:59:26.376625 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.376800 kubelet[2620]: W0812 23:59:26.376784 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.376949 kubelet[2620]: E0812 23:59:26.376850 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.377048 kubelet[2620]: E0812 23:59:26.377037 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.377126 kubelet[2620]: W0812 23:59:26.377095 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.377126 kubelet[2620]: E0812 23:59:26.377109 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.377529 kubelet[2620]: E0812 23:59:26.377486 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.377529 kubelet[2620]: W0812 23:59:26.377499 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.377529 kubelet[2620]: E0812 23:59:26.377510 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.378235 kubelet[2620]: E0812 23:59:26.378203 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.378235 kubelet[2620]: W0812 23:59:26.378218 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.378430 kubelet[2620]: E0812 23:59:26.378311 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.378869 kubelet[2620]: E0812 23:59:26.378834 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.378869 kubelet[2620]: W0812 23:59:26.378851 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.379067 kubelet[2620]: E0812 23:59:26.378971 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.379314 kubelet[2620]: E0812 23:59:26.379283 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.379314 kubelet[2620]: W0812 23:59:26.379297 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.379463 kubelet[2620]: E0812 23:59:26.379404 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.379747 kubelet[2620]: E0812 23:59:26.379717 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.379747 kubelet[2620]: W0812 23:59:26.379730 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.379938 kubelet[2620]: E0812 23:59:26.379911 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.380189 kubelet[2620]: E0812 23:59:26.380159 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.380189 kubelet[2620]: W0812 23:59:26.380173 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.380404 kubelet[2620]: E0812 23:59:26.380349 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.380559 kubelet[2620]: E0812 23:59:26.380514 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.380559 kubelet[2620]: W0812 23:59:26.380527 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.380559 kubelet[2620]: E0812 23:59:26.380553 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.380908 kubelet[2620]: E0812 23:59:26.380894 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.380987 kubelet[2620]: W0812 23:59:26.380967 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.381059 kubelet[2620]: E0812 23:59:26.381046 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.381328 kubelet[2620]: E0812 23:59:26.381314 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.381400 kubelet[2620]: W0812 23:59:26.381387 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.381460 kubelet[2620]: E0812 23:59:26.381449 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.381760 kubelet[2620]: E0812 23:59:26.381746 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.381841 kubelet[2620]: W0812 23:59:26.381829 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.381904 kubelet[2620]: E0812 23:59:26.381895 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.382156 kubelet[2620]: E0812 23:59:26.382144 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.382232 kubelet[2620]: W0812 23:59:26.382219 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.382475 kubelet[2620]: E0812 23:59:26.382459 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.382829 kubelet[2620]: E0812 23:59:26.382815 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.383023 kubelet[2620]: W0812 23:59:26.382888 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.383023 kubelet[2620]: E0812 23:59:26.382908 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.383168 kubelet[2620]: E0812 23:59:26.383154 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.383260 kubelet[2620]: W0812 23:59:26.383234 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.383376 kubelet[2620]: E0812 23:59:26.383350 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.383847 kubelet[2620]: E0812 23:59:26.383828 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.383847 kubelet[2620]: W0812 23:59:26.383844 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.383998 kubelet[2620]: E0812 23:59:26.383913 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.384068 kubelet[2620]: E0812 23:59:26.383996 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.384068 kubelet[2620]: W0812 23:59:26.384005 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.384163 kubelet[2620]: E0812 23:59:26.384138 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.384163 kubelet[2620]: W0812 23:59:26.384145 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.384207 kubelet[2620]: E0812 23:59:26.384162 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.384291 kubelet[2620]: E0812 23:59:26.384042 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.384374 kubelet[2620]: E0812 23:59:26.384358 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.384406 kubelet[2620]: W0812 23:59:26.384373 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.384406 kubelet[2620]: E0812 23:59:26.384385 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:59:26.384746 kubelet[2620]: E0812 23:59:26.384734 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:59:26.384782 kubelet[2620]: W0812 23:59:26.384747 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:59:26.384782 kubelet[2620]: E0812 23:59:26.384757 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:59:26.914236 containerd[1504]: time="2025-08-12T23:59:26.914078310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:26.915450 containerd[1504]: time="2025-08-12T23:59:26.915423853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 12 23:59:26.917605 containerd[1504]: time="2025-08-12T23:59:26.917037201Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:26.927671 containerd[1504]: time="2025-08-12T23:59:26.927590872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:26.929174 containerd[1504]: time="2025-08-12T23:59:26.929114884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.492512268s" Aug 12 23:59:26.929174 containerd[1504]: time="2025-08-12T23:59:26.929156051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:59:26.931576 containerd[1504]: time="2025-08-12T23:59:26.931511602Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:59:26.974133 containerd[1504]: time="2025-08-12T23:59:26.974090105Z" level=info msg="Container 991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:26.984216 containerd[1504]: time="2025-08-12T23:59:26.984163936Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\"" Aug 12 23:59:26.984938 containerd[1504]: time="2025-08-12T23:59:26.984905899Z" level=info msg="StartContainer for \"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\"" Aug 12 23:59:26.986365 containerd[1504]: time="2025-08-12T23:59:26.986301770Z" level=info msg="connecting to shim 991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00" address="unix:///run/containerd/s/a9fcfa5056c25776c8e6aba0a8a9bdcb8adcfec8991d6ec4437b69e80ce990f6" protocol=ttrpc version=3 Aug 12 23:59:27.025737 systemd[1]: Started cri-containerd-991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00.scope - libcontainer container 991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00. Aug 12 23:59:27.139183 systemd[1]: cri-containerd-991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00.scope: Deactivated successfully. 
Aug 12 23:59:27.197920 containerd[1504]: time="2025-08-12T23:59:27.197870507Z" level=info msg="StartContainer for \"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\" returns successfully" Aug 12 23:59:27.201028 containerd[1504]: time="2025-08-12T23:59:27.200917951Z" level=info msg="received exit event container_id:\"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\" id:\"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\" pid:3311 exited_at:{seconds:1755043167 nanos:141790324}" Aug 12 23:59:27.206779 containerd[1504]: time="2025-08-12T23:59:27.206720712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\" id:\"991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00\" pid:3311 exited_at:{seconds:1755043167 nanos:141790324}" Aug 12 23:59:27.221451 kubelet[2620]: E0812 23:59:27.221366 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460" Aug 12 23:59:27.290631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-991476dc4bb2b01796323f44b4a17e314e5631971c10ca94b94dcacc28c1eb00-rootfs.mount: Deactivated successfully. 
Aug 12 23:59:27.300763 kubelet[2620]: I0812 23:59:27.300712 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:59:27.302192 kubelet[2620]: E0812 23:59:27.302159 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:27.317491 kubelet[2620]: I0812 23:59:27.317071 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d9bd785b6-8mxhl" podStartSLOduration=2.9181905009999998 podStartE2EDuration="4.317053507s" podCreationTimestamp="2025-08-12 23:59:23 +0000 UTC" firstStartedPulling="2025-08-12 23:59:24.037076295 +0000 UTC m=+20.909653357" lastFinishedPulling="2025-08-12 23:59:25.435939301 +0000 UTC m=+22.308516363" observedRunningTime="2025-08-12 23:59:26.42069931 +0000 UTC m=+23.293276372" watchObservedRunningTime="2025-08-12 23:59:27.317053507 +0000 UTC m=+24.189630569" Aug 12 23:59:28.306676 containerd[1504]: time="2025-08-12T23:59:28.306608235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:59:29.221594 kubelet[2620]: E0812 23:59:29.221393 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460" Aug 12 23:59:31.221044 kubelet[2620]: E0812 23:59:31.220973 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460" Aug 12 23:59:31.898780 containerd[1504]: time="2025-08-12T23:59:31.898728157Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:31.899661 containerd[1504]: time="2025-08-12T23:59:31.899221304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 12 23:59:31.900262 containerd[1504]: time="2025-08-12T23:59:31.900217758Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:31.902811 containerd[1504]: time="2025-08-12T23:59:31.902773501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:31.903559 containerd[1504]: time="2025-08-12T23:59:31.903188717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.596541997s" Aug 12 23:59:31.903710 containerd[1504]: time="2025-08-12T23:59:31.903682543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:59:31.906999 containerd[1504]: time="2025-08-12T23:59:31.906966145Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:59:31.921831 containerd[1504]: time="2025-08-12T23:59:31.921780976Z" level=info msg="Container 7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f: CDI devices from CRI 
Config.CDIDevices: []" Aug 12 23:59:31.935721 containerd[1504]: time="2025-08-12T23:59:31.935627438Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\"" Aug 12 23:59:31.936982 containerd[1504]: time="2025-08-12T23:59:31.936906610Z" level=info msg="StartContainer for \"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\"" Aug 12 23:59:31.938837 containerd[1504]: time="2025-08-12T23:59:31.938807585Z" level=info msg="connecting to shim 7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f" address="unix:///run/containerd/s/a9fcfa5056c25776c8e6aba0a8a9bdcb8adcfec8991d6ec4437b69e80ce990f6" protocol=ttrpc version=3 Aug 12 23:59:31.960867 systemd[1]: Started cri-containerd-7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f.scope - libcontainer container 7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f. Aug 12 23:59:32.006582 containerd[1504]: time="2025-08-12T23:59:32.006528864Z" level=info msg="StartContainer for \"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\" returns successfully" Aug 12 23:59:32.787635 systemd[1]: cri-containerd-7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f.scope: Deactivated successfully. Aug 12 23:59:32.788252 systemd[1]: cri-containerd-7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f.scope: Consumed 492ms CPU time, 177.5M memory peak, 3M read from disk, 165.8M written to disk. 
Aug 12 23:59:32.792110 containerd[1504]: time="2025-08-12T23:59:32.791961912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\" id:\"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\" pid:3370 exited_at:{seconds:1755043172 nanos:791416162}" Aug 12 23:59:32.792110 containerd[1504]: time="2025-08-12T23:59:32.791966233Z" level=info msg="received exit event container_id:\"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\" id:\"7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f\" pid:3370 exited_at:{seconds:1755043172 nanos:791416162}" Aug 12 23:59:32.812579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c838cd16256bc0d8b7321d6f04a83b96c0c0f38190271077428c7b0c64d002f-rootfs.mount: Deactivated successfully. Aug 12 23:59:32.885679 kubelet[2620]: I0812 23:59:32.885620 2620 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 12 23:59:32.939981 systemd[1]: Created slice kubepods-burstable-pod8d519033_ce80_4be9_912c_282f7fa65f2e.slice - libcontainer container kubepods-burstable-pod8d519033_ce80_4be9_912c_282f7fa65f2e.slice. Aug 12 23:59:32.951339 systemd[1]: Created slice kubepods-besteffort-pod147f626b_1fdc_432d_8d5b_31ba3a08beb0.slice - libcontainer container kubepods-besteffort-pod147f626b_1fdc_432d_8d5b_31ba3a08beb0.slice. Aug 12 23:59:32.958776 systemd[1]: Created slice kubepods-burstable-podf51ad2a9_7d82_464f_b860_89072ace43dc.slice - libcontainer container kubepods-burstable-podf51ad2a9_7d82_464f_b860_89072ace43dc.slice. Aug 12 23:59:32.964646 systemd[1]: Created slice kubepods-besteffort-pod5d3aca1f_d4ad_48e2_b0ea_719d09abba45.slice - libcontainer container kubepods-besteffort-pod5d3aca1f_d4ad_48e2_b0ea_719d09abba45.slice. 
Aug 12 23:59:32.972821 systemd[1]: Created slice kubepods-besteffort-pode33239bc_c1d6_445e_9a48_5a5bfcb0c23a.slice - libcontainer container kubepods-besteffort-pode33239bc_c1d6_445e_9a48_5a5bfcb0c23a.slice. Aug 12 23:59:32.978196 systemd[1]: Created slice kubepods-besteffort-pod4f657772_708d_4136_9792_4741b9966e02.slice - libcontainer container kubepods-besteffort-pod4f657772_708d_4136_9792_4741b9966e02.slice. Aug 12 23:59:32.984452 systemd[1]: Created slice kubepods-besteffort-pode3337cb5_1d9b_4c57_aeca_598477706d1c.slice - libcontainer container kubepods-besteffort-pode3337cb5_1d9b_4c57_aeca_598477706d1c.slice. Aug 12 23:59:33.023608 kubelet[2620]: I0812 23:59:33.023148 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mv2m\" (UniqueName: \"kubernetes.io/projected/8d519033-ce80-4be9-912c-282f7fa65f2e-kube-api-access-4mv2m\") pod \"coredns-7c65d6cfc9-kts9j\" (UID: \"8d519033-ce80-4be9-912c-282f7fa65f2e\") " pod="kube-system/coredns-7c65d6cfc9-kts9j" Aug 12 23:59:33.023608 kubelet[2620]: I0812 23:59:33.023222 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d519033-ce80-4be9-912c-282f7fa65f2e-config-volume\") pod \"coredns-7c65d6cfc9-kts9j\" (UID: \"8d519033-ce80-4be9-912c-282f7fa65f2e\") " pod="kube-system/coredns-7c65d6cfc9-kts9j" Aug 12 23:59:33.124452 kubelet[2620]: I0812 23:59:33.124332 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f657772-708d-4136-9792-4741b9966e02-whisker-backend-key-pair\") pod \"whisker-7bbbd9bf4-v5wpf\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") " pod="calico-system/whisker-7bbbd9bf4-v5wpf" Aug 12 23:59:33.124452 kubelet[2620]: I0812 23:59:33.124382 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f51ad2a9-7d82-464f-b860-89072ace43dc-config-volume\") pod \"coredns-7c65d6cfc9-hb2f2\" (UID: \"f51ad2a9-7d82-464f-b860-89072ace43dc\") " pod="kube-system/coredns-7c65d6cfc9-hb2f2" Aug 12 23:59:33.124452 kubelet[2620]: I0812 23:59:33.124405 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz8h\" (UniqueName: \"kubernetes.io/projected/f51ad2a9-7d82-464f-b860-89072ace43dc-kube-api-access-8fz8h\") pod \"coredns-7c65d6cfc9-hb2f2\" (UID: \"f51ad2a9-7d82-464f-b860-89072ace43dc\") " pod="kube-system/coredns-7c65d6cfc9-hb2f2" Aug 12 23:59:33.124452 kubelet[2620]: I0812 23:59:33.124421 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8tw\" (UniqueName: \"kubernetes.io/projected/147f626b-1fdc-432d-8d5b-31ba3a08beb0-kube-api-access-tt8tw\") pod \"goldmane-58fd7646b9-ndrj5\" (UID: \"147f626b-1fdc-432d-8d5b-31ba3a08beb0\") " pod="calico-system/goldmane-58fd7646b9-ndrj5" Aug 12 23:59:33.124452 kubelet[2620]: I0812 23:59:33.124438 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58xg\" (UniqueName: \"kubernetes.io/projected/4f657772-708d-4136-9792-4741b9966e02-kube-api-access-p58xg\") pod \"whisker-7bbbd9bf4-v5wpf\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") " pod="calico-system/whisker-7bbbd9bf4-v5wpf" Aug 12 23:59:33.124675 kubelet[2620]: I0812 23:59:33.124467 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/147f626b-1fdc-432d-8d5b-31ba3a08beb0-config\") pod \"goldmane-58fd7646b9-ndrj5\" (UID: \"147f626b-1fdc-432d-8d5b-31ba3a08beb0\") " pod="calico-system/goldmane-58fd7646b9-ndrj5" Aug 12 23:59:33.124675 kubelet[2620]: I0812 23:59:33.124495 2620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/147f626b-1fdc-432d-8d5b-31ba3a08beb0-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ndrj5\" (UID: \"147f626b-1fdc-432d-8d5b-31ba3a08beb0\") " pod="calico-system/goldmane-58fd7646b9-ndrj5" Aug 12 23:59:33.124675 kubelet[2620]: I0812 23:59:33.124511 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e33239bc-c1d6-445e-9a48-5a5bfcb0c23a-calico-apiserver-certs\") pod \"calico-apiserver-58f7d5ddc9-29pmx\" (UID: \"e33239bc-c1d6-445e-9a48-5a5bfcb0c23a\") " pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx" Aug 12 23:59:33.124675 kubelet[2620]: I0812 23:59:33.124528 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d3aca1f-d4ad-48e2-b0ea-719d09abba45-tigera-ca-bundle\") pod \"calico-kube-controllers-d68f5454c-8df5x\" (UID: \"5d3aca1f-d4ad-48e2-b0ea-719d09abba45\") " pod="calico-system/calico-kube-controllers-d68f5454c-8df5x" Aug 12 23:59:33.124675 kubelet[2620]: I0812 23:59:33.124547 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f657772-708d-4136-9792-4741b9966e02-whisker-ca-bundle\") pod \"whisker-7bbbd9bf4-v5wpf\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") " pod="calico-system/whisker-7bbbd9bf4-v5wpf" Aug 12 23:59:33.124792 kubelet[2620]: I0812 23:59:33.124565 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ltg\" (UniqueName: \"kubernetes.io/projected/5d3aca1f-d4ad-48e2-b0ea-719d09abba45-kube-api-access-h9ltg\") pod \"calico-kube-controllers-d68f5454c-8df5x\" (UID: \"5d3aca1f-d4ad-48e2-b0ea-719d09abba45\") " 
pod="calico-system/calico-kube-controllers-d68f5454c-8df5x"
Aug 12 23:59:33.124792 kubelet[2620]: I0812 23:59:33.124584 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/147f626b-1fdc-432d-8d5b-31ba3a08beb0-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ndrj5\" (UID: \"147f626b-1fdc-432d-8d5b-31ba3a08beb0\") " pod="calico-system/goldmane-58fd7646b9-ndrj5"
Aug 12 23:59:33.124792 kubelet[2620]: I0812 23:59:33.124603 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gds\" (UniqueName: \"kubernetes.io/projected/e33239bc-c1d6-445e-9a48-5a5bfcb0c23a-kube-api-access-85gds\") pod \"calico-apiserver-58f7d5ddc9-29pmx\" (UID: \"e33239bc-c1d6-445e-9a48-5a5bfcb0c23a\") " pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx"
Aug 12 23:59:33.124792 kubelet[2620]: I0812 23:59:33.124622 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3337cb5-1d9b-4c57-aeca-598477706d1c-calico-apiserver-certs\") pod \"calico-apiserver-58f7d5ddc9-qfwms\" (UID: \"e3337cb5-1d9b-4c57-aeca-598477706d1c\") " pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms"
Aug 12 23:59:33.124792 kubelet[2620]: I0812 23:59:33.124639 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84n4\" (UniqueName: \"kubernetes.io/projected/e3337cb5-1d9b-4c57-aeca-598477706d1c-kube-api-access-c84n4\") pod \"calico-apiserver-58f7d5ddc9-qfwms\" (UID: \"e3337cb5-1d9b-4c57-aeca-598477706d1c\") " pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms"
Aug 12 23:59:33.230195 systemd[1]: Created slice kubepods-besteffort-poddee9ceaa_e81b_402b_8ee3_70bd7c437460.slice - libcontainer container kubepods-besteffort-poddee9ceaa_e81b_402b_8ee3_70bd7c437460.slice.
Aug 12 23:59:33.249573 kubelet[2620]: E0812 23:59:33.247985 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:33.251965 containerd[1504]: time="2025-08-12T23:59:33.251132619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kts9j,Uid:8d519033-ce80-4be9-912c-282f7fa65f2e,Namespace:kube-system,Attempt:0,}"
Aug 12 23:59:33.258713 containerd[1504]: time="2025-08-12T23:59:33.257932385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8g89f,Uid:dee9ceaa-e81b-402b-8ee3-70bd7c437460,Namespace:calico-system,Attempt:0,}"
Aug 12 23:59:33.264428 kubelet[2620]: E0812 23:59:33.264384 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:33.265904 containerd[1504]: time="2025-08-12T23:59:33.265851811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hb2f2,Uid:f51ad2a9-7d82-464f-b860-89072ace43dc,Namespace:kube-system,Attempt:0,}"
Aug 12 23:59:33.280083 containerd[1504]: time="2025-08-12T23:59:33.276294870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d68f5454c-8df5x,Uid:5d3aca1f-d4ad-48e2-b0ea-719d09abba45,Namespace:calico-system,Attempt:0,}"
Aug 12 23:59:33.280083 containerd[1504]: time="2025-08-12T23:59:33.276690799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-29pmx,Uid:e33239bc-c1d6-445e-9a48-5a5bfcb0c23a,Namespace:calico-apiserver,Attempt:0,}"
Aug 12 23:59:33.297779 containerd[1504]: time="2025-08-12T23:59:33.297489987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-qfwms,Uid:e3337cb5-1d9b-4c57-aeca-598477706d1c,Namespace:calico-apiserver,Attempt:0,}"
Aug 12 23:59:33.298135 containerd[1504]: time="2025-08-12T23:59:33.298110625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbbd9bf4-v5wpf,Uid:4f657772-708d-4136-9792-4741b9966e02,Namespace:calico-system,Attempt:0,}"
Aug 12 23:59:33.401542 containerd[1504]: time="2025-08-12T23:59:33.401401717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 12 23:59:33.562172 containerd[1504]: time="2025-08-12T23:59:33.562042344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ndrj5,Uid:147f626b-1fdc-432d-8d5b-31ba3a08beb0,Namespace:calico-system,Attempt:0,}"
Aug 12 23:59:33.985274 containerd[1504]: time="2025-08-12T23:59:33.984875676Z" level=error msg="Failed to destroy network for sandbox \"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:33.992222 containerd[1504]: time="2025-08-12T23:59:33.992156542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-29pmx,Uid:e33239bc-c1d6-445e-9a48-5a5bfcb0c23a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:33.996921 kubelet[2620]: E0812 23:59:33.996858 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.001640 kubelet[2620]: E0812 23:59:34.001570 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx"
Aug 12 23:59:34.001640 kubelet[2620]: E0812 23:59:34.001645 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx"
Aug 12 23:59:34.001882 kubelet[2620]: E0812 23:59:34.001715 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f7d5ddc9-29pmx_calico-apiserver(e33239bc-c1d6-445e-9a48-5a5bfcb0c23a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f7d5ddc9-29pmx_calico-apiserver(e33239bc-c1d6-445e-9a48-5a5bfcb0c23a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a648f9d5ef74cd7bfd1dd0fe44a9e4af935acc9d7bcfc7552d1018d72da8e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx" podUID="e33239bc-c1d6-445e-9a48-5a5bfcb0c23a"
Aug 12 23:59:34.008060 containerd[1504]: time="2025-08-12T23:59:34.008011165Z" level=error msg="Failed to destroy network for sandbox \"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.009085 containerd[1504]: time="2025-08-12T23:59:34.009044449Z" level=error msg="Failed to destroy network for sandbox \"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.010363 containerd[1504]: time="2025-08-12T23:59:34.010319802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8g89f,Uid:dee9ceaa-e81b-402b-8ee3-70bd7c437460,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.010598 kubelet[2620]: E0812 23:59:34.010563 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.010666 kubelet[2620]: E0812 23:59:34.010629 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8g89f"
Aug 12 23:59:34.010721 kubelet[2620]: E0812 23:59:34.010672 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8g89f"
Aug 12 23:59:34.010833 kubelet[2620]: E0812 23:59:34.010717 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8g89f_calico-system(dee9ceaa-e81b-402b-8ee3-70bd7c437460)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8g89f_calico-system(dee9ceaa-e81b-402b-8ee3-70bd7c437460)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c71fd59f718094950510bb2f8049fc5a9d11d2e7c9570498979a98d69b2048a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8g89f" podUID="dee9ceaa-e81b-402b-8ee3-70bd7c437460"
Aug 12 23:59:34.010960 containerd[1504]: time="2025-08-12T23:59:34.010921274Z" level=error msg="Failed to destroy network for sandbox \"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.011826 containerd[1504]: time="2025-08-12T23:59:34.011733692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hb2f2,Uid:f51ad2a9-7d82-464f-b860-89072ace43dc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.012088 kubelet[2620]: E0812 23:59:34.012049 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.012163 kubelet[2620]: E0812 23:59:34.012108 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hb2f2"
Aug 12 23:59:34.012163 kubelet[2620]: E0812 23:59:34.012128 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hb2f2"
Aug 12 23:59:34.012225 kubelet[2620]: E0812 23:59:34.012172 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hb2f2_kube-system(f51ad2a9-7d82-464f-b860-89072ace43dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hb2f2_kube-system(f51ad2a9-7d82-464f-b860-89072ace43dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ace008661fc2495333aa9f224c997523f360fc5abaa02072989bcbfebc63aed6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hb2f2" podUID="f51ad2a9-7d82-464f-b860-89072ace43dc"
Aug 12 23:59:34.015901 containerd[1504]: time="2025-08-12T23:59:34.015843464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-qfwms,Uid:e3337cb5-1d9b-4c57-aeca-598477706d1c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.017458 kubelet[2620]: E0812 23:59:34.017239 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.017458 kubelet[2620]: E0812 23:59:34.017316 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms"
Aug 12 23:59:34.017458 kubelet[2620]: E0812 23:59:34.017338 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms"
Aug 12 23:59:34.017609 kubelet[2620]: E0812 23:59:34.017377 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f7d5ddc9-qfwms_calico-apiserver(e3337cb5-1d9b-4c57-aeca-598477706d1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f7d5ddc9-qfwms_calico-apiserver(e3337cb5-1d9b-4c57-aeca-598477706d1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e1cc1bcf78249827d52f3b6cb735ca15fc521f75327e18adfb46b432354ceb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms" podUID="e3337cb5-1d9b-4c57-aeca-598477706d1c"
Aug 12 23:59:34.018411 containerd[1504]: time="2025-08-12T23:59:34.018292438Z" level=error msg="Failed to destroy network for sandbox \"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.020457 containerd[1504]: time="2025-08-12T23:59:34.020394450Z" level=error msg="Failed to destroy network for sandbox \"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.022181 containerd[1504]: time="2025-08-12T23:59:34.022012364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d68f5454c-8df5x,Uid:5d3aca1f-d4ad-48e2-b0ea-719d09abba45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.022374 kubelet[2620]: E0812 23:59:34.022277 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.022438 kubelet[2620]: E0812 23:59:34.022395 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d68f5454c-8df5x"
Aug 12 23:59:34.022438 kubelet[2620]: E0812 23:59:34.022414 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d68f5454c-8df5x"
Aug 12 23:59:34.022524 kubelet[2620]: E0812 23:59:34.022496 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d68f5454c-8df5x_calico-system(5d3aca1f-d4ad-48e2-b0ea-719d09abba45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d68f5454c-8df5x_calico-system(5d3aca1f-d4ad-48e2-b0ea-719d09abba45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1d90b7431451a9d4d0aab6b71d4752c378def020873e056b8a91b30ea65f18d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d68f5454c-8df5x" podUID="5d3aca1f-d4ad-48e2-b0ea-719d09abba45"
Aug 12 23:59:34.023728 containerd[1504]: time="2025-08-12T23:59:34.023541907Z" level=error msg="Failed to destroy network for sandbox \"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.024319 containerd[1504]: time="2025-08-12T23:59:34.024264594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kts9j,Uid:8d519033-ce80-4be9-912c-282f7fa65f2e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.025643 kubelet[2620]: E0812 23:59:34.024480 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.025643 kubelet[2620]: E0812 23:59:34.024531 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kts9j"
Aug 12 23:59:34.025643 kubelet[2620]: E0812 23:59:34.024587 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kts9j"
Aug 12 23:59:34.025855 kubelet[2620]: E0812 23:59:34.024640 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kts9j_kube-system(8d519033-ce80-4be9-912c-282f7fa65f2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kts9j_kube-system(8d519033-ce80-4be9-912c-282f7fa65f2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6074ddb9289fb4bd2388e6ece1d5492bc114cfdf1dfc283f1b949e08257a484b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kts9j" podUID="8d519033-ce80-4be9-912c-282f7fa65f2e"
Aug 12 23:59:34.026070 containerd[1504]: time="2025-08-12T23:59:34.026010883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbbd9bf4-v5wpf,Uid:4f657772-708d-4136-9792-4741b9966e02,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.026325 kubelet[2620]: E0812 23:59:34.026287 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.026374 kubelet[2620]: E0812 23:59:34.026341 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbbd9bf4-v5wpf"
Aug 12 23:59:34.026374 kubelet[2620]: E0812 23:59:34.026359 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bbbd9bf4-v5wpf"
Aug 12 23:59:34.026417 kubelet[2620]: E0812 23:59:34.026400 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bbbd9bf4-v5wpf_calico-system(4f657772-708d-4136-9792-4741b9966e02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bbbd9bf4-v5wpf_calico-system(4f657772-708d-4136-9792-4741b9966e02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04e5fd2118fdb5ede84902c9055a18466a3c1d44cd8238edfdf834652cd794a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bbbd9bf4-v5wpf" podUID="4f657772-708d-4136-9792-4741b9966e02"
Aug 12 23:59:34.032672 containerd[1504]: time="2025-08-12T23:59:34.032582591Z" level=error msg="Failed to destroy network for sandbox \"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.034999 containerd[1504]: time="2025-08-12T23:59:34.034924712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ndrj5,Uid:147f626b-1fdc-432d-8d5b-31ba3a08beb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.035405 kubelet[2620]: E0812 23:59:34.035361 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:59:34.035468 kubelet[2620]: E0812 23:59:34.035428 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ndrj5"
Aug 12 23:59:34.035468 kubelet[2620]: E0812 23:59:34.035453 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ndrj5"
Aug 12 23:59:34.035672 kubelet[2620]: E0812 23:59:34.035515 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ndrj5_calico-system(147f626b-1fdc-432d-8d5b-31ba3a08beb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ndrj5_calico-system(147f626b-1fdc-432d-8d5b-31ba3a08beb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f866160d3ce97dfae304488a3174570d83a18beeb2c1bbac737a4d5efa0999fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ndrj5" podUID="147f626b-1fdc-432d-8d5b-31ba3a08beb0"
Aug 12 23:59:34.140630 systemd[1]: run-netns-cni\x2d9b4cdddc\x2da511\x2ded60\x2dc156\x2dd3911add66de.mount: Deactivated successfully.
Aug 12 23:59:35.321728 kubelet[2620]: I0812 23:59:35.321679 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:59:35.322306 kubelet[2620]: E0812 23:59:35.322052 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:35.420058 kubelet[2620]: E0812 23:59:35.420022 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:36.963756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049968227.mount: Deactivated successfully.
Aug 12 23:59:37.226704 containerd[1504]: time="2025-08-12T23:59:37.226558723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909"
Aug 12 23:59:37.230348 containerd[1504]: time="2025-08-12T23:59:37.230301007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.828838682s"
Aug 12 23:59:37.230348 containerd[1504]: time="2025-08-12T23:59:37.230343451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Aug 12 23:59:37.240917 containerd[1504]: time="2025-08-12T23:59:37.240870187Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 12 23:59:37.247661 containerd[1504]: time="2025-08-12T23:59:37.247597673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:59:37.248421 containerd[1504]: time="2025-08-12T23:59:37.248372237Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:59:37.248954 containerd[1504]: time="2025-08-12T23:59:37.248925456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:59:37.267587 containerd[1504]: time="2025-08-12T23:59:37.266790664Z" level=info msg="Container e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:59:37.287506 containerd[1504]: time="2025-08-12T23:59:37.287437212Z" level=info msg="CreateContainer within sandbox \"076a5165af5798c255707a95b42f7e4b68e15377845a9322990e297ab71003cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\""
Aug 12 23:59:37.288746 containerd[1504]: time="2025-08-12T23:59:37.288329989Z" level=info msg="StartContainer for \"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\""
Aug 12 23:59:37.290453 containerd[1504]: time="2025-08-12T23:59:37.290410493Z" level=info msg="connecting to shim e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc" address="unix:///run/containerd/s/a9fcfa5056c25776c8e6aba0a8a9bdcb8adcfec8991d6ec4437b69e80ce990f6" protocol=ttrpc version=3
Aug 12 23:59:37.315859 systemd[1]: Started cri-containerd-e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc.scope - libcontainer container e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc.
Aug 12 23:59:37.365165 containerd[1504]: time="2025-08-12T23:59:37.365118555Z" level=info msg="StartContainer for \"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\" returns successfully"
Aug 12 23:59:37.625682 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 12 23:59:37.625809 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Aug 12 23:59:37.634151 containerd[1504]: time="2025-08-12T23:59:37.634108542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\" id:\"a79f1c89efce7c4735ce6683b8e24fa4e0cbb5b4cf9ba1a8bb0fb28d9ce320b8\" pid:3723 exit_status:1 exited_at:{seconds:1755043177 nanos:633637131}"
Aug 12 23:59:37.775184 kubelet[2620]: I0812 23:59:37.774329 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9dwkq" podStartSLOduration=1.6499387140000001 podStartE2EDuration="14.774307951s" podCreationTimestamp="2025-08-12 23:59:23 +0000 UTC" firstStartedPulling="2025-08-12 23:59:24.106636886 +0000 UTC m=+20.979213908" lastFinishedPulling="2025-08-12 23:59:37.231006123 +0000 UTC m=+34.103583145" observedRunningTime="2025-08-12 23:59:37.444742227 +0000 UTC m=+34.317319289" watchObservedRunningTime="2025-08-12 23:59:37.774307951 +0000 UTC m=+34.646885013"
Aug 12 23:59:37.877913 kubelet[2620]: I0812 23:59:37.877875 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p58xg\" (UniqueName: \"kubernetes.io/projected/4f657772-708d-4136-9792-4741b9966e02-kube-api-access-p58xg\") pod \"4f657772-708d-4136-9792-4741b9966e02\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") "
Aug 12 23:59:37.877913 kubelet[2620]: I0812 23:59:37.877927 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f657772-708d-4136-9792-4741b9966e02-whisker-ca-bundle\") pod \"4f657772-708d-4136-9792-4741b9966e02\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") "
Aug 12 23:59:37.878086 kubelet[2620]: I0812 23:59:37.877956 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f657772-708d-4136-9792-4741b9966e02-whisker-backend-key-pair\") pod \"4f657772-708d-4136-9792-4741b9966e02\" (UID: \"4f657772-708d-4136-9792-4741b9966e02\") "
Aug 12 23:59:37.879593 kubelet[2620]: I0812 23:59:37.879522 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f657772-708d-4136-9792-4741b9966e02-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4f657772-708d-4136-9792-4741b9966e02" (UID: "4f657772-708d-4136-9792-4741b9966e02"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:59:37.882190 kubelet[2620]: I0812 23:59:37.882147 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f657772-708d-4136-9792-4741b9966e02-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4f657772-708d-4136-9792-4741b9966e02" (UID: "4f657772-708d-4136-9792-4741b9966e02"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 12 23:59:37.882571 kubelet[2620]: I0812 23:59:37.882524 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f657772-708d-4136-9792-4741b9966e02-kube-api-access-p58xg" (OuterVolumeSpecName: "kube-api-access-p58xg") pod "4f657772-708d-4136-9792-4741b9966e02" (UID: "4f657772-708d-4136-9792-4741b9966e02"). InnerVolumeSpecName "kube-api-access-p58xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:59:37.965347 systemd[1]: var-lib-kubelet-pods-4f657772\x2d708d\x2d4136\x2d9792\x2d4741b9966e02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp58xg.mount: Deactivated successfully.
Aug 12 23:59:37.965443 systemd[1]: var-lib-kubelet-pods-4f657772\x2d708d\x2d4136\x2d9792\x2d4741b9966e02-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Aug 12 23:59:37.979932 kubelet[2620]: I0812 23:59:37.979862 2620 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f657772-708d-4136-9792-4741b9966e02-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Aug 12 23:59:37.979932 kubelet[2620]: I0812 23:59:37.979899 2620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p58xg\" (UniqueName: \"kubernetes.io/projected/4f657772-708d-4136-9792-4741b9966e02-kube-api-access-p58xg\") on node \"localhost\" DevicePath \"\""
Aug 12 23:59:37.979932 kubelet[2620]: I0812 23:59:37.979910 2620 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f657772-708d-4136-9792-4741b9966e02-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Aug 12 23:59:38.437111 systemd[1]: Removed slice kubepods-besteffort-pod4f657772_708d_4136_9792_4741b9966e02.slice - libcontainer container kubepods-besteffort-pod4f657772_708d_4136_9792_4741b9966e02.slice.
Aug 12 23:59:38.510563 systemd[1]: Created slice kubepods-besteffort-podab24df00_dc1d_40f7_a281_75f459c8fa89.slice - libcontainer container kubepods-besteffort-podab24df00_dc1d_40f7_a281_75f459c8fa89.slice.
Aug 12 23:59:38.558566 containerd[1504]: time="2025-08-12T23:59:38.558445852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\" id:\"0dcf430267f9588446ce77617d78d400799d53191bd49fed31c5ba4c4e8a0e8e\" pid:3780 exit_status:1 exited_at:{seconds:1755043178 nanos:558046331}" Aug 12 23:59:38.583630 kubelet[2620]: I0812 23:59:38.583576 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab24df00-dc1d-40f7-a281-75f459c8fa89-whisker-ca-bundle\") pod \"whisker-5d54cb4458-hsddf\" (UID: \"ab24df00-dc1d-40f7-a281-75f459c8fa89\") " pod="calico-system/whisker-5d54cb4458-hsddf" Aug 12 23:59:38.583630 kubelet[2620]: I0812 23:59:38.583627 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmw9\" (UniqueName: \"kubernetes.io/projected/ab24df00-dc1d-40f7-a281-75f459c8fa89-kube-api-access-5kmw9\") pod \"whisker-5d54cb4458-hsddf\" (UID: \"ab24df00-dc1d-40f7-a281-75f459c8fa89\") " pod="calico-system/whisker-5d54cb4458-hsddf" Aug 12 23:59:38.583826 kubelet[2620]: I0812 23:59:38.583646 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab24df00-dc1d-40f7-a281-75f459c8fa89-whisker-backend-key-pair\") pod \"whisker-5d54cb4458-hsddf\" (UID: \"ab24df00-dc1d-40f7-a281-75f459c8fa89\") " pod="calico-system/whisker-5d54cb4458-hsddf" Aug 12 23:59:38.814815 containerd[1504]: time="2025-08-12T23:59:38.814378653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d54cb4458-hsddf,Uid:ab24df00-dc1d-40f7-a281-75f459c8fa89,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:39.187499 systemd-networkd[1427]: cali12476ad856c: Link UP Aug 12 23:59:39.188796 systemd-networkd[1427]: cali12476ad856c: Gained carrier Aug 12 
23:59:39.210582 containerd[1504]: 2025-08-12 23:59:38.864 [INFO][3797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:59:39.210582 containerd[1504]: 2025-08-12 23:59:38.918 [INFO][3797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d54cb4458--hsddf-eth0 whisker-5d54cb4458- calico-system ab24df00-dc1d-40f7-a281-75f459c8fa89 885 0 2025-08-12 23:59:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d54cb4458 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d54cb4458-hsddf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12476ad856c [] [] }} ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-" Aug 12 23:59:39.210582 containerd[1504]: 2025-08-12 23:59:38.920 [INFO][3797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.210582 containerd[1504]: 2025-08-12 23:59:39.083 [INFO][3811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" HandleID="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Workload="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.085 [INFO][3811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" 
HandleID="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Workload="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d54cb4458-hsddf", "timestamp":"2025-08-12 23:59:39.083935243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.085 [INFO][3811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.086 [INFO][3811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.087 [INFO][3811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.111 [INFO][3811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" host="localhost" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.137 [INFO][3811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.143 [INFO][3811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.146 [INFO][3811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.150 [INFO][3811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:39.211153 containerd[1504]: 2025-08-12 23:59:39.153 
[INFO][3811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" host="localhost" Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.155 [INFO][3811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074 Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.160 [INFO][3811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" host="localhost" Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.168 [INFO][3811] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" host="localhost" Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.168 [INFO][3811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" host="localhost" Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.168 [INFO][3811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:59:39.211521 containerd[1504]: 2025-08-12 23:59:39.169 [INFO][3811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" HandleID="k8s-pod-network.392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Workload="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.211724 containerd[1504]: 2025-08-12 23:59:39.172 [INFO][3797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d54cb4458--hsddf-eth0", GenerateName:"whisker-5d54cb4458-", Namespace:"calico-system", SelfLink:"", UID:"ab24df00-dc1d-40f7-a281-75f459c8fa89", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d54cb4458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d54cb4458-hsddf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12476ad856c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:39.211724 containerd[1504]: 2025-08-12 23:59:39.173 [INFO][3797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.211818 containerd[1504]: 2025-08-12 23:59:39.175 [INFO][3797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12476ad856c ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.211818 containerd[1504]: 2025-08-12 23:59:39.191 [INFO][3797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.211863 containerd[1504]: 2025-08-12 23:59:39.192 [INFO][3797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d54cb4458--hsddf-eth0", GenerateName:"whisker-5d54cb4458-", Namespace:"calico-system", SelfLink:"", UID:"ab24df00-dc1d-40f7-a281-75f459c8fa89", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 38, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d54cb4458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074", Pod:"whisker-5d54cb4458-hsddf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12476ad856c", MAC:"7a:ed:b3:5e:73:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:39.211928 containerd[1504]: 2025-08-12 23:59:39.208 [INFO][3797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" Namespace="calico-system" Pod="whisker-5d54cb4458-hsddf" WorkloadEndpoint="localhost-k8s-whisker--5d54cb4458--hsddf-eth0" Aug 12 23:59:39.230678 kubelet[2620]: I0812 23:59:39.228596 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f657772-708d-4136-9792-4741b9966e02" path="/var/lib/kubelet/pods/4f657772-708d-4136-9792-4741b9966e02/volumes" Aug 12 23:59:39.312119 containerd[1504]: time="2025-08-12T23:59:39.312047790Z" level=info msg="connecting to shim 392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074" address="unix:///run/containerd/s/66ca79f9317fa4257bc72bea2148c7c839e818dfbecb0d5f3690a1a880ba62e7" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:39.339859 systemd[1]: Started 
cri-containerd-392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074.scope - libcontainer container 392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074. Aug 12 23:59:39.352513 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:39.382506 containerd[1504]: time="2025-08-12T23:59:39.382467830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d54cb4458-hsddf,Uid:ab24df00-dc1d-40f7-a281-75f459c8fa89,Namespace:calico-system,Attempt:0,} returns sandbox id \"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074\"" Aug 12 23:59:39.389344 containerd[1504]: time="2025-08-12T23:59:39.389304762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:59:39.587228 systemd-networkd[1427]: vxlan.calico: Link UP Aug 12 23:59:39.587234 systemd-networkd[1427]: vxlan.calico: Gained carrier Aug 12 23:59:40.355752 containerd[1504]: time="2025-08-12T23:59:40.355702192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:40.356539 containerd[1504]: time="2025-08-12T23:59:40.356493750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:59:40.357584 containerd[1504]: time="2025-08-12T23:59:40.357540653Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:40.360333 containerd[1504]: time="2025-08-12T23:59:40.360281401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:40.360849 containerd[1504]: time="2025-08-12T23:59:40.360803932Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 971.455006ms" Aug 12 23:59:40.360884 containerd[1504]: time="2025-08-12T23:59:40.360848177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:59:40.366946 containerd[1504]: time="2025-08-12T23:59:40.366792960Z" level=info msg="CreateContainer within sandbox \"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:59:40.379174 containerd[1504]: time="2025-08-12T23:59:40.379123729Z" level=info msg="Container 3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:40.382742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7194621.mount: Deactivated successfully. 
Aug 12 23:59:40.390424 containerd[1504]: time="2025-08-12T23:59:40.390366991Z" level=info msg="CreateContainer within sandbox \"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a\"" Aug 12 23:59:40.391069 containerd[1504]: time="2025-08-12T23:59:40.390934846Z" level=info msg="StartContainer for \"3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a\"" Aug 12 23:59:40.392473 containerd[1504]: time="2025-08-12T23:59:40.392443674Z" level=info msg="connecting to shim 3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a" address="unix:///run/containerd/s/66ca79f9317fa4257bc72bea2148c7c839e818dfbecb0d5f3690a1a880ba62e7" protocol=ttrpc version=3 Aug 12 23:59:40.416877 systemd[1]: Started cri-containerd-3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a.scope - libcontainer container 3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a. Aug 12 23:59:40.466003 containerd[1504]: time="2025-08-12T23:59:40.465879114Z" level=info msg="StartContainer for \"3b856f6dd02b2ce0cbf3f6ffbe76cd292ba1b76f9f8f4f1a366aed695f6d788a\" returns successfully" Aug 12 23:59:40.471088 containerd[1504]: time="2025-08-12T23:59:40.471007937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:59:40.747883 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Aug 12 23:59:41.003864 systemd-networkd[1427]: cali12476ad856c: Gained IPv6LL Aug 12 23:59:42.176312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388336447.mount: Deactivated successfully. 
Aug 12 23:59:42.204507 containerd[1504]: time="2025-08-12T23:59:42.204308370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:42.205410 containerd[1504]: time="2025-08-12T23:59:42.205365748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 12 23:59:42.206404 containerd[1504]: time="2025-08-12T23:59:42.206371041Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:42.208121 containerd[1504]: time="2025-08-12T23:59:42.208089440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:42.208979 containerd[1504]: time="2025-08-12T23:59:42.208886034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.737818611s" Aug 12 23:59:42.208979 containerd[1504]: time="2025-08-12T23:59:42.208920317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:59:42.221325 containerd[1504]: time="2025-08-12T23:59:42.220764732Z" level=info msg="CreateContainer within sandbox \"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:59:42.234355 
containerd[1504]: time="2025-08-12T23:59:42.233930789Z" level=info msg="Container 76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:42.237423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822410242.mount: Deactivated successfully. Aug 12 23:59:42.245863 containerd[1504]: time="2025-08-12T23:59:42.245820248Z" level=info msg="CreateContainer within sandbox \"392325a375882931998a7f38491cfaabf256f01a5d368232184be773f9efd074\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd\"" Aug 12 23:59:42.246433 containerd[1504]: time="2025-08-12T23:59:42.246402302Z" level=info msg="StartContainer for \"76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd\"" Aug 12 23:59:42.247767 containerd[1504]: time="2025-08-12T23:59:42.247720504Z" level=info msg="connecting to shim 76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd" address="unix:///run/containerd/s/66ca79f9317fa4257bc72bea2148c7c839e818dfbecb0d5f3690a1a880ba62e7" protocol=ttrpc version=3 Aug 12 23:59:42.270865 systemd[1]: Started cri-containerd-76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd.scope - libcontainer container 76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd. 
Aug 12 23:59:42.310428 containerd[1504]: time="2025-08-12T23:59:42.310390737Z" level=info msg="StartContainer for \"76d783f96a319d59eded784b6bafbf907f3b9ee452d6a2c7ed8604c4bc8fdacd\" returns successfully" Aug 12 23:59:42.463017 kubelet[2620]: I0812 23:59:42.462096 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d54cb4458-hsddf" podStartSLOduration=1.632574087 podStartE2EDuration="4.46207548s" podCreationTimestamp="2025-08-12 23:59:38 +0000 UTC" firstStartedPulling="2025-08-12 23:59:39.388772428 +0000 UTC m=+36.261349490" lastFinishedPulling="2025-08-12 23:59:42.218273821 +0000 UTC m=+39.090850883" observedRunningTime="2025-08-12 23:59:42.460306236 +0000 UTC m=+39.332883298" watchObservedRunningTime="2025-08-12 23:59:42.46207548 +0000 UTC m=+39.334652542" Aug 12 23:59:45.222394 containerd[1504]: time="2025-08-12T23:59:45.222011395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-29pmx,Uid:e33239bc-c1d6-445e-9a48-5a5bfcb0c23a,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:59:45.222394 containerd[1504]: time="2025-08-12T23:59:45.222172648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8g89f,Uid:dee9ceaa-e81b-402b-8ee3-70bd7c437460,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:45.430186 systemd-networkd[1427]: calic9f0d817546: Link UP Aug 12 23:59:45.433008 systemd-networkd[1427]: calic9f0d817546: Gained carrier Aug 12 23:59:45.457554 containerd[1504]: 2025-08-12 23:59:45.312 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0 calico-apiserver-58f7d5ddc9- calico-apiserver e33239bc-c1d6-445e-9a48-5a5bfcb0c23a 813 0 2025-08-12 23:59:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f7d5ddc9 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58f7d5ddc9-29pmx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9f0d817546 [] [] }} ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-" Aug 12 23:59:45.457554 containerd[1504]: 2025-08-12 23:59:45.312 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.457554 containerd[1504]: 2025-08-12 23:59:45.356 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" HandleID="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.356 [INFO][4199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" HandleID="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033c050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58f7d5ddc9-29pmx", "timestamp":"2025-08-12 23:59:45.356600995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.371 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" host="localhost" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.382 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.396 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.405 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.407 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:45.457850 containerd[1504]: 2025-08-12 23:59:45.408 [INFO][4199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" host="localhost" Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.410 [INFO][4199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183 Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.414 [INFO][4199] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" host="localhost" Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.420 [INFO][4199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" host="localhost" Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.421 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" host="localhost" Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.421 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:45.458089 containerd[1504]: 2025-08-12 23:59:45.421 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" HandleID="k8s-pod-network.e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.458211 containerd[1504]: 2025-08-12 23:59:45.423 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0", GenerateName:"calico-apiserver-58f7d5ddc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e33239bc-c1d6-445e-9a48-5a5bfcb0c23a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 20, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7d5ddc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58f7d5ddc9-29pmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9f0d817546", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:45.458271 containerd[1504]: 2025-08-12 23:59:45.423 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.458271 containerd[1504]: 2025-08-12 23:59:45.423 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9f0d817546 ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.458271 containerd[1504]: 2025-08-12 23:59:45.435 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.458337 containerd[1504]: 2025-08-12 23:59:45.436 [INFO][4170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0", GenerateName:"calico-apiserver-58f7d5ddc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e33239bc-c1d6-445e-9a48-5a5bfcb0c23a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7d5ddc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183", Pod:"calico-apiserver-58f7d5ddc9-29pmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9f0d817546", MAC:"06:83:ed:c4:b7:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:45.458386 containerd[1504]: 2025-08-12 23:59:45.454 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-29pmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--29pmx-eth0" Aug 12 23:59:45.526649 systemd-networkd[1427]: cali8c07d057969: Link UP Aug 12 23:59:45.527139 systemd-networkd[1427]: cali8c07d057969: Gained carrier Aug 12 23:59:45.552793 containerd[1504]: 2025-08-12 23:59:45.310 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8g89f-eth0 csi-node-driver- calico-system dee9ceaa-e81b-402b-8ee3-70bd7c437460 684 0 2025-08-12 23:59:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8g89f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c07d057969 [] [] }} ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-" Aug 12 23:59:45.552793 containerd[1504]: 2025-08-12 23:59:45.310 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.552793 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" HandleID="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Workload="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" HandleID="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Workload="localhost-k8s-csi--node--driver--8g89f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8g89f", "timestamp":"2025-08-12 23:59:45.35760244 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.357 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.421 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.421 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.475 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" host="localhost" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.482 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.495 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.498 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.501 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:45.552999 containerd[1504]: 2025-08-12 23:59:45.501 [INFO][4200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" host="localhost" Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.504 [INFO][4200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7 Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.511 [INFO][4200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" host="localhost" Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.520 [INFO][4200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" host="localhost" Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.521 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" host="localhost" Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.521 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:45.553208 containerd[1504]: 2025-08-12 23:59:45.521 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" HandleID="k8s-pod-network.88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Workload="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.553316 containerd[1504]: 2025-08-12 23:59:45.523 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8g89f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dee9ceaa-e81b-402b-8ee3-70bd7c437460", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8g89f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c07d057969", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:45.553361 containerd[1504]: 2025-08-12 23:59:45.523 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.553361 containerd[1504]: 2025-08-12 23:59:45.523 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c07d057969 ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.553361 containerd[1504]: 2025-08-12 23:59:45.529 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.553418 containerd[1504]: 2025-08-12 23:59:45.531 [INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" 
Namespace="calico-system" Pod="csi-node-driver-8g89f" WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8g89f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dee9ceaa-e81b-402b-8ee3-70bd7c437460", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7", Pod:"csi-node-driver-8g89f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c07d057969", MAC:"56:26:98:87:ab:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:45.553474 containerd[1504]: 2025-08-12 23:59:45.548 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" Namespace="calico-system" Pod="csi-node-driver-8g89f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8g89f-eth0" Aug 12 23:59:45.556211 containerd[1504]: time="2025-08-12T23:59:45.556160577Z" level=info msg="connecting to shim e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183" address="unix:///run/containerd/s/7182a74af33112c9003d8b57981ae9107b2c9f5fcab27ecf3c77b4fc99ca7242" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:45.586664 containerd[1504]: time="2025-08-12T23:59:45.586610534Z" level=info msg="connecting to shim 88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7" address="unix:///run/containerd/s/6d27311c8fc5b497c53242f381d8a1f7bcc806e99edcfed4c511a2d7910a4cb2" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:45.596900 systemd[1]: Started cri-containerd-e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183.scope - libcontainer container e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183. Aug 12 23:59:45.610782 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:45.630902 systemd[1]: Started cri-containerd-88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7.scope - libcontainer container 88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7. 
Aug 12 23:59:45.643155 containerd[1504]: time="2025-08-12T23:59:45.643107073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-29pmx,Uid:e33239bc-c1d6-445e-9a48-5a5bfcb0c23a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183\"" Aug 12 23:59:45.645714 containerd[1504]: time="2025-08-12T23:59:45.645641609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:59:45.646743 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:45.661107 containerd[1504]: time="2025-08-12T23:59:45.661067845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8g89f,Uid:dee9ceaa-e81b-402b-8ee3-70bd7c437460,Namespace:calico-system,Attempt:0,} returns sandbox id \"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7\"" Aug 12 23:59:46.163854 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:42722.service - OpenSSH per-connection server daemon (10.0.0.1:42722). 
Aug 12 23:59:46.221843 kubelet[2620]: E0812 23:59:46.221814 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:46.223719 containerd[1504]: time="2025-08-12T23:59:46.222426269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d68f5454c-8df5x,Uid:5d3aca1f-d4ad-48e2-b0ea-719d09abba45,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:46.223719 containerd[1504]: time="2025-08-12T23:59:46.223687494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kts9j,Uid:8d519033-ce80-4be9-912c-282f7fa65f2e,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:46.254902 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 42722 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:59:46.259355 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:46.267109 systemd-logind[1481]: New session 8 of user core. Aug 12 23:59:46.281925 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 12 23:59:46.436284 systemd-networkd[1427]: calid4e480fae13: Link UP Aug 12 23:59:46.437834 systemd-networkd[1427]: calid4e480fae13: Gained carrier Aug 12 23:59:46.474155 containerd[1504]: 2025-08-12 23:59:46.311 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0 calico-kube-controllers-d68f5454c- calico-system 5d3aca1f-d4ad-48e2-b0ea-719d09abba45 812 0 2025-08-12 23:59:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d68f5454c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d68f5454c-8df5x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid4e480fae13 [] [] }} ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-" Aug 12 23:59:46.474155 containerd[1504]: 2025-08-12 23:59:46.311 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.474155 containerd[1504]: 2025-08-12 23:59:46.352 [INFO][4369] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" HandleID="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Workload="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 
23:59:46.353 [INFO][4369] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" HandleID="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Workload="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000594570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d68f5454c-8df5x", "timestamp":"2025-08-12 23:59:46.352852081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.353 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.353 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.353 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.375 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" host="localhost" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.385 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.392 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.395 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.401 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:46.474478 containerd[1504]: 2025-08-12 23:59:46.401 [INFO][4369] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" host="localhost" Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.403 [INFO][4369] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404 Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.409 [INFO][4369] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" host="localhost" Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4369] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" host="localhost" Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" host="localhost" Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:46.474752 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4369] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" HandleID="k8s-pod-network.84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Workload="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.475141 containerd[1504]: 2025-08-12 23:59:46.433 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0", GenerateName:"calico-kube-controllers-d68f5454c-", Namespace:"calico-system", SelfLink:"", UID:"5d3aca1f-d4ad-48e2-b0ea-719d09abba45", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d68f5454c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d68f5454c-8df5x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4e480fae13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:46.475248 containerd[1504]: 2025-08-12 23:59:46.433 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.475248 containerd[1504]: 2025-08-12 23:59:46.434 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4e480fae13 ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.475248 containerd[1504]: 2025-08-12 23:59:46.438 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.475316 containerd[1504]: 2025-08-12 
23:59:46.439 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0", GenerateName:"calico-kube-controllers-d68f5454c-", Namespace:"calico-system", SelfLink:"", UID:"5d3aca1f-d4ad-48e2-b0ea-719d09abba45", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d68f5454c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404", Pod:"calico-kube-controllers-d68f5454c-8df5x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4e480fae13", MAC:"e6:e6:94:ac:88:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:46.475416 containerd[1504]: 2025-08-12 
23:59:46.470 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" Namespace="calico-system" Pod="calico-kube-controllers-d68f5454c-8df5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d68f5454c--8df5x-eth0" Aug 12 23:59:46.576230 containerd[1504]: time="2025-08-12T23:59:46.576163982Z" level=info msg="connecting to shim 84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404" address="unix:///run/containerd/s/649f9aaf054c759af2feaf70c415e8d08ab6e0c0a56ed870db35d85038a58b49" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:46.584686 systemd-networkd[1427]: calia2a52791c86: Link UP Aug 12 23:59:46.586952 systemd-networkd[1427]: calia2a52791c86: Gained carrier Aug 12 23:59:46.607887 containerd[1504]: 2025-08-12 23:59:46.302 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0 coredns-7c65d6cfc9- kube-system 8d519033-ce80-4be9-912c-282f7fa65f2e 810 0 2025-08-12 23:59:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kts9j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2a52791c86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-" Aug 12 23:59:46.607887 containerd[1504]: 2025-08-12 23:59:46.303 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.607887 containerd[1504]: 2025-08-12 23:59:46.360 [INFO][4363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" HandleID="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Workload="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.360 [INFO][4363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" HandleID="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Workload="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kts9j", "timestamp":"2025-08-12 23:59:46.360465114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.360 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.431 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.503 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" host="localhost" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.516 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.528 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.536 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.544 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:46.608512 containerd[1504]: 2025-08-12 23:59:46.544 [INFO][4363] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" host="localhost" Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.550 [INFO][4363] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.556 [INFO][4363] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" host="localhost" Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.570 [INFO][4363] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" host="localhost" Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.570 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" host="localhost" Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.570 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:46.608877 containerd[1504]: 2025-08-12 23:59:46.570 [INFO][4363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" HandleID="k8s-pod-network.30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Workload="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.609007 containerd[1504]: 2025-08-12 23:59:46.575 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8d519033-ce80-4be9-912c-282f7fa65f2e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kts9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2a52791c86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:46.609080 containerd[1504]: 2025-08-12 23:59:46.575 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.609080 containerd[1504]: 2025-08-12 23:59:46.575 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2a52791c86 ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.609080 containerd[1504]: 2025-08-12 23:59:46.585 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.609159 containerd[1504]: 2025-08-12 23:59:46.586 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8d519033-ce80-4be9-912c-282f7fa65f2e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc", Pod:"coredns-7c65d6cfc9-kts9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2a52791c86", MAC:"76:1c:02:a5:14:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:46.609159 containerd[1504]: 2025-08-12 23:59:46.603 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kts9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kts9j-eth0" Aug 12 23:59:46.628940 systemd[1]: Started cri-containerd-84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404.scope - libcontainer container 84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404. Aug 12 23:59:46.635770 systemd-networkd[1427]: calic9f0d817546: Gained IPv6LL Aug 12 23:59:46.649130 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:46.657386 sshd[4360]: Connection closed by 10.0.0.1 port 42722 Aug 12 23:59:46.658155 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:46.667327 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:42722.service: Deactivated successfully. Aug 12 23:59:46.670195 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:59:46.672292 containerd[1504]: time="2025-08-12T23:59:46.672201653Z" level=info msg="connecting to shim 30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc" address="unix:///run/containerd/s/72cb6425a5d1935b1709c816ebfd4111bec9526bcb6c43813afb090a4be86a46" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:46.672719 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:59:46.677064 systemd-logind[1481]: Removed session 8. 
Aug 12 23:59:46.707345 containerd[1504]: time="2025-08-12T23:59:46.707225807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d68f5454c-8df5x,Uid:5d3aca1f-d4ad-48e2-b0ea-719d09abba45,Namespace:calico-system,Attempt:0,} returns sandbox id \"84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404\"" Aug 12 23:59:46.718292 systemd[1]: Started cri-containerd-30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc.scope - libcontainer container 30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc. Aug 12 23:59:46.763581 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:46.830982 containerd[1504]: time="2025-08-12T23:59:46.830943221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kts9j,Uid:8d519033-ce80-4be9-912c-282f7fa65f2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc\"" Aug 12 23:59:46.832069 kubelet[2620]: E0812 23:59:46.832031 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:46.835707 containerd[1504]: time="2025-08-12T23:59:46.834355865Z" level=info msg="CreateContainer within sandbox \"30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:59:46.905210 containerd[1504]: time="2025-08-12T23:59:46.905157956Z" level=info msg="Container 89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:46.927933 containerd[1504]: time="2025-08-12T23:59:46.927889247Z" level=info msg="CreateContainer within sandbox \"30d7294d0eb69212d3db5fa3c321a4f69c14ddbf58ea7cb271f6bb6ce9e189dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252\"" Aug 12 23:59:46.928750 containerd[1504]: time="2025-08-12T23:59:46.928721716Z" level=info msg="StartContainer for \"89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252\"" Aug 12 23:59:46.931849 containerd[1504]: time="2025-08-12T23:59:46.931808653Z" level=info msg="connecting to shim 89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252" address="unix:///run/containerd/s/72cb6425a5d1935b1709c816ebfd4111bec9526bcb6c43813afb090a4be86a46" protocol=ttrpc version=3 Aug 12 23:59:46.962951 systemd[1]: Started cri-containerd-89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252.scope - libcontainer container 89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252. Aug 12 23:59:47.016623 containerd[1504]: time="2025-08-12T23:59:47.016580516Z" level=info msg="StartContainer for \"89ce915a4feddf341ff2f448b7b6e541d1f14be2ab5a812ee09512e9d88fe252\" returns successfully" Aug 12 23:59:47.221712 kubelet[2620]: E0812 23:59:47.221463 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:47.222495 containerd[1504]: time="2025-08-12T23:59:47.222377196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ndrj5,Uid:147f626b-1fdc-432d-8d5b-31ba3a08beb0,Namespace:calico-system,Attempt:0,}" Aug 12 23:59:47.222734 containerd[1504]: time="2025-08-12T23:59:47.222709783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hb2f2,Uid:f51ad2a9-7d82-464f-b860-89072ace43dc,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:47.222994 containerd[1504]: time="2025-08-12T23:59:47.222798870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-qfwms,Uid:e3337cb5-1d9b-4c57-aeca-598477706d1c,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:59:47.468733 
systemd-networkd[1427]: cali8c07d057969: Gained IPv6LL Aug 12 23:59:47.513868 kubelet[2620]: E0812 23:59:47.513528 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:47.623994 systemd-networkd[1427]: calic9c3e9ab75c: Link UP Aug 12 23:59:47.628303 systemd-networkd[1427]: calic9c3e9ab75c: Gained carrier Aug 12 23:59:47.651903 kubelet[2620]: I0812 23:59:47.651285 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kts9j" podStartSLOduration=37.65126496 podStartE2EDuration="37.65126496s" podCreationTimestamp="2025-08-12 23:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:47.554955456 +0000 UTC m=+44.427532518" watchObservedRunningTime="2025-08-12 23:59:47.65126496 +0000 UTC m=+44.523842023" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.349 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0 coredns-7c65d6cfc9- kube-system f51ad2a9-7d82-464f-b860-89072ace43dc 815 0 2025-08-12 23:59:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hb2f2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9c3e9ab75c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.349 [INFO][4551] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.480 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" HandleID="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Workload="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.480 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" HandleID="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Workload="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f0a30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hb2f2", "timestamp":"2025-08-12 23:59:47.480265588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.480 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.480 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.480 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.505 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.516 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.535 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.555 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.563 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.563 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.569 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0 Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.589 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.604 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.604 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" host="localhost" Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.605 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:47.662387 containerd[1504]: 2025-08-12 23:59:47.606 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" HandleID="k8s-pod-network.e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Workload="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.615 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f51ad2a9-7d82-464f-b860-89072ace43dc", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hb2f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9c3e9ab75c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.615 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.615 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9c3e9ab75c ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.631 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.635 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f51ad2a9-7d82-464f-b860-89072ace43dc", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0", Pod:"coredns-7c65d6cfc9-hb2f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9c3e9ab75c", MAC:"7a:8e:cc:0b:41:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.669305 containerd[1504]: 2025-08-12 23:59:47.652 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hb2f2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hb2f2-eth0" Aug 12 23:59:47.733244 systemd-networkd[1427]: calid1d72157385: Link UP Aug 12 23:59:47.733522 systemd-networkd[1427]: calid1d72157385: Gained carrier Aug 12 23:59:47.782693 containerd[1504]: time="2025-08-12T23:59:47.782492142Z" level=info msg="connecting to shim e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0" address="unix:///run/containerd/s/3a5c9a178d1332890bd7035a11194dbab93d2c5fc39c6f03ac533f116d5eaae9" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.356 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0 goldmane-58fd7646b9- calico-system 147f626b-1fdc-432d-8d5b-31ba3a08beb0 811 0 2025-08-12 23:59:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-ndrj5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid1d72157385 [] [] }} ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-" Aug 12 23:59:47.783946 
containerd[1504]: 2025-08-12 23:59:47.356 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.494 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" HandleID="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Workload="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.494 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" HandleID="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Workload="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ca000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-ndrj5", "timestamp":"2025-08-12 23:59:47.49432845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.494 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.604 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.604 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.628 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.646 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.661 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.667 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.672 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.672 [INFO][4594] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.675 [INFO][4594] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834 Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.681 [INFO][4594] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.695 [INFO][4594] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.695 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" host="localhost" Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.695 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:47.783946 containerd[1504]: 2025-08-12 23:59:47.695 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" HandleID="k8s-pod-network.f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Workload="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.701 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"147f626b-1fdc-432d-8d5b-31ba3a08beb0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-ndrj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1d72157385", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.703 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.703 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1d72157385 ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.736 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.743 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"147f626b-1fdc-432d-8d5b-31ba3a08beb0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834", Pod:"goldmane-58fd7646b9-ndrj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1d72157385", MAC:"ba:79:78:7a:52:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.787105 containerd[1504]: 2025-08-12 23:59:47.776 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" Namespace="calico-system" Pod="goldmane-58fd7646b9-ndrj5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ndrj5-eth0" Aug 12 23:59:47.862612 containerd[1504]: time="2025-08-12T23:59:47.862560447Z" level=info msg="connecting to shim 
f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834" address="unix:///run/containerd/s/1364158501c66369b85c4dfa39e46dcb99b3e2f8caad0dded8028fb1cb30bfbf" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:47.867891 systemd[1]: Started cri-containerd-e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0.scope - libcontainer container e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0. Aug 12 23:59:47.870392 systemd-networkd[1427]: cali0adaf79e4d2: Link UP Aug 12 23:59:47.875276 systemd-networkd[1427]: cali0adaf79e4d2: Gained carrier Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.359 [INFO][4566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0 calico-apiserver-58f7d5ddc9- calico-apiserver e3337cb5-1d9b-4c57-aeca-598477706d1c 814 0 2025-08-12 23:59:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f7d5ddc9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58f7d5ddc9-qfwms eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0adaf79e4d2 [] [] }} ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.359 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 
23:59:47.501 [INFO][4591] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" HandleID="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.502 [INFO][4591] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" HandleID="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58f7d5ddc9-qfwms", "timestamp":"2025-08-12 23:59:47.501739412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.502 [INFO][4591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.695 [INFO][4591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.696 [INFO][4591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.730 [INFO][4591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.746 [INFO][4591] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.776 [INFO][4591] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.778 [INFO][4591] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.783 [INFO][4591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.784 [INFO][4591] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.786 [INFO][4591] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9 Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.794 [INFO][4591] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.843 [INFO][4591] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.843 [INFO][4591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" host="localhost" Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.843 [INFO][4591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:59:47.908226 containerd[1504]: 2025-08-12 23:59:47.843 [INFO][4591] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" HandleID="k8s-pod-network.5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Workload="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.857 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0", GenerateName:"calico-apiserver-58f7d5ddc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3337cb5-1d9b-4c57-aeca-598477706d1c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7d5ddc9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58f7d5ddc9-qfwms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0adaf79e4d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.858 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.858 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0adaf79e4d2 ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.883 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.885 [INFO][4566] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0", GenerateName:"calico-apiserver-58f7d5ddc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3337cb5-1d9b-4c57-aeca-598477706d1c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7d5ddc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9", Pod:"calico-apiserver-58f7d5ddc9-qfwms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0adaf79e4d2", MAC:"92:70:a6:c1:5e:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:59:47.912126 containerd[1504]: 2025-08-12 23:59:47.899 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" Namespace="calico-apiserver" Pod="calico-apiserver-58f7d5ddc9-qfwms" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f7d5ddc9--qfwms-eth0" Aug 12 23:59:47.915134 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:47.942623 systemd[1]: Started cri-containerd-f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834.scope - libcontainer container f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834. Aug 12 23:59:47.972908 containerd[1504]: time="2025-08-12T23:59:47.972863729Z" level=info msg="connecting to shim 5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9" address="unix:///run/containerd/s/e36a7c324e8c1621645dd2d4ebdff0426dcf2e302d6585808d4e6bf814da4367" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:59:47.975233 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:47.979844 systemd-networkd[1427]: calia2a52791c86: Gained IPv6LL Aug 12 23:59:48.015142 containerd[1504]: time="2025-08-12T23:59:48.015031488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ndrj5,Uid:147f626b-1fdc-432d-8d5b-31ba3a08beb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834\"" Aug 12 23:59:48.022903 systemd[1]: Started cri-containerd-5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9.scope - libcontainer container 5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9. 
Aug 12 23:59:48.026346 containerd[1504]: time="2025-08-12T23:59:48.025762460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hb2f2,Uid:f51ad2a9-7d82-464f-b860-89072ace43dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0\"" Aug 12 23:59:48.027943 kubelet[2620]: E0812 23:59:48.027597 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:48.033028 containerd[1504]: time="2025-08-12T23:59:48.032921389Z" level=info msg="CreateContainer within sandbox \"e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:59:48.051915 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:48.064468 containerd[1504]: time="2025-08-12T23:59:48.064412769Z" level=info msg="Container 334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:48.064884 containerd[1504]: time="2025-08-12T23:59:48.064845924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:48.066868 containerd[1504]: time="2025-08-12T23:59:48.066825161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 12 23:59:48.080228 containerd[1504]: time="2025-08-12T23:59:48.080165700Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:48.091173 containerd[1504]: time="2025-08-12T23:59:48.091109809Z" level=info msg="CreateContainer within sandbox 
\"e509faae040bfc587815a09c94ec910e39730721b77a14c770dab166fcf05ba0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76\"" Aug 12 23:59:48.091721 containerd[1504]: time="2025-08-12T23:59:48.091692976Z" level=info msg="StartContainer for \"334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76\"" Aug 12 23:59:48.091835 containerd[1504]: time="2025-08-12T23:59:48.091805265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7d5ddc9-qfwms,Uid:e3337cb5-1d9b-4c57-aeca-598477706d1c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9\"" Aug 12 23:59:48.092927 containerd[1504]: time="2025-08-12T23:59:48.092870709Z" level=info msg="connecting to shim 334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76" address="unix:///run/containerd/s/3a5c9a178d1332890bd7035a11194dbab93d2c5fc39c6f03ac533f116d5eaae9" protocol=ttrpc version=3 Aug 12 23:59:48.096160 containerd[1504]: time="2025-08-12T23:59:48.096119287Z" level=info msg="CreateContainer within sandbox \"5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:59:48.096748 containerd[1504]: time="2025-08-12T23:59:48.096712934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:48.097399 containerd[1504]: time="2025-08-12T23:59:48.097248417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.451526201s" Aug 12 23:59:48.097399 containerd[1504]: time="2025-08-12T23:59:48.097288340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:59:48.098370 containerd[1504]: time="2025-08-12T23:59:48.098313221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:59:48.100065 containerd[1504]: time="2025-08-12T23:59:48.100014596Z" level=info msg="CreateContainer within sandbox \"e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:59:48.120901 systemd[1]: Started cri-containerd-334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76.scope - libcontainer container 334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76. 
Aug 12 23:59:48.128808 containerd[1504]: time="2025-08-12T23:59:48.128727757Z" level=info msg="Container 0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:48.155300 containerd[1504]: time="2025-08-12T23:59:48.150801189Z" level=info msg="CreateContainer within sandbox \"5ef4c0fa0f1eff6b667b6835e39b66cfedcd023212ca45d0fced70186ad10eb9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949\"" Aug 12 23:59:48.155300 containerd[1504]: time="2025-08-12T23:59:48.151692540Z" level=info msg="StartContainer for \"0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949\"" Aug 12 23:59:48.155300 containerd[1504]: time="2025-08-12T23:59:48.153128654Z" level=info msg="connecting to shim 0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949" address="unix:///run/containerd/s/e36a7c324e8c1621645dd2d4ebdff0426dcf2e302d6585808d4e6bf814da4367" protocol=ttrpc version=3 Aug 12 23:59:48.156348 containerd[1504]: time="2025-08-12T23:59:48.155716860Z" level=info msg="Container 73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:48.162705 containerd[1504]: time="2025-08-12T23:59:48.162495958Z" level=info msg="StartContainer for \"334977da22d21cbdf3dc77e7377b94ac6c8a76b622fd889d3e2cc3a680350c76\" returns successfully" Aug 12 23:59:48.171548 containerd[1504]: time="2025-08-12T23:59:48.171493752Z" level=info msg="CreateContainer within sandbox \"e1f7bac90e2621013e1a4a2f3fe3227a4ca22d723141a0d6f5b2d8aac8911183\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff\"" Aug 12 23:59:48.172228 containerd[1504]: time="2025-08-12T23:59:48.172198608Z" level=info msg="StartContainer for \"73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff\"" Aug 
12 23:59:48.174132 containerd[1504]: time="2025-08-12T23:59:48.173879262Z" level=info msg="connecting to shim 73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff" address="unix:///run/containerd/s/7182a74af33112c9003d8b57981ae9107b2c9f5fcab27ecf3c77b4fc99ca7242" protocol=ttrpc version=3 Aug 12 23:59:48.187257 systemd[1]: Started cri-containerd-0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949.scope - libcontainer container 0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949. Aug 12 23:59:48.200971 systemd[1]: Started cri-containerd-73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff.scope - libcontainer container 73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff. Aug 12 23:59:48.305242 containerd[1504]: time="2025-08-12T23:59:48.305045277Z" level=info msg="StartContainer for \"73fd2f265cb364782a6751f63c729108add2dd8a85dada6860ce57175c6275ff\" returns successfully" Aug 12 23:59:48.307042 containerd[1504]: time="2025-08-12T23:59:48.307002353Z" level=info msg="StartContainer for \"0c8f27de72d281e1335bbc5f989d8ab4dbdabeb49ea9b32cf57cf82a2ec53949\" returns successfully" Aug 12 23:59:48.368844 systemd-networkd[1427]: calid4e480fae13: Gained IPv6LL Aug 12 23:59:48.530078 kubelet[2620]: E0812 23:59:48.529928 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:48.531837 kubelet[2620]: E0812 23:59:48.530567 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:48.567038 kubelet[2620]: I0812 23:59:48.565543 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-29pmx" podStartSLOduration=26.112200255 podStartE2EDuration="28.565036162s" podCreationTimestamp="2025-08-12 
23:59:20 +0000 UTC" firstStartedPulling="2025-08-12 23:59:45.645367626 +0000 UTC m=+42.517944688" lastFinishedPulling="2025-08-12 23:59:48.098203413 +0000 UTC m=+44.970780595" observedRunningTime="2025-08-12 23:59:48.549214386 +0000 UTC m=+45.421791608" watchObservedRunningTime="2025-08-12 23:59:48.565036162 +0000 UTC m=+45.437613224" Aug 12 23:59:48.567038 kubelet[2620]: I0812 23:59:48.566810 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f7d5ddc9-qfwms" podStartSLOduration=28.566793182 podStartE2EDuration="28.566793182s" podCreationTimestamp="2025-08-12 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:48.565850427 +0000 UTC m=+45.438427529" watchObservedRunningTime="2025-08-12 23:59:48.566793182 +0000 UTC m=+45.439370244" Aug 12 23:59:48.811817 systemd-networkd[1427]: calic9c3e9ab75c: Gained IPv6LL Aug 12 23:59:48.875914 systemd-networkd[1427]: calid1d72157385: Gained IPv6LL Aug 12 23:59:49.068132 systemd-networkd[1427]: cali0adaf79e4d2: Gained IPv6LL Aug 12 23:59:49.285977 containerd[1504]: time="2025-08-12T23:59:49.285922874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:49.288711 containerd[1504]: time="2025-08-12T23:59:49.288667887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 12 23:59:49.290442 containerd[1504]: time="2025-08-12T23:59:49.290401302Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:49.294821 containerd[1504]: time="2025-08-12T23:59:49.294770081Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:49.295764 containerd[1504]: time="2025-08-12T23:59:49.295677312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.197308446s" Aug 12 23:59:49.295927 containerd[1504]: time="2025-08-12T23:59:49.295859406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 12 23:59:49.297097 containerd[1504]: time="2025-08-12T23:59:49.297064100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:59:49.298647 containerd[1504]: time="2025-08-12T23:59:49.298610820Z" level=info msg="CreateContainer within sandbox \"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:59:49.326236 containerd[1504]: time="2025-08-12T23:59:49.324907943Z" level=info msg="Container 4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:49.341935 containerd[1504]: time="2025-08-12T23:59:49.341891022Z" level=info msg="CreateContainer within sandbox \"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465\"" Aug 12 23:59:49.342793 containerd[1504]: time="2025-08-12T23:59:49.342722247Z" level=info msg="StartContainer for 
\"4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465\"" Aug 12 23:59:49.344640 containerd[1504]: time="2025-08-12T23:59:49.344602273Z" level=info msg="connecting to shim 4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465" address="unix:///run/containerd/s/6d27311c8fc5b497c53242f381d8a1f7bcc806e99edcfed4c511a2d7910a4cb2" protocol=ttrpc version=3 Aug 12 23:59:49.373904 systemd[1]: Started cri-containerd-4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465.scope - libcontainer container 4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465. Aug 12 23:59:49.432696 containerd[1504]: time="2025-08-12T23:59:49.432561066Z" level=info msg="StartContainer for \"4236f6df4dc5d0d425d3544efa3850a4c920028cbed1f692d64b41865b684465\" returns successfully" Aug 12 23:59:49.537325 kubelet[2620]: E0812 23:59:49.537223 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:49.537918 kubelet[2620]: E0812 23:59:49.537324 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:49.541684 kubelet[2620]: I0812 23:59:49.539847 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:59:49.541684 kubelet[2620]: I0812 23:59:49.540313 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:59:50.539315 kubelet[2620]: E0812 23:59:50.539259 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:50.539315 kubelet[2620]: E0812 23:59:50.539279 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:50.991870 containerd[1504]: time="2025-08-12T23:59:50.991807672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:50.992900 containerd[1504]: time="2025-08-12T23:59:50.992698740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 12 23:59:50.993793 containerd[1504]: time="2025-08-12T23:59:50.993751020Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:50.997522 containerd[1504]: time="2025-08-12T23:59:50.997482704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:50.998145 containerd[1504]: time="2025-08-12T23:59:50.998068148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.700967726s" Aug 12 23:59:50.998145 containerd[1504]: time="2025-08-12T23:59:50.998102311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:59:50.999708 containerd[1504]: time="2025-08-12T23:59:50.999614906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 12 23:59:51.016424 containerd[1504]: time="2025-08-12T23:59:51.015418046Z" level=info 
msg="CreateContainer within sandbox \"84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:59:51.027519 containerd[1504]: time="2025-08-12T23:59:51.025323904Z" level=info msg="Container 74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:51.041742 containerd[1504]: time="2025-08-12T23:59:51.041694725Z" level=info msg="CreateContainer within sandbox \"84bc7838a3167b702f5f4118697e7707357cb55e817116fcca0e79c0e694a404\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\"" Aug 12 23:59:51.044049 containerd[1504]: time="2025-08-12T23:59:51.043994656Z" level=info msg="StartContainer for \"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\"" Aug 12 23:59:51.045801 containerd[1504]: time="2025-08-12T23:59:51.045761308Z" level=info msg="connecting to shim 74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387" address="unix:///run/containerd/s/649f9aaf054c759af2feaf70c415e8d08ab6e0c0a56ed870db35d85038a58b49" protocol=ttrpc version=3 Aug 12 23:59:51.074862 systemd[1]: Started cri-containerd-74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387.scope - libcontainer container 74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387. 
Aug 12 23:59:51.130829 containerd[1504]: time="2025-08-12T23:59:51.130778206Z" level=info msg="StartContainer for \"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\" returns successfully" Aug 12 23:59:51.582348 kubelet[2620]: I0812 23:59:51.581721 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hb2f2" podStartSLOduration=41.581258951 podStartE2EDuration="41.581258951s" podCreationTimestamp="2025-08-12 23:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:48.608154986 +0000 UTC m=+45.480732048" watchObservedRunningTime="2025-08-12 23:59:51.581258951 +0000 UTC m=+48.453836093" Aug 12 23:59:51.582348 kubelet[2620]: I0812 23:59:51.582039 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d68f5454c-8df5x" podStartSLOduration=23.293356136 podStartE2EDuration="27.582033049s" podCreationTimestamp="2025-08-12 23:59:24 +0000 UTC" firstStartedPulling="2025-08-12 23:59:46.710533722 +0000 UTC m=+43.583110744" lastFinishedPulling="2025-08-12 23:59:50.999210595 +0000 UTC m=+47.871787657" observedRunningTime="2025-08-12 23:59:51.581871597 +0000 UTC m=+48.454448659" watchObservedRunningTime="2025-08-12 23:59:51.582033049 +0000 UTC m=+48.454610111" Aug 12 23:59:51.600846 containerd[1504]: time="2025-08-12T23:59:51.600808849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\" id:\"0615b17e7aaeb4cc1c3ecbe1a9326a13a144b637fa8a148f3659c248e0681358\" pid:5005 exited_at:{seconds:1755043191 nanos:600356055}" Aug 12 23:59:51.677100 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:42732.service - OpenSSH per-connection server daemon (10.0.0.1:42732). 
Aug 12 23:59:51.782393 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 42732 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:59:51.785474 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:51.791113 systemd-logind[1481]: New session 9 of user core. Aug 12 23:59:51.799963 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:59:52.094436 sshd[5019]: Connection closed by 10.0.0.1 port 42732 Aug 12 23:59:52.095464 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:52.099142 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:42732.service: Deactivated successfully. Aug 12 23:59:52.106914 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:59:52.114189 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:59:52.119069 systemd-logind[1481]: Removed session 9. Aug 12 23:59:52.709813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747823918.mount: Deactivated successfully. 
Aug 12 23:59:53.132382 containerd[1504]: time="2025-08-12T23:59:53.132266764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:53.136469 containerd[1504]: time="2025-08-12T23:59:53.136427263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 12 23:59:53.137472 containerd[1504]: time="2025-08-12T23:59:53.137444296Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:53.139620 containerd[1504]: time="2025-08-12T23:59:53.139579929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:53.140621 containerd[1504]: time="2025-08-12T23:59:53.140582841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.140932292s" Aug 12 23:59:53.140621 containerd[1504]: time="2025-08-12T23:59:53.140619484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 12 23:59:53.141859 containerd[1504]: time="2025-08-12T23:59:53.141807169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:59:53.144161 containerd[1504]: time="2025-08-12T23:59:53.144130896Z" level=info msg="CreateContainer within sandbox 
\"f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 12 23:59:53.152008 containerd[1504]: time="2025-08-12T23:59:53.150678846Z" level=info msg="Container 574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:53.161404 containerd[1504]: time="2025-08-12T23:59:53.161353052Z" level=info msg="CreateContainer within sandbox \"f9de069f7fb03d7ab8fe1777a7edc87fed45256f31a66138f07567b38e40f834\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\"" Aug 12 23:59:53.161883 containerd[1504]: time="2025-08-12T23:59:53.161856728Z" level=info msg="StartContainer for \"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\"" Aug 12 23:59:53.163552 containerd[1504]: time="2025-08-12T23:59:53.163510287Z" level=info msg="connecting to shim 574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61" address="unix:///run/containerd/s/1364158501c66369b85c4dfa39e46dcb99b3e2f8caad0dded8028fb1cb30bfbf" protocol=ttrpc version=3 Aug 12 23:59:53.197878 systemd[1]: Started cri-containerd-574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61.scope - libcontainer container 574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61. 
Aug 12 23:59:53.301453 containerd[1504]: time="2025-08-12T23:59:53.301409669Z" level=info msg="StartContainer for \"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" returns successfully" Aug 12 23:59:53.565679 kubelet[2620]: I0812 23:59:53.565353 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ndrj5" podStartSLOduration=25.440035694 podStartE2EDuration="30.565333459s" podCreationTimestamp="2025-08-12 23:59:23 +0000 UTC" firstStartedPulling="2025-08-12 23:59:48.016302229 +0000 UTC m=+44.888879291" lastFinishedPulling="2025-08-12 23:59:53.141599994 +0000 UTC m=+50.014177056" observedRunningTime="2025-08-12 23:59:53.564711254 +0000 UTC m=+50.437288316" watchObservedRunningTime="2025-08-12 23:59:53.565333459 +0000 UTC m=+50.437910521" Aug 12 23:59:54.287960 containerd[1504]: time="2025-08-12T23:59:54.287905104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:54.289140 containerd[1504]: time="2025-08-12T23:59:54.289108069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 12 23:59:54.290214 containerd[1504]: time="2025-08-12T23:59:54.290187625Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:54.292753 containerd[1504]: time="2025-08-12T23:59:54.292486188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:54.293276 containerd[1504]: time="2025-08-12T23:59:54.293236961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id 
\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.151396509s" Aug 12 23:59:54.293490 containerd[1504]: time="2025-08-12T23:59:54.293275523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 12 23:59:54.297871 containerd[1504]: time="2025-08-12T23:59:54.297838325Z" level=info msg="CreateContainer within sandbox \"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:59:54.307338 containerd[1504]: time="2025-08-12T23:59:54.306626025Z" level=info msg="Container ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:59:54.318046 containerd[1504]: time="2025-08-12T23:59:54.317997788Z" level=info msg="CreateContainer within sandbox \"88e5678cbd59436e5f99aa643c8c943132e819bdfe4225d380d277acd1ca8ec7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720\"" Aug 12 23:59:54.319079 containerd[1504]: time="2025-08-12T23:59:54.319043461Z" level=info msg="StartContainer for \"ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720\"" Aug 12 23:59:54.329662 containerd[1504]: time="2025-08-12T23:59:54.329600366Z" level=info msg="connecting to shim ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720" address="unix:///run/containerd/s/6d27311c8fc5b497c53242f381d8a1f7bcc806e99edcfed4c511a2d7910a4cb2" protocol=ttrpc version=3 Aug 12 23:59:54.360818 systemd[1]: Started 
cri-containerd-ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720.scope - libcontainer container ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720. Aug 12 23:59:54.410565 containerd[1504]: time="2025-08-12T23:59:54.410497114Z" level=info msg="StartContainer for \"ba9bc036f58b105e7b833cf5ab362aecd57afe7f587bb5fc0f56bf5070c2d720\" returns successfully" Aug 12 23:59:54.696521 containerd[1504]: time="2025-08-12T23:59:54.696472171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" id:\"15d4b67e55632759a08cc951e8ef4517c597dd230b9d5c96c66aa1712f0890fa\" pid:5139 exit_status:1 exited_at:{seconds:1755043194 nanos:688933679}" Aug 12 23:59:55.334751 kubelet[2620]: I0812 23:59:55.334582 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:59:55.334751 kubelet[2620]: I0812 23:59:55.334643 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:59:55.635349 containerd[1504]: time="2025-08-12T23:59:55.635152098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" id:\"81da52b357019a8bda08ea3b30800e80e306b51ff02325cf0ca3b4bf33c59d3f\" pid:5163 exit_status:1 exited_at:{seconds:1755043195 nanos:634825195}" Aug 12 23:59:57.109018 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756). 
Aug 12 23:59:57.172958 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:59:57.175128 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:57.181397 systemd-logind[1481]: New session 10 of user core. Aug 12 23:59:57.197999 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:59:57.406518 sshd[5177]: Connection closed by 10.0.0.1 port 57756 Aug 12 23:59:57.407484 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:57.426270 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:57756.service: Deactivated successfully. Aug 12 23:59:57.429112 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:59:57.429851 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:59:57.433740 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:57764.service - OpenSSH per-connection server daemon (10.0.0.1:57764). Aug 12 23:59:57.434423 systemd-logind[1481]: Removed session 10. Aug 12 23:59:57.485505 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 57764 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:59:57.487035 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:57.492528 systemd-logind[1481]: New session 11 of user core. Aug 12 23:59:57.500865 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 12 23:59:57.727430 sshd[5194]: Connection closed by 10.0.0.1 port 57764 Aug 12 23:59:57.728276 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:57.738748 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:57764.service: Deactivated successfully. Aug 12 23:59:57.740741 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:59:57.744433 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. 
Aug 12 23:59:57.746989 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:57770.service - OpenSSH per-connection server daemon (10.0.0.1:57770). Aug 12 23:59:57.748940 systemd-logind[1481]: Removed session 11. Aug 12 23:59:57.822839 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 57770 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 12 23:59:57.824338 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:57.829862 systemd-logind[1481]: New session 12 of user core. Aug 12 23:59:57.837897 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:59:58.056176 sshd[5209]: Connection closed by 10.0.0.1 port 57770 Aug 12 23:59:58.056844 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:58.061367 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:57770.service: Deactivated successfully. Aug 12 23:59:58.063368 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:59:58.066395 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:59:58.068463 systemd-logind[1481]: Removed session 12. 
Aug 13 00:00:00.068734 containerd[1504]: time="2025-08-13T00:00:00.068665488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\" id:\"ce3580870b4f9940af8fbbfdb9bba9cc8339fb2c6adbaf321ac977b05019c34e\" pid:5245 exit_status:1 exited_at:{seconds:1755043200 nanos:68011365}" Aug 13 00:00:01.114499 containerd[1504]: time="2025-08-13T00:00:01.114441376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\" id:\"c1589741f185b2555f6182ae1a3c2cd68d17f9173dffb4ab27137ff722bf936c\" pid:5271 exited_at:{seconds:1755043201 nanos:114095394}" Aug 13 00:00:03.076374 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:57472.service - OpenSSH per-connection server daemon (10.0.0.1:57472). Aug 13 00:00:03.148511 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 57472 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:03.151116 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:03.156444 systemd-logind[1481]: New session 13 of user core. Aug 13 00:00:03.166876 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:00:03.346443 containerd[1504]: time="2025-08-13T00:00:03.346320816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\" id:\"9d3f5936065ef98211735aec12ab7e8c7b6c1323c812f8934d92cd815837881a\" pid:5309 exited_at:{seconds:1755043203 nanos:346013877}" Aug 13 00:00:03.374360 sshd[5284]: Connection closed by 10.0.0.1 port 57472 Aug 13 00:00:03.374936 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:03.379184 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:57472.service: Deactivated successfully. Aug 13 00:00:03.381268 systemd[1]: session-13.scope: Deactivated successfully. 
Aug 13 00:00:03.382196 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:00:03.384129 systemd-logind[1481]: Removed session 13. Aug 13 00:00:03.640558 containerd[1504]: time="2025-08-13T00:00:03.640347896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" id:\"182aa0e48a29b9919b515d2181d29f9fe867661aeadf5ab4398ce9e747929eb5\" pid:5334 exited_at:{seconds:1755043203 nanos:640027956}" Aug 13 00:00:03.673000 kubelet[2620]: I0813 00:00:03.672928 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8g89f" podStartSLOduration=32.040872999 podStartE2EDuration="40.672909205s" podCreationTimestamp="2025-08-12 23:59:23 +0000 UTC" firstStartedPulling="2025-08-12 23:59:45.662439642 +0000 UTC m=+42.535016704" lastFinishedPulling="2025-08-12 23:59:54.294475888 +0000 UTC m=+51.167052910" observedRunningTime="2025-08-12 23:59:54.587848067 +0000 UTC m=+51.460425129" watchObservedRunningTime="2025-08-13 00:00:03.672909205 +0000 UTC m=+60.545486267" Aug 13 00:00:08.390120 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:57478.service - OpenSSH per-connection server daemon (10.0.0.1:57478). Aug 13 00:00:08.475456 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 57478 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:08.478142 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:08.484343 systemd-logind[1481]: New session 14 of user core. Aug 13 00:00:08.496957 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:00:08.717809 sshd[5353]: Connection closed by 10.0.0.1 port 57478 Aug 13 00:00:08.718198 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:08.722227 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. 
Aug 13 00:00:08.722476 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:57478.service: Deactivated successfully. Aug 13 00:00:08.724827 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:00:08.727830 systemd-logind[1481]: Removed session 14. Aug 13 00:00:13.737123 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:53300.service - OpenSSH per-connection server daemon (10.0.0.1:53300). Aug 13 00:00:13.796528 sshd[5370]: Accepted publickey for core from 10.0.0.1 port 53300 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:13.799204 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:13.815276 systemd-logind[1481]: New session 15 of user core. Aug 13 00:00:13.823957 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:00:14.005857 sshd[5372]: Connection closed by 10.0.0.1 port 53300 Aug 13 00:00:14.005450 sshd-session[5370]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:14.010344 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:53300.service: Deactivated successfully. Aug 13 00:00:14.012602 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:00:14.014811 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:00:14.017216 systemd-logind[1481]: Removed session 15. Aug 13 00:00:15.406063 kubelet[2620]: I0813 00:00:15.405503 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:00:19.027024 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). Aug 13 00:00:19.111475 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:19.113297 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:19.124584 systemd-logind[1481]: New session 16 of user core. 
Aug 13 00:00:19.132941 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:00:19.229329 kubelet[2620]: E0813 00:00:19.229283 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:19.378553 sshd[5393]: Connection closed by 10.0.0.1 port 53312 Aug 13 00:00:19.379274 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:19.384192 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:53312.service: Deactivated successfully. Aug 13 00:00:19.386359 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:00:19.387187 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:00:19.388763 systemd-logind[1481]: Removed session 16. Aug 13 00:00:23.692334 kubelet[2620]: I0813 00:00:23.692298 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:00:24.221722 kubelet[2620]: E0813 00:00:24.221632 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:24.391543 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:46230.service - OpenSSH per-connection server daemon (10.0.0.1:46230). Aug 13 00:00:24.457860 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:24.460210 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:24.470230 systemd-logind[1481]: New session 17 of user core. Aug 13 00:00:24.483849 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 00:00:24.658717 sshd[5416]: Connection closed by 10.0.0.1 port 46230 Aug 13 00:00:24.659888 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:24.670444 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:46230.service: Deactivated successfully. Aug 13 00:00:24.675324 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:00:24.676614 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:00:24.681754 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:46240.service - OpenSSH per-connection server daemon (10.0.0.1:46240). Aug 13 00:00:24.682387 systemd-logind[1481]: Removed session 17. Aug 13 00:00:24.742394 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:24.743895 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:24.749353 systemd-logind[1481]: New session 18 of user core. Aug 13 00:00:24.753839 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:00:25.025786 sshd[5431]: Connection closed by 10.0.0.1 port 46240 Aug 13 00:00:25.025992 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:25.039383 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:46240.service: Deactivated successfully. Aug 13 00:00:25.043543 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:00:25.045851 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:00:25.051677 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:46248.service - OpenSSH per-connection server daemon (10.0.0.1:46248). Aug 13 00:00:25.053800 systemd-logind[1481]: Removed session 18. 
Aug 13 00:00:25.120759 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:25.122641 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:25.131209 systemd-logind[1481]: New session 19 of user core. Aug 13 00:00:25.138302 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:00:26.898065 containerd[1504]: time="2025-08-13T00:00:26.896900947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" id:\"ee7e5c5fe27f77772e59608deefd6a939588097c2c82a743d8ae47f5c61e3ce0\" pid:5467 exited_at:{seconds:1755043226 nanos:896233006}" Aug 13 00:00:27.193399 sshd[5444]: Connection closed by 10.0.0.1 port 46248 Aug 13 00:00:27.193822 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:27.207426 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:46264.service - OpenSSH per-connection server daemon (10.0.0.1:46264). Aug 13 00:00:27.208202 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:46248.service: Deactivated successfully. Aug 13 00:00:27.211061 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:00:27.212618 systemd[1]: session-19.scope: Consumed 591ms CPU time, 73.3M memory peak. Aug 13 00:00:27.217045 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:00:27.220321 systemd-logind[1481]: Removed session 19. 
Aug 13 00:00:27.222457 kubelet[2620]: E0813 00:00:27.222091 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:27.281998 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 46264 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:27.283625 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:27.292847 systemd-logind[1481]: New session 20 of user core. Aug 13 00:00:27.300910 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:00:27.765179 sshd[5488]: Connection closed by 10.0.0.1 port 46264 Aug 13 00:00:27.774960 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:27.792504 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:46264.service: Deactivated successfully. Aug 13 00:00:27.797057 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:00:27.800529 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:00:27.811055 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:46268.service - OpenSSH per-connection server daemon (10.0.0.1:46268). Aug 13 00:00:27.812452 systemd-logind[1481]: Removed session 20. Aug 13 00:00:27.895873 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 46268 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:27.898429 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:27.906943 systemd-logind[1481]: New session 21 of user core. Aug 13 00:00:27.916904 systemd[1]: Started session-21.scope - Session 21 of User core. 
Aug 13 00:00:28.147594 sshd[5502]: Connection closed by 10.0.0.1 port 46268 Aug 13 00:00:28.148020 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:28.155648 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:46268.service: Deactivated successfully. Aug 13 00:00:28.158043 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:00:28.159190 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:00:28.160870 systemd-logind[1481]: Removed session 21. Aug 13 00:00:30.073294 containerd[1504]: time="2025-08-13T00:00:30.073217134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5139e4b9e09fc0c787009f4bb14486c03d010280ecc0b96ea5ae0863784ecdc\" id:\"e2ad79f434addf3078417d39d02464fda02245911793c64135ef500418ff74a1\" pid:5528 exited_at:{seconds:1755043230 nanos:72834942}" Aug 13 00:00:30.221320 kubelet[2620]: E0813 00:00:30.221287 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:33.162051 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:58408.service - OpenSSH per-connection server daemon (10.0.0.1:58408). Aug 13 00:00:33.233274 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 58408 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:33.237597 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:33.249057 systemd-logind[1481]: New session 22 of user core. Aug 13 00:00:33.255015 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 13 00:00:33.352261 containerd[1504]: time="2025-08-13T00:00:33.352209039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74d27828570affcf4bdebeb92fc0dd65a69cda8780ac8772d5ac70d480b5b387\" id:\"6c85f958e5f8de1d3961a7a2cede3c61796213d0efda90c2b443f18c710f2262\" pid:5559 exited_at:{seconds:1755043233 nanos:350851940}" Aug 13 00:00:33.489299 sshd[5547]: Connection closed by 10.0.0.1 port 58408 Aug 13 00:00:33.491233 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:33.502148 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:00:33.503269 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:58408.service: Deactivated successfully. Aug 13 00:00:33.510231 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:00:33.514496 systemd-logind[1481]: Removed session 22. Aug 13 00:00:33.683528 containerd[1504]: time="2025-08-13T00:00:33.683487179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"574ddc17470c645f3ef86ed4695e1aeb1ebff915be0c55419970a6a6dfa1bd61\" id:\"ecdef7ca8785fe1428b589c05d53025b5c445debca36aaf7481e5fac0ab1678b\" pid:5592 exited_at:{seconds:1755043233 nanos:683092345}" Aug 13 00:00:38.506350 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:58412.service - OpenSSH per-connection server daemon (10.0.0.1:58412). Aug 13 00:00:38.566389 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 58412 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU Aug 13 00:00:38.567487 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:38.573440 systemd-logind[1481]: New session 23 of user core. Aug 13 00:00:38.591443 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 00:00:38.751724 sshd[5606]: Connection closed by 10.0.0.1 port 58412
Aug 13 00:00:38.752158 sshd-session[5604]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:38.759688 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:00:38.760027 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:58412.service: Deactivated successfully.
Aug 13 00:00:38.764985 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:00:38.768136 systemd-logind[1481]: Removed session 23.
Aug 13 00:00:43.772648 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:44416.service - OpenSSH per-connection server daemon (10.0.0.1:44416).
Aug 13 00:00:43.862291 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 44416 ssh2: RSA SHA256:Y8XhXUp+e0cOxNBsLQ9X5uWw2r4VA0fiDKDQJi7Y+pU
Aug 13 00:00:43.865526 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:43.871774 systemd-logind[1481]: New session 24 of user core.
Aug 13 00:00:43.878058 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:00:44.104563 sshd[5624]: Connection closed by 10.0.0.1 port 44416
Aug 13 00:00:44.105327 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:44.114097 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:44416.service: Deactivated successfully.
Aug 13 00:00:44.118790 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:00:44.120450 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:00:44.123293 systemd-logind[1481]: Removed session 24.
Aug 13 00:00:44.480681 update_engine[1484]: I20250813 00:00:44.479123 1484 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 00:00:44.480681 update_engine[1484]: I20250813 00:00:44.479714 1484 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 00:00:44.480681 update_engine[1484]: I20250813 00:00:44.480302 1484 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 00:00:44.481149 update_engine[1484]: I20250813 00:00:44.481105 1484 omaha_request_params.cc:62] Current group set to beta
Aug 13 00:00:44.481811 update_engine[1484]: I20250813 00:00:44.481781 1484 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 00:00:44.481811 update_engine[1484]: I20250813 00:00:44.481797 1484 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 00:00:44.481880 update_engine[1484]: I20250813 00:00:44.481819 1484 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:00:44.482257 update_engine[1484]: I20250813 00:00:44.482230 1484 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 00:00:44.482409 update_engine[1484]: I20250813 00:00:44.482386 1484 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:00:44.482409 update_engine[1484]: I20250813 00:00:44.482399 1484 omaha_request_action.cc:272] Request:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]:
Aug 13 00:00:44.482409 update_engine[1484]: I20250813 00:00:44.482407 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:00:44.501672 update_engine[1484]: I20250813 00:00:44.497403 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:00:44.501672 update_engine[1484]: I20250813 00:00:44.498205 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:00:44.507417 locksmithd[1537]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 00:00:44.542725 update_engine[1484]: E20250813 00:00:44.542593 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:00:44.542725 update_engine[1484]: I20250813 00:00:44.542732 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 1