Aug 19 00:19:06.815833 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 19 00:19:06.815853 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025
Aug 19 00:19:06.815862 kernel: KASLR enabled
Aug 19 00:19:06.815868 kernel: efi: EFI v2.7 by EDK II
Aug 19 00:19:06.815874 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Aug 19 00:19:06.815879 kernel: random: crng init done
Aug 19 00:19:06.815886 kernel: secureboot: Secure boot disabled
Aug 19 00:19:06.815891 kernel: ACPI: Early table checksum verification disabled
Aug 19 00:19:06.815897 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Aug 19 00:19:06.815904 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 19 00:19:06.815910 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815916 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815922 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815928 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815935 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815943 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815949 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815956 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815962 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:19:06.815968 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 19 00:19:06.815974 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 19 00:19:06.815980 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:19:06.815986 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Aug 19 00:19:06.815992 kernel: Zone ranges:
Aug 19 00:19:06.815998 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:19:06.816005 kernel:   DMA32    empty
Aug 19 00:19:06.816011 kernel:   Normal   empty
Aug 19 00:19:06.816017 kernel:   Device   empty
Aug 19 00:19:06.816023 kernel: Movable zone start for each node
Aug 19 00:19:06.816029 kernel: Early memory node ranges
Aug 19 00:19:06.816035 kernel:   node   0: [mem 0x0000000040000000-0x00000000db81ffff]
Aug 19 00:19:06.816041 kernel:   node   0: [mem 0x00000000db820000-0x00000000db82ffff]
Aug 19 00:19:06.816047 kernel:   node   0: [mem 0x00000000db830000-0x00000000dc09ffff]
Aug 19 00:19:06.816053 kernel:   node   0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Aug 19 00:19:06.816059 kernel:   node   0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Aug 19 00:19:06.816065 kernel:   node   0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Aug 19 00:19:06.816071 kernel:   node   0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Aug 19 00:19:06.816078 kernel:   node   0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Aug 19 00:19:06.816084 kernel:   node   0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Aug 19 00:19:06.816090 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 19 00:19:06.816099 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 19 00:19:06.816105 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 19 00:19:06.816112 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 19 00:19:06.816119 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:19:06.816126 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 19 00:19:06.816132 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Aug 19 00:19:06.816139 kernel: psci: probing for conduit method from ACPI.
Aug 19 00:19:06.816145 kernel: psci: PSCIv1.1 detected in firmware.
Aug 19 00:19:06.816151 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 19 00:19:06.816162 kernel: psci: Trusted OS migration not required
Aug 19 00:19:06.816168 kernel: psci: SMC Calling Convention v1.1
Aug 19 00:19:06.816175 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 19 00:19:06.816181 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 19 00:19:06.816189 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 19 00:19:06.816195 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 19 00:19:06.816202 kernel: Detected PIPT I-cache on CPU0
Aug 19 00:19:06.816208 kernel: CPU features: detected: GIC system register CPU interface
Aug 19 00:19:06.816214 kernel: CPU features: detected: Spectre-v4
Aug 19 00:19:06.816221 kernel: CPU features: detected: Spectre-BHB
Aug 19 00:19:06.816227 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 19 00:19:06.816233 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 19 00:19:06.816240 kernel: CPU features: detected: ARM erratum 1418040
Aug 19 00:19:06.816246 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 19 00:19:06.816253 kernel: alternatives: applying boot alternatives
Aug 19 00:19:06.816260 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:19:06.816268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 00:19:06.816275 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 19 00:19:06.816281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 19 00:19:06.816288 kernel: Fallback order for Node 0: 0
Aug 19 00:19:06.816294 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 19 00:19:06.816300 kernel: Policy zone: DMA
Aug 19 00:19:06.816307 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 00:19:06.816314 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 19 00:19:06.816320 kernel: software IO TLB: area num 4.
Aug 19 00:19:06.816326 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 19 00:19:06.816333 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Aug 19 00:19:06.816340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 19 00:19:06.816347 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 00:19:06.816354 kernel: rcu: 	RCU event tracing is enabled.
Aug 19 00:19:06.816360 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 19 00:19:06.816367 kernel: 	Trampoline variant of Tasks RCU enabled.
Aug 19 00:19:06.816373 kernel: 	Tracing variant of Tasks RCU enabled.
Aug 19 00:19:06.816380 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 00:19:06.816386 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 19 00:19:06.816393 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:19:06.816399 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:19:06.816411 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 19 00:19:06.816420 kernel: GICv3: 256 SPIs implemented
Aug 19 00:19:06.816427 kernel: GICv3: 0 Extended SPIs implemented
Aug 19 00:19:06.816433 kernel: Root IRQ handler: gic_handle_irq
Aug 19 00:19:06.816439 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 19 00:19:06.816446 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 19 00:19:06.816452 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 19 00:19:06.816458 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 19 00:19:06.816465 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 19 00:19:06.816472 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 19 00:19:06.816478 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 19 00:19:06.816484 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 19 00:19:06.816491 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 00:19:06.816499 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:19:06.816505 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 19 00:19:06.816512 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 19 00:19:06.816519 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 19 00:19:06.816525 kernel: arm-pv: using stolen time PV
Aug 19 00:19:06.816532 kernel: Console: colour dummy device 80x25
Aug 19 00:19:06.816539 kernel: ACPI: Core revision 20240827
Aug 19 00:19:06.816546 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 19 00:19:06.816553 kernel: pid_max: default: 32768 minimum: 301
Aug 19 00:19:06.816559 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 00:19:06.816567 kernel: landlock: Up and running.
Aug 19 00:19:06.816574 kernel: SELinux:  Initializing.
Aug 19 00:19:06.816580 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:19:06.816587 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:19:06.816594 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 00:19:06.816601 kernel: rcu: 	Max phase no-delay instances is 400.
Aug 19 00:19:06.816607 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 00:19:06.816614 kernel: Remapping and enabling EFI services.
Aug 19 00:19:06.816621 kernel: smp: Bringing up secondary CPUs ...
Aug 19 00:19:06.816633 kernel: Detected PIPT I-cache on CPU1
Aug 19 00:19:06.816640 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 19 00:19:06.816647 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 19 00:19:06.816655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:19:06.816662 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 19 00:19:06.816670 kernel: Detected PIPT I-cache on CPU2
Aug 19 00:19:06.816677 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 19 00:19:06.816684 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 19 00:19:06.816692 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:19:06.816699 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 19 00:19:06.816706 kernel: Detected PIPT I-cache on CPU3
Aug 19 00:19:06.816713 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 19 00:19:06.816720 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 19 00:19:06.816727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:19:06.816734 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 19 00:19:06.816741 kernel: smp: Brought up 1 node, 4 CPUs
Aug 19 00:19:06.816748 kernel: SMP: Total of 4 processors activated.
Aug 19 00:19:06.816756 kernel: CPU: All CPU(s) started at EL1
Aug 19 00:19:06.816763 kernel: CPU features: detected: 32-bit EL0 Support
Aug 19 00:19:06.816779 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 19 00:19:06.816786 kernel: CPU features: detected: Common not Private translations
Aug 19 00:19:06.816808 kernel: CPU features: detected: CRC32 instructions
Aug 19 00:19:06.816815 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 19 00:19:06.816822 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 19 00:19:06.816829 kernel: CPU features: detected: LSE atomic instructions
Aug 19 00:19:06.816836 kernel: CPU features: detected: Privileged Access Never
Aug 19 00:19:06.816843 kernel: CPU features: detected: RAS Extension Support
Aug 19 00:19:06.816853 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 19 00:19:06.816860 kernel: alternatives: applying system-wide alternatives
Aug 19 00:19:06.816867 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 19 00:19:06.816875 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Aug 19 00:19:06.816882 kernel: devtmpfs: initialized
Aug 19 00:19:06.816889 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 00:19:06.816896 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 19 00:19:06.816904 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 19 00:19:06.816912 kernel: 0 pages in range for non-PLT usage
Aug 19 00:19:06.816919 kernel: 508576 pages in range for PLT usage
Aug 19 00:19:06.816926 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 00:19:06.816933 kernel: SMBIOS 3.0.0 present.
Aug 19 00:19:06.816941 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 19 00:19:06.816948 kernel: DMI: Memory slots populated: 1/1
Aug 19 00:19:06.816955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 00:19:06.816962 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 19 00:19:06.816969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 19 00:19:06.816978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 19 00:19:06.816987 kernel: audit: initializing netlink subsys (disabled)
Aug 19 00:19:06.816994 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Aug 19 00:19:06.817001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 00:19:06.817008 kernel: cpuidle: using governor menu
Aug 19 00:19:06.817015 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 19 00:19:06.817023 kernel: ASID allocator initialised with 32768 entries
Aug 19 00:19:06.817030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 00:19:06.817037 kernel: Serial: AMBA PL011 UART driver
Aug 19 00:19:06.817045 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 19 00:19:06.817052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 19 00:19:06.817060 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 19 00:19:06.817067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 19 00:19:06.817074 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 00:19:06.817081 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 00:19:06.817088 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 19 00:19:06.817095 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 19 00:19:06.817104 kernel: ACPI: Added _OSI(Module Device)
Aug 19 00:19:06.817111 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 00:19:06.817119 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 00:19:06.817126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 00:19:06.817133 kernel: ACPI: Interpreter enabled
Aug 19 00:19:06.817140 kernel: ACPI: Using GIC for interrupt routing
Aug 19 00:19:06.817149 kernel: ACPI: MCFG table detected, 1 entries
Aug 19 00:19:06.817158 kernel: ACPI: CPU0 has been hot-added
Aug 19 00:19:06.817167 kernel: ACPI: CPU1 has been hot-added
Aug 19 00:19:06.817174 kernel: ACPI: CPU2 has been hot-added
Aug 19 00:19:06.817181 kernel: ACPI: CPU3 has been hot-added
Aug 19 00:19:06.817189 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 19 00:19:06.817197 kernel: printk: legacy console [ttyAMA0] enabled
Aug 19 00:19:06.817204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 19 00:19:06.817346 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 19 00:19:06.817426 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 19 00:19:06.817490 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 19 00:19:06.817550 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 19 00:19:06.817646 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 19 00:19:06.817656 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 19 00:19:06.817664 kernel: PCI host bridge to bus 0000:00
Aug 19 00:19:06.817734 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 19 00:19:06.817811 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 19 00:19:06.817867 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 19 00:19:06.817921 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 19 00:19:06.818009 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 19 00:19:06.818082 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 19 00:19:06.818145 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 19 00:19:06.818206 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 19 00:19:06.818266 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 19 00:19:06.818326 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 19 00:19:06.818388 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 19 00:19:06.818463 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 19 00:19:06.818520 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 19 00:19:06.818575 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 19 00:19:06.818629 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 19 00:19:06.818639 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 19 00:19:06.818646 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 19 00:19:06.818653 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 19 00:19:06.818663 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 19 00:19:06.818670 kernel: iommu: Default domain type: Translated
Aug 19 00:19:06.818677 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 19 00:19:06.818685 kernel: efivars: Registered efivars operations
Aug 19 00:19:06.818692 kernel: vgaarb: loaded
Aug 19 00:19:06.818699 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 19 00:19:06.818706 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 00:19:06.818714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 00:19:06.818721 kernel: pnp: PnP ACPI init
Aug 19 00:19:06.818803 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 19 00:19:06.818814 kernel: pnp: PnP ACPI: found 1 devices
Aug 19 00:19:06.818822 kernel: NET: Registered PF_INET protocol family
Aug 19 00:19:06.818829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 19 00:19:06.818837 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 19 00:19:06.818844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 00:19:06.818852 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 19 00:19:06.818859 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 19 00:19:06.818868 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 19 00:19:06.818875 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:19:06.818883 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:19:06.818890 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 00:19:06.818898 kernel: PCI: CLS 0 bytes, default 64
Aug 19 00:19:06.818905 kernel: kvm [1]: HYP mode not available
Aug 19 00:19:06.818912 kernel: Initialise system trusted keyrings
Aug 19 00:19:06.818919 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 19 00:19:06.818926 kernel: Key type asymmetric registered
Aug 19 00:19:06.818933 kernel: Asymmetric key parser 'x509' registered
Aug 19 00:19:06.818941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 19 00:19:06.818949 kernel: io scheduler mq-deadline registered
Aug 19 00:19:06.818956 kernel: io scheduler kyber registered
Aug 19 00:19:06.818963 kernel: io scheduler bfq registered
Aug 19 00:19:06.818970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 19 00:19:06.818977 kernel: ACPI: button: Power Button [PWRB]
Aug 19 00:19:06.818985 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 19 00:19:06.819048 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 19 00:19:06.819057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 00:19:06.819066 kernel: thunder_xcv, ver 1.0
Aug 19 00:19:06.819073 kernel: thunder_bgx, ver 1.0
Aug 19 00:19:06.819081 kernel: nicpf, ver 1.0
Aug 19 00:19:06.819088 kernel: nicvf, ver 1.0
Aug 19 00:19:06.819156 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 19 00:19:06.819214 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:19:06 UTC (1755562746)
Aug 19 00:19:06.819224 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 19 00:19:06.819231 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 19 00:19:06.819240 kernel: watchdog: NMI not fully supported
Aug 19 00:19:06.819247 kernel: watchdog: Hard watchdog permanently disabled
Aug 19 00:19:06.819254 kernel: NET: Registered PF_INET6 protocol family
Aug 19 00:19:06.819261 kernel: Segment Routing with IPv6
Aug 19 00:19:06.819268 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 00:19:06.819275 kernel: NET: Registered PF_PACKET protocol family
Aug 19 00:19:06.819283 kernel: Key type dns_resolver registered
Aug 19 00:19:06.819290 kernel: registered taskstats version 1
Aug 19 00:19:06.819297 kernel: Loading compiled-in X.509 certificates
Aug 19 00:19:06.819305 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8'
Aug 19 00:19:06.819312 kernel: Demotion targets for Node 0: null
Aug 19 00:19:06.819320 kernel: Key type .fscrypt registered
Aug 19 00:19:06.819327 kernel: Key type fscrypt-provisioning registered
Aug 19 00:19:06.819334 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 00:19:06.819342 kernel: ima: Allocated hash algorithm: sha1
Aug 19 00:19:06.819354 kernel: ima: No architecture policies found
Aug 19 00:19:06.819361 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 19 00:19:06.819369 kernel: clk: Disabling unused clocks
Aug 19 00:19:06.819377 kernel: PM: genpd: Disabling unused power domains
Aug 19 00:19:06.819384 kernel: Warning: unable to open an initial console.
Aug 19 00:19:06.819391 kernel: Freeing unused kernel memory: 38912K
Aug 19 00:19:06.819398 kernel: Run /init as init process
Aug 19 00:19:06.819411 kernel:   with arguments:
Aug 19 00:19:06.819420 kernel:     /init
Aug 19 00:19:06.819428 kernel:   with environment:
Aug 19 00:19:06.819435 kernel:     HOME=/
Aug 19 00:19:06.819442 kernel:     TERM=linux
Aug 19 00:19:06.819453 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 00:19:06.819461 systemd[1]: Successfully made /usr/ read-only.
Aug 19 00:19:06.819472 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:19:06.819480 systemd[1]: Detected virtualization kvm.
Aug 19 00:19:06.819488 systemd[1]: Detected architecture arm64.
Aug 19 00:19:06.819495 systemd[1]: Running in initrd.
Aug 19 00:19:06.819503 systemd[1]: No hostname configured, using default hostname.
Aug 19 00:19:06.819512 systemd[1]: Hostname set to .
Aug 19 00:19:06.819519 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:19:06.819527 systemd[1]: Queued start job for default target initrd.target.
Aug 19 00:19:06.819535 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:19:06.819543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:19:06.819551 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 00:19:06.819559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:19:06.819567 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 00:19:06.819577 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 00:19:06.819585 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 00:19:06.819593 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 00:19:06.819601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:19:06.819608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:19:06.819616 systemd[1]: Reached target paths.target - Path Units.
Aug 19 00:19:06.819624 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:19:06.819632 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:19:06.819640 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 00:19:06.819648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 00:19:06.819656 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 00:19:06.819663 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 00:19:06.819671 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 00:19:06.819679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:19:06.819686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:19:06.819696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:19:06.819704 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 00:19:06.819711 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 00:19:06.819719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:19:06.819727 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 00:19:06.819735 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 00:19:06.819743 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 00:19:06.819751 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:19:06.819758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:19:06.819768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:19:06.819798 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 00:19:06.819807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:19:06.819815 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 00:19:06.819826 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:19:06.819856 systemd-journald[244]: Collecting audit messages is disabled.
Aug 19 00:19:06.819876 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 00:19:06.819884 systemd-journald[244]: Journal started
Aug 19 00:19:06.819904 systemd-journald[244]: Runtime Journal (/run/log/journal/20920ef7ec324a2a98f6860ae0aaf17f) is 6M, max 48.5M, 42.4M free.
Aug 19 00:19:06.825894 kernel: Bridge firewalling registered
Aug 19 00:19:06.807034 systemd-modules-load[245]: Inserted module 'overlay'
Aug 19 00:19:06.821520 systemd-modules-load[245]: Inserted module 'br_netfilter'
Aug 19 00:19:06.830759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:19:06.830789 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:19:06.832124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:19:06.833958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:19:06.837850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 00:19:06.839631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:19:06.841682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:19:06.855063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:19:06.862533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:19:06.863256 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 00:19:06.866256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:19:06.870157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:19:06.872737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:19:06.875522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 19 00:19:06.878161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:19:06.911132 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:19:06.928732 systemd-resolved[290]: Positive Trust Anchors:
Aug 19 00:19:06.928752 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:19:06.928862 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:19:06.933985 systemd-resolved[290]: Defaulting to hostname 'linux'.
Aug 19 00:19:06.934950 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:19:06.938100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:19:06.994809 kernel: SCSI subsystem initialized
Aug 19 00:19:07.000794 kernel: Loading iSCSI transport class v2.0-870.
Aug 19 00:19:07.014800 kernel: iscsi: registered transport (tcp)
Aug 19 00:19:07.027962 kernel: iscsi: registered transport (qla4xxx)
Aug 19 00:19:07.028009 kernel: QLogic iSCSI HBA Driver
Aug 19 00:19:07.044929 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:19:07.065974 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:19:07.068384 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:19:07.114144 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:19:07.117527 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 00:19:07.192688 kernel: raid6: neonx8 gen() 15713 MB/s Aug 19 00:19:07.210661 kernel: raid6: neonx4 gen() 15331 MB/s Aug 19 00:19:07.225905 kernel: raid6: neonx2 gen() 13133 MB/s Aug 19 00:19:07.243245 kernel: raid6: neonx1 gen() 10447 MB/s Aug 19 00:19:07.259829 kernel: raid6: int64x8 gen() 6851 MB/s Aug 19 00:19:07.276828 kernel: raid6: int64x4 gen() 7283 MB/s Aug 19 00:19:07.293801 kernel: raid6: int64x2 gen() 6054 MB/s Aug 19 00:19:07.310951 kernel: raid6: int64x1 gen() 4996 MB/s Aug 19 00:19:07.310972 kernel: raid6: using algorithm neonx8 gen() 15713 MB/s Aug 19 00:19:07.329022 kernel: raid6: .... xor() 11964 MB/s, rmw enabled Aug 19 00:19:07.329070 kernel: raid6: using neon recovery algorithm Aug 19 00:19:07.334798 kernel: xor: measuring software checksum speed Aug 19 00:19:07.334828 kernel: 8regs : 21601 MB/sec Aug 19 00:19:07.334838 kernel: 32regs : 18722 MB/sec Aug 19 00:19:07.335902 kernel: arm64_neon : 28147 MB/sec Aug 19 00:19:07.335918 kernel: xor: using function: arm64_neon (28147 MB/sec) Aug 19 00:19:07.394806 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 00:19:07.401209 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:19:07.405610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:19:07.433916 systemd-udevd[500]: Using default interface naming scheme 'v255'. Aug 19 00:19:07.438064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:19:07.440189 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 00:19:07.465531 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Aug 19 00:19:07.494484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:19:07.497957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:19:07.548963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 19 00:19:07.555736 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 00:19:07.613625 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 19 00:19:07.613888 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 00:19:07.629125 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 00:19:07.629187 kernel: GPT:9289727 != 19775487 Aug 19 00:19:07.629198 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 00:19:07.629207 kernel: GPT:9289727 != 19775487 Aug 19 00:19:07.630143 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 00:19:07.630176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:19:07.632942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:19:07.633061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:19:07.637149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:19:07.639289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:19:07.663538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 19 00:19:07.671806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:19:07.679693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 00:19:07.681297 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 00:19:07.698995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 00:19:07.705311 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 19 00:19:07.706606 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Aug 19 00:19:07.709018 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:19:07.712063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:19:07.714194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:19:07.716964 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 00:19:07.718832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 00:19:07.742337 disk-uuid[591]: Primary Header is updated. Aug 19 00:19:07.742337 disk-uuid[591]: Secondary Entries is updated. Aug 19 00:19:07.742337 disk-uuid[591]: Secondary Header is updated. Aug 19 00:19:07.745098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:19:07.751373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:19:08.757090 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:19:08.757138 disk-uuid[597]: The operation has completed successfully. Aug 19 00:19:08.786631 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 00:19:08.787857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 00:19:08.813247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 00:19:08.826884 sh[612]: Success Aug 19 00:19:08.843976 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 00:19:08.844027 kernel: device-mapper: uevent: version 1.0.3 Aug 19 00:19:08.845636 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 00:19:08.857816 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 19 00:19:08.881731 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 00:19:08.884403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 19 00:19:08.906229 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 00:19:08.912799 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (624) Aug 19 00:19:08.915017 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625 Aug 19 00:19:08.915040 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:19:08.915050 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 00:19:08.919852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 00:19:08.921088 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:19:08.922546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 00:19:08.923303 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 00:19:08.924996 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 00:19:08.948808 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (656) Aug 19 00:19:08.951090 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:19:08.951124 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:19:08.951140 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:19:08.958803 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:19:08.959872 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 00:19:08.962807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 00:19:09.034303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 19 00:19:09.037916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:19:09.077470 systemd-networkd[797]: lo: Link UP Aug 19 00:19:09.077480 systemd-networkd[797]: lo: Gained carrier Aug 19 00:19:09.078160 systemd-networkd[797]: Enumeration completed Aug 19 00:19:09.078502 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:19:09.080192 systemd[1]: Reached target network.target - Network. Aug 19 00:19:09.081296 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:19:09.081300 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:19:09.081804 systemd-networkd[797]: eth0: Link UP Aug 19 00:19:09.082132 systemd-networkd[797]: eth0: Gained carrier Aug 19 00:19:09.082142 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 19 00:19:09.094817 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 00:19:09.118022 ignition[700]: Ignition 2.21.0 Aug 19 00:19:09.118037 ignition[700]: Stage: fetch-offline Aug 19 00:19:09.118067 ignition[700]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:09.118074 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:09.118267 ignition[700]: parsed url from cmdline: "" Aug 19 00:19:09.118271 ignition[700]: no config URL provided Aug 19 00:19:09.118275 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 00:19:09.118282 ignition[700]: no config at "/usr/lib/ignition/user.ign" Aug 19 00:19:09.118302 ignition[700]: op(1): [started] loading QEMU firmware config module Aug 19 00:19:09.118306 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 00:19:09.130819 ignition[700]: op(1): [finished] loading QEMU firmware config module Aug 19 00:19:09.169539 ignition[700]: parsing config with SHA512: 5a4d35343eb00f382ac1f406de70593bf1aaee118abc9c442f928c4ec1bb6c941a6f369555f10012374dcae5fc639ce3b88c6152951e3b17e389482c7add33a5 Aug 19 00:19:09.173549 unknown[700]: fetched base config from "system" Aug 19 00:19:09.173560 unknown[700]: fetched user config from "qemu" Aug 19 00:19:09.173908 ignition[700]: fetch-offline: fetch-offline passed Aug 19 00:19:09.173958 ignition[700]: Ignition finished successfully Aug 19 00:19:09.177786 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:19:09.179063 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 00:19:09.179813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 19 00:19:09.216502 ignition[810]: Ignition 2.21.0 Aug 19 00:19:09.216518 ignition[810]: Stage: kargs Aug 19 00:19:09.216645 ignition[810]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:09.216654 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:09.218750 ignition[810]: kargs: kargs passed Aug 19 00:19:09.218824 ignition[810]: Ignition finished successfully Aug 19 00:19:09.222354 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 00:19:09.225665 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 00:19:09.252977 ignition[819]: Ignition 2.21.0 Aug 19 00:19:09.252994 ignition[819]: Stage: disks Aug 19 00:19:09.253135 ignition[819]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:09.253144 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:09.254801 ignition[819]: disks: disks passed Aug 19 00:19:09.257331 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 00:19:09.254860 ignition[819]: Ignition finished successfully Aug 19 00:19:09.258795 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 00:19:09.261887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 00:19:09.263658 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:19:09.265582 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:19:09.267522 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:19:09.270092 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 00:19:09.294712 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 00:19:09.299388 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 00:19:09.301682 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 19 00:19:09.367791 kernel: EXT4-fs (vda9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none. Aug 19 00:19:09.368387 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 00:19:09.369744 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 00:19:09.376442 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:19:09.378862 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 00:19:09.379870 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 00:19:09.379915 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 00:19:09.379942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:19:09.398615 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 00:19:09.400865 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 00:19:09.409888 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (837) Aug 19 00:19:09.412354 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:19:09.412385 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:19:09.412400 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:19:09.416458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:19:09.462637 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 00:19:09.467461 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory Aug 19 00:19:09.471566 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 00:19:09.475465 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 00:19:09.549109 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 00:19:09.551296 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 00:19:09.553009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 00:19:09.570794 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:19:09.591928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 00:19:09.603813 ignition[950]: INFO : Ignition 2.21.0 Aug 19 00:19:09.603813 ignition[950]: INFO : Stage: mount Aug 19 00:19:09.603813 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:09.603813 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:09.607626 ignition[950]: INFO : mount: mount passed Aug 19 00:19:09.607626 ignition[950]: INFO : Ignition finished successfully Aug 19 00:19:09.607007 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 00:19:09.609767 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 00:19:09.911959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 00:19:09.913495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 19 00:19:09.940171 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (964) Aug 19 00:19:09.940224 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:19:09.940235 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:19:09.941789 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:19:09.944511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 00:19:09.974154 ignition[981]: INFO : Ignition 2.21.0 Aug 19 00:19:09.974154 ignition[981]: INFO : Stage: files Aug 19 00:19:09.975809 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:09.975809 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:09.975809 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Aug 19 00:19:09.979244 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 00:19:09.979244 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 00:19:09.979244 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 00:19:09.979244 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 00:19:09.984503 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 00:19:09.984503 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 19 00:19:09.984503 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Aug 19 00:19:09.979414 unknown[981]: wrote ssh authorized keys file for user: core Aug 19 00:19:10.739454 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Aug 19 00:19:10.871929 systemd-networkd[797]: eth0: Gained IPv6LL Aug 19 00:19:11.624840 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:19:11.627092 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 
19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:19:11.641596 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Aug 19 00:19:12.228280 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 19 00:19:12.876042 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:19:12.876042 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 19 00:19:12.879753 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:19:12.882224 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:19:12.882224 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 19 00:19:12.882224 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 19 00:19:12.887485 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 00:19:12.887485 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 00:19:12.887485 ignition[981]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Aug 19 00:19:12.887485 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 00:19:12.913337 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 00:19:12.918394 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 00:19:12.920858 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 00:19:12.920858 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 19 00:19:12.920858 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 00:19:12.920858 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:19:12.920858 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:19:12.920858 ignition[981]: INFO : files: files passed Aug 19 00:19:12.920858 ignition[981]: INFO : Ignition finished successfully Aug 19 00:19:12.921376 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 00:19:12.927089 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 00:19:12.930351 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 00:19:12.944398 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 00:19:12.944695 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 00:19:12.944838 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 19 00:19:12.950945 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:19:12.950945 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:19:12.950801 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:19:12.957532 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:19:12.954191 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 00:19:12.957238 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 00:19:13.012934 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 00:19:13.013083 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 00:19:13.015532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 00:19:13.017493 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 00:19:13.019454 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 00:19:13.020317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 00:19:13.043729 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:19:13.046384 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 00:19:13.071283 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:19:13.072699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:19:13.074958 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 00:19:13.076806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 19 00:19:13.076947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:19:13.079582 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 00:19:13.081780 systemd[1]: Stopped target basic.target - Basic System. Aug 19 00:19:13.083624 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 00:19:13.085446 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:19:13.087490 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 00:19:13.089562 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:19:13.091578 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 00:19:13.093578 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:19:13.095589 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 00:19:13.098986 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 00:19:13.100716 systemd[1]: Stopped target swap.target - Swaps. Aug 19 00:19:13.102301 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 00:19:13.102452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:19:13.104886 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:19:13.106969 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:19:13.109159 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 00:19:13.109868 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:19:13.111407 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 00:19:13.111533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 00:19:13.114641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 19 00:19:13.114848 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:19:13.117013 systemd[1]: Stopped target paths.target - Path Units. Aug 19 00:19:13.118686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 00:19:13.125806 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:19:13.127131 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 00:19:13.129472 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 00:19:13.131171 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 00:19:13.131309 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:19:13.133037 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 00:19:13.133163 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:19:13.134888 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 00:19:13.135065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:19:13.136971 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 00:19:13.137129 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 00:19:13.139579 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 00:19:13.142083 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 00:19:13.143358 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 00:19:13.143547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:19:13.145530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 00:19:13.145681 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:19:13.152981 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 19 00:19:13.154981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 00:19:13.163424 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 00:19:13.167858 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 00:19:13.167964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 00:19:13.170550 ignition[1037]: INFO : Ignition 2.21.0 Aug 19 00:19:13.170550 ignition[1037]: INFO : Stage: umount Aug 19 00:19:13.170550 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:19:13.170550 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:19:13.170550 ignition[1037]: INFO : umount: umount passed Aug 19 00:19:13.170550 ignition[1037]: INFO : Ignition finished successfully Aug 19 00:19:13.171074 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 00:19:13.172245 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 00:19:13.174723 systemd[1]: Stopped target network.target - Network. Aug 19 00:19:13.175756 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 00:19:13.175858 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 00:19:13.178062 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 00:19:13.178212 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 00:19:13.179895 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 00:19:13.179952 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 00:19:13.181779 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 00:19:13.181829 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 00:19:13.183876 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 00:19:13.183932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Aug 19 00:19:13.186025 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 00:19:13.187852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 00:19:13.196673 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 00:19:13.196828 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 00:19:13.199692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 00:19:13.199995 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 00:19:13.201252 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 00:19:13.201303 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:19:13.203995 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 00:19:13.204887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 00:19:13.204954 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:19:13.206956 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:19:13.209530 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 00:19:13.210740 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 00:19:13.215571 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 00:19:13.216180 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:19:13.216262 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:19:13.219029 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 00:19:13.219073 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 00:19:13.221136 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Aug 19 00:19:13.221182 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:19:13.224798 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 19 00:19:13.224857 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 19 00:19:13.229451 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 19 00:19:13.229591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:19:13.231337 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 19 00:19:13.231372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:19:13.232949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 19 00:19:13.232976 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:19:13.234729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 19 00:19:13.234789 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 00:19:13.237464 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 19 00:19:13.237511 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:19:13.240431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 19 00:19:13.240484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:19:13.244708 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 19 00:19:13.246021 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 19 00:19:13.246079 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:19:13.249097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 19 00:19:13.249140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:19:13.251328 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 19 00:19:13.251369 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:19:13.253469 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 19 00:19:13.253513 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:19:13.255541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 00:19:13.255583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:19:13.259137 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 19 00:19:13.259182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 19 00:19:13.259211 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 19 00:19:13.259241 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 19 00:19:13.259552 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 19 00:19:13.259646 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 19 00:19:13.261199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 19 00:19:13.261285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 19 00:19:13.263544 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 19 00:19:13.265763 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 19 00:19:13.284845 systemd[1]: Switching root.
Aug 19 00:19:13.323955 systemd-journald[244]: Journal stopped
Aug 19 00:19:14.189225 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Aug 19 00:19:14.189275 kernel: SELinux: policy capability network_peer_controls=1
Aug 19 00:19:14.189287 kernel: SELinux: policy capability open_perms=1
Aug 19 00:19:14.189297 kernel: SELinux: policy capability extended_socket_class=1
Aug 19 00:19:14.189309 kernel: SELinux: policy capability always_check_network=0
Aug 19 00:19:14.189325 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 19 00:19:14.189340 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 19 00:19:14.189352 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 19 00:19:14.189362 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 19 00:19:14.189371 kernel: SELinux: policy capability userspace_initial_context=0
Aug 19 00:19:14.189392 kernel: audit: type=1403 audit(1755562753.567:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 19 00:19:14.189411 systemd[1]: Successfully loaded SELinux policy in 62.909ms.
Aug 19 00:19:14.189428 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.388ms.
Aug 19 00:19:14.189439 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:19:14.189451 systemd[1]: Detected virtualization kvm.
Aug 19 00:19:14.189462 systemd[1]: Detected architecture arm64.
Aug 19 00:19:14.189473 systemd[1]: Detected first boot.
Aug 19 00:19:14.189483 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:19:14.189495 zram_generator::config[1083]: No configuration found.
Aug 19 00:19:14.189505 kernel: NET: Registered PF_VSOCK protocol family
Aug 19 00:19:14.189515 systemd[1]: Populated /etc with preset unit settings.
Aug 19 00:19:14.189526 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 19 00:19:14.189537 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 19 00:19:14.189550 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 19 00:19:14.189560 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 19 00:19:14.189570 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 19 00:19:14.189579 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 19 00:19:14.189589 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 19 00:19:14.189599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 19 00:19:14.189609 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 19 00:19:14.189619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 19 00:19:14.189631 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 19 00:19:14.189641 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 19 00:19:14.189652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:19:14.189662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:19:14.189672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 19 00:19:14.189682 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 19 00:19:14.189693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 19 00:19:14.189703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:19:14.189714 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 19 00:19:14.189726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:19:14.189736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:19:14.189745 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 19 00:19:14.189755 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 19 00:19:14.189766 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 19 00:19:14.189787 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 19 00:19:14.189801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:19:14.189812 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 00:19:14.189825 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:19:14.189835 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:19:14.189845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 19 00:19:14.189855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 19 00:19:14.189865 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 19 00:19:14.189875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:19:14.189886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:19:14.189897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:19:14.189907 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 19 00:19:14.189918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 19 00:19:14.189929 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 19 00:19:14.189939 systemd[1]: Mounting media.mount - External Media Directory...
Aug 19 00:19:14.189949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 19 00:19:14.189964 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 19 00:19:14.189974 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 19 00:19:14.189984 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 19 00:19:14.189994 systemd[1]: Reached target machines.target - Containers.
Aug 19 00:19:14.190004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 19 00:19:14.190018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:19:14.190028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:19:14.190038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 19 00:19:14.190047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:19:14.190058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:19:14.190068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:19:14.190078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 19 00:19:14.190089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:19:14.190101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 19 00:19:14.190111 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 19 00:19:14.190121 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 19 00:19:14.190131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 19 00:19:14.190140 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 19 00:19:14.190151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:19:14.190161 kernel: fuse: init (API version 7.41)
Aug 19 00:19:14.190170 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:19:14.190179 kernel: loop: module loaded
Aug 19 00:19:14.190191 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:19:14.190201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:19:14.190211 kernel: ACPI: bus type drm_connector registered
Aug 19 00:19:14.190220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 19 00:19:14.190230 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 19 00:19:14.190239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 00:19:14.190274 systemd-journald[1155]: Collecting audit messages is disabled.
Aug 19 00:19:14.190297 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 19 00:19:14.190310 systemd-journald[1155]: Journal started
Aug 19 00:19:14.190330 systemd-journald[1155]: Runtime Journal (/run/log/journal/20920ef7ec324a2a98f6860ae0aaf17f) is 6M, max 48.5M, 42.4M free.
Aug 19 00:19:13.953808 systemd[1]: Queued start job for default target multi-user.target.
Aug 19 00:19:13.977228 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 19 00:19:13.977672 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 19 00:19:14.191577 systemd[1]: Stopped verity-setup.service.
Aug 19 00:19:14.196891 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:19:14.197633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 19 00:19:14.198872 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 19 00:19:14.200109 systemd[1]: Mounted media.mount - External Media Directory.
Aug 19 00:19:14.201227 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 19 00:19:14.202486 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 19 00:19:14.204129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 19 00:19:14.205491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 19 00:19:14.207212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:19:14.208875 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 19 00:19:14.209071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 19 00:19:14.210654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:19:14.210905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:19:14.212484 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:19:14.212668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:19:14.214068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:19:14.214254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:19:14.215925 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 19 00:19:14.216100 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 19 00:19:14.219126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:19:14.219310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:19:14.221015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:19:14.222489 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:19:14.224128 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 19 00:19:14.227800 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 19 00:19:14.244174 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:19:14.247317 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 19 00:19:14.249731 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 19 00:19:14.251028 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 19 00:19:14.251064 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 00:19:14.253365 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 19 00:19:14.277065 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 19 00:19:14.278407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:19:14.280918 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 19 00:19:14.283139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 19 00:19:14.284453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:19:14.287988 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 19 00:19:14.289413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:19:14.292203 systemd-journald[1155]: Time spent on flushing to /var/log/journal/20920ef7ec324a2a98f6860ae0aaf17f is 20.228ms for 884 entries.
Aug 19 00:19:14.292203 systemd-journald[1155]: System Journal (/var/log/journal/20920ef7ec324a2a98f6860ae0aaf17f) is 8M, max 195.6M, 187.6M free.
Aug 19 00:19:14.329403 systemd-journald[1155]: Received client request to flush runtime journal.
Aug 19 00:19:14.292487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:19:14.296901 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 19 00:19:14.301042 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:19:14.304548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:19:14.306281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 19 00:19:14.307617 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 19 00:19:14.322803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 19 00:19:14.326350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:19:14.331806 kernel: loop0: detected capacity change from 0 to 100608
Aug 19 00:19:14.333900 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 19 00:19:14.338018 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 19 00:19:14.340502 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 19 00:19:14.340796 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 19 00:19:14.343992 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 19 00:19:14.346825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:19:14.350848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 19 00:19:14.351021 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 19 00:19:14.371810 kernel: loop1: detected capacity change from 0 to 119320
Aug 19 00:19:14.381633 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 19 00:19:14.384464 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 19 00:19:14.388590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:19:14.402866 kernel: loop2: detected capacity change from 0 to 207008
Aug 19 00:19:14.409954 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 19 00:19:14.409973 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 19 00:19:14.414318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:19:14.445853 kernel: loop3: detected capacity change from 0 to 100608
Aug 19 00:19:14.452815 kernel: loop4: detected capacity change from 0 to 119320
Aug 19 00:19:14.459838 kernel: loop5: detected capacity change from 0 to 207008
Aug 19 00:19:14.465807 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 19 00:19:14.466223 (sd-merge)[1227]: Merged extensions into '/usr'.
Aug 19 00:19:14.469792 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 19 00:19:14.469940 systemd[1]: Reloading...
Aug 19 00:19:14.542799 zram_generator::config[1256]: No configuration found.
Aug 19 00:19:14.600168 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 19 00:19:14.693471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 19 00:19:14.693955 systemd[1]: Reloading finished in 223 ms.
Aug 19 00:19:14.710081 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 19 00:19:14.711600 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 19 00:19:14.726318 systemd[1]: Starting ensure-sysext.service...
Aug 19 00:19:14.728409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:19:14.740663 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)...
Aug 19 00:19:14.740678 systemd[1]: Reloading...
Aug 19 00:19:14.744565 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 19 00:19:14.744891 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 19 00:19:14.745178 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 19 00:19:14.745400 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 19 00:19:14.746052 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 19 00:19:14.746271 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Aug 19 00:19:14.746322 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Aug 19 00:19:14.749994 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:19:14.750006 systemd-tmpfiles[1288]: Skipping /boot
Aug 19 00:19:14.755702 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:19:14.755717 systemd-tmpfiles[1288]: Skipping /boot
Aug 19 00:19:14.786800 zram_generator::config[1315]: No configuration found.
Aug 19 00:19:14.919525 systemd[1]: Reloading finished in 178 ms.
Aug 19 00:19:14.940406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 19 00:19:14.942046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:19:14.959959 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:19:14.962785 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 19 00:19:14.966946 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 19 00:19:14.976348 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:19:14.979907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:19:14.983031 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 19 00:19:15.001809 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 19 00:19:15.008561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:19:15.014573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:19:15.019027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:19:15.021426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:19:15.022804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:19:15.022957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:19:15.034018 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 19 00:19:15.036961 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 19 00:19:15.039501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:19:15.040897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:19:15.041444 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Aug 19 00:19:15.042861 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 19 00:19:15.044614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:19:15.044762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:19:15.046326 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:19:15.046476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:19:15.049943 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 19 00:19:15.058453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:19:15.059878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:19:15.063072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:19:15.067995 augenrules[1388]: No rules
Aug 19 00:19:15.074991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:19:15.077980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:19:15.078129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:19:15.078229 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 19 00:19:15.079263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:19:15.081505 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:19:15.081852 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:19:15.083711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 19 00:19:15.085429 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 19 00:19:15.088912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:19:15.092767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:19:15.095026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:19:15.095203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:19:15.098466 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:19:15.098662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:19:15.115137 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:19:15.117985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:19:15.119153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:19:15.123338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:19:15.127231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:19:15.135034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:19:15.136940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:19:15.136987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:19:15.143044 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 00:19:15.144202 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 19 00:19:15.149860 systemd[1]: Finished ensure-sysext.service.
Aug 19 00:19:15.151084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:19:15.151274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:19:15.152897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:19:15.153086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:19:15.154654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:19:15.156820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:19:15.159732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:19:15.159930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:19:15.163255 augenrules[1432]: /sbin/augenrules: No change
Aug 19 00:19:15.172091 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 19 00:19:15.172577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:19:15.172637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:19:15.175982 augenrules[1462]: No rules
Aug 19 00:19:15.180275 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 19 00:19:15.181903 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:19:15.182107 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:19:15.224991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 00:19:15.230525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 19 00:19:15.261830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 19 00:19:15.294452 systemd-networkd[1437]: lo: Link UP
Aug 19 00:19:15.294464 systemd-networkd[1437]: lo: Gained carrier
Aug 19 00:19:15.295486 systemd-networkd[1437]: Enumeration completed
Aug 19 00:19:15.295596 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 00:19:15.298849 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 19 00:19:15.301025 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 19 00:19:15.303946 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:19:15.304129 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 00:19:15.305668 systemd-networkd[1437]: eth0: Link UP
Aug 19 00:19:15.305986 systemd-networkd[1437]: eth0: Gained carrier
Aug 19 00:19:15.306007 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:19:15.312813 systemd-resolved[1355]: Positive Trust Anchors: Aug 19 00:19:15.313100 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:19:15.313137 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:19:15.320336 systemd-resolved[1355]: Defaulting to hostname 'linux'. Aug 19 00:19:15.322010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:19:15.323786 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 00:19:15.326065 systemd[1]: Reached target network.target - Network. Aug 19 00:19:15.327033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:19:15.328335 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:19:15.329717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 00:19:15.331949 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 00:19:15.333312 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 00:19:15.334693 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 00:19:15.334726 systemd[1]: Reached target paths.target - Path Units. 
Aug 19 00:19:15.335740 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 00:19:15.338012 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 00:19:15.339301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 00:19:15.340657 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:19:15.342524 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 00:19:15.345166 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 00:19:15.349759 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 00:19:15.351282 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 00:19:15.353845 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 00:19:15.355860 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 00:19:15.356410 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection. Aug 19 00:19:15.357134 systemd-timesyncd[1467]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 00:19:15.357197 systemd-timesyncd[1467]: Initial clock synchronization to Tue 2025-08-19 00:19:15.349196 UTC. Aug 19 00:19:15.364425 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 00:19:15.366174 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 00:19:15.370814 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 00:19:15.372322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 00:19:15.381476 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:19:15.382575 systemd[1]: Reached target basic.target - Basic System. 
Aug 19 00:19:15.383641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:19:15.383672 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:19:15.384844 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 00:19:15.386958 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 00:19:15.395092 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 00:19:15.397398 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 00:19:15.399621 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 00:19:15.400804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 00:19:15.402034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 00:19:15.405060 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 00:19:15.407526 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 00:19:15.411926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 00:19:15.413857 jq[1501]: false Aug 19 00:19:15.418863 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 00:19:15.420992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:19:15.423231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 00:19:15.423397 extend-filesystems[1502]: Found /dev/vda6 Aug 19 00:19:15.423684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Aug 19 00:19:15.424269 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 00:19:15.430538 extend-filesystems[1502]: Found /dev/vda9 Aug 19 00:19:15.432201 extend-filesystems[1502]: Checking size of /dev/vda9 Aug 19 00:19:15.435012 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 00:19:15.442200 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 00:19:15.444105 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 00:19:15.444851 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 00:19:15.445162 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 00:19:15.445354 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 00:19:15.447715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 00:19:15.447925 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 19 00:19:15.448718 jq[1524]: true Aug 19 00:19:15.455792 extend-filesystems[1502]: Resized partition /dev/vda9 Aug 19 00:19:15.457977 extend-filesystems[1533]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 00:19:15.466171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 00:19:15.482263 jq[1532]: true Aug 19 00:19:15.479199 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 00:19:15.485420 update_engine[1517]: I20250819 00:19:15.483839 1517 main.cc:92] Flatcar Update Engine starting Aug 19 00:19:15.500697 tar[1528]: linux-arm64/LICENSE Aug 19 00:19:15.500697 tar[1528]: linux-arm64/helm Aug 19 00:19:15.507146 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 00:19:15.517117 dbus-daemon[1499]: [system] SELinux support is enabled Aug 19 00:19:15.520125 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 00:19:15.520125 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 00:19:15.520125 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 00:19:15.517400 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 00:19:15.541132 update_engine[1517]: I20250819 00:19:15.530750 1517 update_check_scheduler.cc:74] Next update check in 2m59s Aug 19 00:19:15.541157 extend-filesystems[1502]: Resized filesystem in /dev/vda9 Aug 19 00:19:15.525159 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 00:19:15.525845 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 00:19:15.549023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:19:15.550655 systemd[1]: Started update-engine.service - Update Engine. 
Aug 19 00:19:15.555419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 00:19:15.555458 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 00:19:15.556818 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 00:19:15.556839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 00:19:15.563986 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 00:19:15.570595 bash[1564]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:19:15.572290 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 00:19:15.574390 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 19 00:19:15.577359 systemd-logind[1511]: Watching system buttons on /dev/input/event0 (Power Button) Aug 19 00:19:15.578488 systemd-logind[1511]: New seat seat0. Aug 19 00:19:15.580761 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 19 00:19:15.644262 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 00:19:15.700915 containerd[1542]: time="2025-08-19T00:19:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 00:19:15.701522 containerd[1542]: time="2025-08-19T00:19:15.701476960Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711376040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.84µs" Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711414960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711433560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711582600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711599120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711622920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711671160Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:19:15.711798 containerd[1542]: time="2025-08-19T00:19:15.711683080Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712356 containerd[1542]: time="2025-08-19T00:19:15.712327360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712428 containerd[1542]: time="2025-08-19T00:19:15.712413880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712477 containerd[1542]: time="2025-08-19T00:19:15.712464640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712534 containerd[1542]: time="2025-08-19T00:19:15.712521080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712683 containerd[1542]: time="2025-08-19T00:19:15.712663120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 00:19:15.712956 containerd[1542]: time="2025-08-19T00:19:15.712931720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:19:15.713040 containerd[1542]: time="2025-08-19T00:19:15.713023880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:19:15.713103 containerd[1542]: time="2025-08-19T00:19:15.713089360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 00:19:15.713187 containerd[1542]: time="2025-08-19T00:19:15.713172040Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 00:19:15.713633 containerd[1542]: time="2025-08-19T00:19:15.713499440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 00:19:15.713633 containerd[1542]: time="2025-08-19T00:19:15.713580880Z" level=info msg="metadata content store policy set" policy=shared Aug 19 00:19:15.717791 containerd[1542]: time="2025-08-19T00:19:15.717752280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 00:19:15.717932 containerd[1542]: time="2025-08-19T00:19:15.717912960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 00:19:15.718008 containerd[1542]: time="2025-08-19T00:19:15.717993920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 00:19:15.718063 containerd[1542]: time="2025-08-19T00:19:15.718049800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 00:19:15.718117 containerd[1542]: time="2025-08-19T00:19:15.718103960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 00:19:15.718166 containerd[1542]: time="2025-08-19T00:19:15.718153600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 00:19:15.718228 containerd[1542]: time="2025-08-19T00:19:15.718215160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 00:19:15.718280 containerd[1542]: time="2025-08-19T00:19:15.718268120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 00:19:15.718330 containerd[1542]: time="2025-08-19T00:19:15.718318040Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 00:19:15.718390 containerd[1542]: time="2025-08-19T00:19:15.718375520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 00:19:15.718440 containerd[1542]: time="2025-08-19T00:19:15.718427480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 00:19:15.718500 containerd[1542]: time="2025-08-19T00:19:15.718487600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 00:19:15.718663 containerd[1542]: time="2025-08-19T00:19:15.718641320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 00:19:15.718729 containerd[1542]: time="2025-08-19T00:19:15.718716320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 00:19:15.718806 containerd[1542]: time="2025-08-19T00:19:15.718790480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 00:19:15.718864 containerd[1542]: time="2025-08-19T00:19:15.718851480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 00:19:15.718922 containerd[1542]: time="2025-08-19T00:19:15.718909360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 00:19:15.718975 containerd[1542]: time="2025-08-19T00:19:15.718961720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 00:19:15.719047 containerd[1542]: time="2025-08-19T00:19:15.719031520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 00:19:15.719106 containerd[1542]: time="2025-08-19T00:19:15.719092320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 
00:19:15.719158 containerd[1542]: time="2025-08-19T00:19:15.719145880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 00:19:15.719208 containerd[1542]: time="2025-08-19T00:19:15.719194960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 00:19:15.719259 containerd[1542]: time="2025-08-19T00:19:15.719247000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 00:19:15.719588 containerd[1542]: time="2025-08-19T00:19:15.719568520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 00:19:15.719645 containerd[1542]: time="2025-08-19T00:19:15.719633560Z" level=info msg="Start snapshots syncer" Aug 19 00:19:15.719725 containerd[1542]: time="2025-08-19T00:19:15.719709200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 00:19:15.720030 containerd[1542]: time="2025-08-19T00:19:15.719991720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 00:19:15.720195 containerd[1542]: time="2025-08-19T00:19:15.720176560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 00:19:15.720986 containerd[1542]: time="2025-08-19T00:19:15.720953440Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 00:19:15.721192 containerd[1542]: time="2025-08-19T00:19:15.721169640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 00:19:15.721265 containerd[1542]: time="2025-08-19T00:19:15.721251080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 00:19:15.721321 containerd[1542]: time="2025-08-19T00:19:15.721307240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 00:19:15.721385 containerd[1542]: time="2025-08-19T00:19:15.721359720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 00:19:15.721438 containerd[1542]: time="2025-08-19T00:19:15.721425880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 00:19:15.721508 containerd[1542]: time="2025-08-19T00:19:15.721493400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 00:19:15.721561 containerd[1542]: time="2025-08-19T00:19:15.721548600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 00:19:15.721628 containerd[1542]: time="2025-08-19T00:19:15.721614080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 00:19:15.721678 containerd[1542]: time="2025-08-19T00:19:15.721665920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 00:19:15.721729 containerd[1542]: time="2025-08-19T00:19:15.721716560Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722299840Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722339960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722349000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722358640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722375880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722389320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 00:19:15.722430 containerd[1542]: time="2025-08-19T00:19:15.722400600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 00:19:15.722607 containerd[1542]: time="2025-08-19T00:19:15.722521840Z" level=info msg="runtime interface created" Aug 19 00:19:15.722607 containerd[1542]: time="2025-08-19T00:19:15.722527440Z" level=info msg="created NRI interface" Aug 19 00:19:15.722607 containerd[1542]: time="2025-08-19T00:19:15.722536040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 00:19:15.722607 containerd[1542]: time="2025-08-19T00:19:15.722548800Z" level=info msg="Connect containerd service" Aug 19 00:19:15.722607 containerd[1542]: time="2025-08-19T00:19:15.722587320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 00:19:15.723574 
containerd[1542]: time="2025-08-19T00:19:15.723532200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:19:15.824789 containerd[1542]: time="2025-08-19T00:19:15.824664880Z" level=info msg="Start subscribing containerd event" Aug 19 00:19:15.824920 containerd[1542]: time="2025-08-19T00:19:15.824756320Z" level=info msg="Start recovering state" Aug 19 00:19:15.825088 containerd[1542]: time="2025-08-19T00:19:15.825021920Z" level=info msg="Start event monitor" Aug 19 00:19:15.825157 containerd[1542]: time="2025-08-19T00:19:15.825144320Z" level=info msg="Start cni network conf syncer for default" Aug 19 00:19:15.825237 containerd[1542]: time="2025-08-19T00:19:15.825225720Z" level=info msg="Start streaming server" Aug 19 00:19:15.825303 containerd[1542]: time="2025-08-19T00:19:15.825274160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 00:19:15.825352 containerd[1542]: time="2025-08-19T00:19:15.825337000Z" level=info msg="runtime interface starting up..." Aug 19 00:19:15.825525 containerd[1542]: time="2025-08-19T00:19:15.825395240Z" level=info msg="starting plugins..." Aug 19 00:19:15.825525 containerd[1542]: time="2025-08-19T00:19:15.825421720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 00:19:15.825781 containerd[1542]: time="2025-08-19T00:19:15.825682080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 00:19:15.825931 containerd[1542]: time="2025-08-19T00:19:15.825872520Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 00:19:15.826804 containerd[1542]: time="2025-08-19T00:19:15.826037320Z" level=info msg="containerd successfully booted in 0.125780s" Aug 19 00:19:15.826167 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 19 00:19:15.838046 tar[1528]: linux-arm64/README.md Aug 19 00:19:15.856379 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 00:19:16.040660 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 00:19:16.060887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 00:19:16.064845 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 00:19:16.092192 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 00:19:16.092447 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 00:19:16.095299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 00:19:16.138415 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 00:19:16.141293 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 00:19:16.143538 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 19 00:19:16.144950 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 00:19:17.143967 systemd-networkd[1437]: eth0: Gained IPv6LL Aug 19 00:19:17.146387 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 00:19:17.148229 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 00:19:17.150760 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 00:19:17.153276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:19:17.173075 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 00:19:17.196983 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 00:19:17.198929 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 00:19:17.199136 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Aug 19 00:19:17.201361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 00:19:17.748163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:19:17.749881 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 00:19:17.752086 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:19:17.752812 systemd[1]: Startup finished in 2.068s (kernel) + 6.925s (initrd) + 4.253s (userspace) = 13.248s. Aug 19 00:19:18.242018 kubelet[1638]: E0819 00:19:18.241900 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:19:18.244293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:19:18.244447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:19:18.244808 systemd[1]: kubelet.service: Consumed 874ms CPU time, 256.7M memory peak. Aug 19 00:19:19.905151 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 00:19:19.906332 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:49924.service - OpenSSH per-connection server daemon (10.0.0.1:49924). Aug 19 00:19:19.991597 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 49924 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:19:19.993509 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:19:19.999817 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 00:19:20.000754 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Aug 19 00:19:20.006113 systemd-logind[1511]: New session 1 of user core. Aug 19 00:19:20.021920 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 00:19:20.024660 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 00:19:20.046815 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 00:19:20.048986 systemd-logind[1511]: New session c1 of user core. Aug 19 00:19:20.176947 systemd[1656]: Queued start job for default target default.target. Aug 19 00:19:20.183790 systemd[1656]: Created slice app.slice - User Application Slice. Aug 19 00:19:20.183822 systemd[1656]: Reached target paths.target - Paths. Aug 19 00:19:20.183860 systemd[1656]: Reached target timers.target - Timers. Aug 19 00:19:20.185142 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 00:19:20.195087 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 00:19:20.195158 systemd[1656]: Reached target sockets.target - Sockets. Aug 19 00:19:20.195200 systemd[1656]: Reached target basic.target - Basic System. Aug 19 00:19:20.195233 systemd[1656]: Reached target default.target - Main User Target. Aug 19 00:19:20.195258 systemd[1656]: Startup finished in 140ms. Aug 19 00:19:20.195452 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 00:19:20.196855 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 00:19:20.256590 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:49938.service - OpenSSH per-connection server daemon (10.0.0.1:49938). Aug 19 00:19:20.333011 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 49938 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:19:20.334469 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:19:20.339392 systemd-logind[1511]: New session 2 of user core. 
Aug 19 00:19:20.345991 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 19 00:19:20.398094 sshd[1670]: Connection closed by 10.0.0.1 port 49938
Aug 19 00:19:20.398620 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Aug 19 00:19:20.412178 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:49938.service: Deactivated successfully.
Aug 19 00:19:20.415373 systemd[1]: session-2.scope: Deactivated successfully.
Aug 19 00:19:20.416449 systemd-logind[1511]: Session 2 logged out. Waiting for processes to exit.
Aug 19 00:19:20.419271 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:49950.service - OpenSSH per-connection server daemon (10.0.0.1:49950).
Aug 19 00:19:20.420387 systemd-logind[1511]: Removed session 2.
Aug 19 00:19:20.493491 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 49950 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:19:20.494990 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:19:20.499942 systemd-logind[1511]: New session 3 of user core.
Aug 19 00:19:20.513982 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 19 00:19:20.566941 sshd[1679]: Connection closed by 10.0.0.1 port 49950
Aug 19 00:19:20.567316 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Aug 19 00:19:20.576021 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:49950.service: Deactivated successfully.
Aug 19 00:19:20.578684 systemd[1]: session-3.scope: Deactivated successfully.
Aug 19 00:19:20.579935 systemd-logind[1511]: Session 3 logged out. Waiting for processes to exit.
Aug 19 00:19:20.582724 systemd-logind[1511]: Removed session 3.
Aug 19 00:19:20.584394 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:49964.service - OpenSSH per-connection server daemon (10.0.0.1:49964).
Aug 19 00:19:20.654657 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 49964 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:19:20.656276 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:19:20.664499 systemd-logind[1511]: New session 4 of user core.
Aug 19 00:19:20.686018 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 19 00:19:20.739627 sshd[1688]: Connection closed by 10.0.0.1 port 49964
Aug 19 00:19:20.741881 sshd-session[1685]: pam_unix(sshd:session): session closed for user core
Aug 19 00:19:20.752438 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:49964.service: Deactivated successfully.
Aug 19 00:19:20.755514 systemd[1]: session-4.scope: Deactivated successfully.
Aug 19 00:19:20.756470 systemd-logind[1511]: Session 4 logged out. Waiting for processes to exit.
Aug 19 00:19:20.759576 systemd-logind[1511]: Removed session 4.
Aug 19 00:19:20.761546 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:49968.service - OpenSSH per-connection server daemon (10.0.0.1:49968).
Aug 19 00:19:20.830852 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 49968 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:19:20.831427 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:19:20.836754 systemd-logind[1511]: New session 5 of user core.
Aug 19 00:19:20.847000 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 19 00:19:21.002140 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 19 00:19:21.002419 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:19:21.014968 sudo[1698]: pam_unix(sudo:session): session closed for user root
Aug 19 00:19:21.016818 sshd[1697]: Connection closed by 10.0.0.1 port 49968
Aug 19 00:19:21.017127 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Aug 19 00:19:21.031243 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:49968.service: Deactivated successfully.
Aug 19 00:19:21.033587 systemd[1]: session-5.scope: Deactivated successfully.
Aug 19 00:19:21.034679 systemd-logind[1511]: Session 5 logged out. Waiting for processes to exit.
Aug 19 00:19:21.038393 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978).
Aug 19 00:19:21.039434 systemd-logind[1511]: Removed session 5.
Aug 19 00:19:21.106452 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:19:21.107898 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:19:21.113647 systemd-logind[1511]: New session 6 of user core.
Aug 19 00:19:21.128975 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 19 00:19:21.182308 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 19 00:19:21.183312 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:19:21.189041 sudo[1709]: pam_unix(sudo:session): session closed for user root
Aug 19 00:19:21.194674 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 19 00:19:21.194998 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:19:21.206351 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:19:21.252016 augenrules[1731]: No rules
Aug 19 00:19:21.253890 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:19:21.254171 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:19:21.256031 sudo[1708]: pam_unix(sudo:session): session closed for user root
Aug 19 00:19:21.257497 sshd[1707]: Connection closed by 10.0.0.1 port 49978
Aug 19 00:19:21.258235 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Aug 19 00:19:21.271404 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:49978.service: Deactivated successfully.
Aug 19 00:19:21.274607 systemd[1]: session-6.scope: Deactivated successfully.
Aug 19 00:19:21.275647 systemd-logind[1511]: Session 6 logged out. Waiting for processes to exit.
Aug 19 00:19:21.278936 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:49990.service - OpenSSH per-connection server daemon (10.0.0.1:49990).
Aug 19 00:19:21.279608 systemd-logind[1511]: Removed session 6.
Aug 19 00:19:21.337309 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 49990 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:19:21.338441 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:19:21.345838 systemd-logind[1511]: New session 7 of user core.
Aug 19 00:19:21.356969 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 19 00:19:21.411905 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 19 00:19:21.412180 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:19:21.848499 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 19 00:19:21.871179 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 19 00:19:22.202697 dockerd[1765]: time="2025-08-19T00:19:22.202246527Z" level=info msg="Starting up"
Aug 19 00:19:22.203409 dockerd[1765]: time="2025-08-19T00:19:22.203372352Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 19 00:19:22.215350 dockerd[1765]: time="2025-08-19T00:19:22.215299336Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Aug 19 00:19:22.365579 dockerd[1765]: time="2025-08-19T00:19:22.365516734Z" level=info msg="Loading containers: start."
Aug 19 00:19:22.374569 kernel: Initializing XFRM netlink socket
Aug 19 00:19:22.580032 systemd-networkd[1437]: docker0: Link UP
Aug 19 00:19:22.587060 dockerd[1765]: time="2025-08-19T00:19:22.587016168Z" level=info msg="Loading containers: done."
Aug 19 00:19:22.600468 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3254241485-merged.mount: Deactivated successfully.
Aug 19 00:19:22.602493 dockerd[1765]: time="2025-08-19T00:19:22.602147803Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 19 00:19:22.602493 dockerd[1765]: time="2025-08-19T00:19:22.602248130Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Aug 19 00:19:22.602493 dockerd[1765]: time="2025-08-19T00:19:22.602329023Z" level=info msg="Initializing buildkit"
Aug 19 00:19:22.626122 dockerd[1765]: time="2025-08-19T00:19:22.626081544Z" level=info msg="Completed buildkit initialization"
Aug 19 00:19:22.631425 dockerd[1765]: time="2025-08-19T00:19:22.631382777Z" level=info msg="Daemon has completed initialization"
Aug 19 00:19:22.631610 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 19 00:19:22.631724 dockerd[1765]: time="2025-08-19T00:19:22.631592907Z" level=info msg="API listen on /run/docker.sock"
Aug 19 00:19:23.156443 containerd[1542]: time="2025-08-19T00:19:23.156388267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Aug 19 00:19:23.802285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070026452.mount: Deactivated successfully.
Aug 19 00:19:24.737878 containerd[1542]: time="2025-08-19T00:19:24.737817416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:24.739121 containerd[1542]: time="2025-08-19T00:19:24.738651771Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Aug 19 00:19:24.740137 containerd[1542]: time="2025-08-19T00:19:24.739999976Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:24.742546 containerd[1542]: time="2025-08-19T00:19:24.742501483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:24.743718 containerd[1542]: time="2025-08-19T00:19:24.743562532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.587129679s"
Aug 19 00:19:24.743718 containerd[1542]: time="2025-08-19T00:19:24.743600601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Aug 19 00:19:24.744323 containerd[1542]: time="2025-08-19T00:19:24.744226138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Aug 19 00:19:25.836278 containerd[1542]: time="2025-08-19T00:19:25.836225000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:25.836827 containerd[1542]: time="2025-08-19T00:19:25.836784527Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Aug 19 00:19:25.837747 containerd[1542]: time="2025-08-19T00:19:25.837723269Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:25.840021 containerd[1542]: time="2025-08-19T00:19:25.839989166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:25.841791 containerd[1542]: time="2025-08-19T00:19:25.841718571Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.097464201s"
Aug 19 00:19:25.841791 containerd[1542]: time="2025-08-19T00:19:25.841753682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Aug 19 00:19:25.842454 containerd[1542]: time="2025-08-19T00:19:25.842196200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Aug 19 00:19:27.054454 containerd[1542]: time="2025-08-19T00:19:27.053816413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:27.055736 containerd[1542]: time="2025-08-19T00:19:27.055709116Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Aug 19 00:19:27.056423 containerd[1542]: time="2025-08-19T00:19:27.056399670Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:27.059255 containerd[1542]: time="2025-08-19T00:19:27.059230946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:27.060195 containerd[1542]: time="2025-08-19T00:19:27.060164201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.21793501s"
Aug 19 00:19:27.060346 containerd[1542]: time="2025-08-19T00:19:27.060285331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Aug 19 00:19:27.060977 containerd[1542]: time="2025-08-19T00:19:27.060937934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Aug 19 00:19:27.997665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598138089.mount: Deactivated successfully.
Aug 19 00:19:28.359352 containerd[1542]: time="2025-08-19T00:19:28.358952654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:28.360094 containerd[1542]: time="2025-08-19T00:19:28.359905878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Aug 19 00:19:28.360946 containerd[1542]: time="2025-08-19T00:19:28.360903892Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:28.362597 containerd[1542]: time="2025-08-19T00:19:28.362567875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:28.363302 containerd[1542]: time="2025-08-19T00:19:28.363267317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.302279235s"
Aug 19 00:19:28.363361 containerd[1542]: time="2025-08-19T00:19:28.363302429Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Aug 19 00:19:28.363925 containerd[1542]: time="2025-08-19T00:19:28.363902413Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 19 00:19:28.494798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 19 00:19:28.498990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:19:28.636459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:19:28.640620 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 00:19:28.685816 kubelet[2062]: E0819 00:19:28.685683 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 00:19:28.689356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 00:19:28.689487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 00:19:28.689841 systemd[1]: kubelet.service: Consumed 157ms CPU time, 107.4M memory peak.
Aug 19 00:19:29.085211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711731381.mount: Deactivated successfully.
Aug 19 00:19:29.830377 containerd[1542]: time="2025-08-19T00:19:29.830308258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:29.831261 containerd[1542]: time="2025-08-19T00:19:29.831222264Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Aug 19 00:19:29.831956 containerd[1542]: time="2025-08-19T00:19:29.831922235Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:29.835035 containerd[1542]: time="2025-08-19T00:19:29.834994423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:29.836458 containerd[1542]: time="2025-08-19T00:19:29.836422440Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.472486674s"
Aug 19 00:19:29.836494 containerd[1542]: time="2025-08-19T00:19:29.836462032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 19 00:19:29.837019 containerd[1542]: time="2025-08-19T00:19:29.836995199Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 19 00:19:30.284103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204252115.mount: Deactivated successfully.
Aug 19 00:19:30.288537 containerd[1542]: time="2025-08-19T00:19:30.288483768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:19:30.289002 containerd[1542]: time="2025-08-19T00:19:30.288956554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Aug 19 00:19:30.289887 containerd[1542]: time="2025-08-19T00:19:30.289851216Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:19:30.292433 containerd[1542]: time="2025-08-19T00:19:30.292385752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:19:30.293814 containerd[1542]: time="2025-08-19T00:19:30.293750961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 456.72121ms"
Aug 19 00:19:30.293857 containerd[1542]: time="2025-08-19T00:19:30.293811629Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 19 00:19:30.294510 containerd[1542]: time="2025-08-19T00:19:30.294483655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 19 00:19:30.823118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107213340.mount: Deactivated successfully.
Aug 19 00:19:32.461665 containerd[1542]: time="2025-08-19T00:19:32.461593709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:32.463429 containerd[1542]: time="2025-08-19T00:19:32.463347482Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Aug 19 00:19:32.464052 containerd[1542]: time="2025-08-19T00:19:32.464015925Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:32.467637 containerd[1542]: time="2025-08-19T00:19:32.467592580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:19:32.468904 containerd[1542]: time="2025-08-19T00:19:32.468864598Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.174345069s"
Aug 19 00:19:32.468954 containerd[1542]: time="2025-08-19T00:19:32.468907910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 19 00:19:37.696514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:19:37.696733 systemd[1]: kubelet.service: Consumed 157ms CPU time, 107.4M memory peak.
Aug 19 00:19:37.698936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:19:37.726379 systemd[1]: Reload requested from client PID 2211 ('systemctl') (unit session-7.scope)...
Aug 19 00:19:37.726394 systemd[1]: Reloading...
Aug 19 00:19:37.807816 zram_generator::config[2260]: No configuration found.
Aug 19 00:19:37.990599 systemd[1]: Reloading finished in 263 ms.
Aug 19 00:19:38.060320 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 19 00:19:38.060400 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 19 00:19:38.061827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:19:38.061878 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak.
Aug 19 00:19:38.063399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:19:38.183934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:19:38.190012 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 19 00:19:38.228764 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:19:38.228764 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 19 00:19:38.228764 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:19:38.229168 kubelet[2299]: I0819 00:19:38.228890 2299 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 19 00:19:38.732012 kubelet[2299]: I0819 00:19:38.731907 2299 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 19 00:19:38.732012 kubelet[2299]: I0819 00:19:38.731941 2299 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 19 00:19:38.732294 kubelet[2299]: I0819 00:19:38.732216 2299 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 19 00:19:38.771616 kubelet[2299]: E0819 00:19:38.771491 2299 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:19:38.772954 kubelet[2299]: I0819 00:19:38.772914 2299 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 19 00:19:38.783913 kubelet[2299]: I0819 00:19:38.783883 2299 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 19 00:19:38.791547 kubelet[2299]: I0819 00:19:38.791495 2299 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 19 00:19:38.792588 kubelet[2299]: I0819 00:19:38.792269 2299 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 19 00:19:38.793079 kubelet[2299]: I0819 00:19:38.792663 2299 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 19 00:19:38.793183 kubelet[2299]: I0819 00:19:38.793139 2299 topology_manager.go:138] "Creating topology manager with none policy"
Aug 19 00:19:38.793183 kubelet[2299]: I0819 00:19:38.793150 2299 container_manager_linux.go:304] "Creating device plugin manager"
Aug 19 00:19:38.793387 kubelet[2299]: I0819 00:19:38.793355 2299 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:19:38.795990 kubelet[2299]: I0819 00:19:38.795944 2299 kubelet.go:446] "Attempting to sync node with API server"
Aug 19 00:19:38.795990 kubelet[2299]: I0819 00:19:38.795972 2299 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 19 00:19:38.796084 kubelet[2299]: I0819 00:19:38.796000 2299 kubelet.go:352] "Adding apiserver pod source"
Aug 19 00:19:38.796084 kubelet[2299]: I0819 00:19:38.796012 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 19 00:19:38.799044 kubelet[2299]: W0819 00:19:38.798949 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Aug 19 00:19:38.799044 kubelet[2299]: E0819 00:19:38.799010 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:19:38.799351 kubelet[2299]: W0819 00:19:38.799269 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Aug 19 00:19:38.799351 kubelet[2299]: E0819 00:19:38.799308 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:19:38.802332 kubelet[2299]: I0819 00:19:38.802290 2299 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Aug 19 00:19:38.803002 kubelet[2299]: I0819 00:19:38.802964 2299 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 19 00:19:38.803131 kubelet[2299]: W0819 00:19:38.803087 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 19 00:19:38.804358 kubelet[2299]: I0819 00:19:38.803928 2299 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 19 00:19:38.804358 kubelet[2299]: I0819 00:19:38.803962 2299 server.go:1287] "Started kubelet"
Aug 19 00:19:38.805309 kubelet[2299]: I0819 00:19:38.805261 2299 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 19 00:19:38.806208 kubelet[2299]: I0819 00:19:38.806183 2299 server.go:479] "Adding debug handlers to kubelet server"
Aug 19 00:19:38.806948 kubelet[2299]: I0819 00:19:38.806892 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 19 00:19:38.807190 kubelet[2299]: I0819 00:19:38.807169 2299 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 19 00:19:38.809292 kubelet[2299]: I0819 00:19:38.808720 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 19 00:19:38.809292 kubelet[2299]: I0819 00:19:38.809162 2299 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 19 00:19:38.809292 kubelet[2299]: I0819 00:19:38.809266 2299 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 19 00:19:38.809468 kubelet[2299]: I0819 00:19:38.809320 2299 reconciler.go:26] "Reconciler: start to sync state"
Aug 19 00:19:38.810037 kubelet[2299]: W0819 00:19:38.809683 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Aug 19 00:19:38.810037 kubelet[2299]: E0819 00:19:38.809729 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:19:38.810037 kubelet[2299]: I0819 00:19:38.809753 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 19 00:19:38.810531 kubelet[2299]: E0819 00:19:38.810049 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 19 00:19:38.810531 kubelet[2299]: E0819 00:19:38.810158 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms"
Aug 19 00:19:38.811978 kubelet[2299]: I0819 00:19:38.811249 2299 factory.go:221] Registration of the systemd container factory successfully
Aug 19 00:19:38.811978 kubelet[2299]: I0819 00:19:38.811361 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 19 00:19:38.812697 kubelet[2299]: E0819 00:19:38.812419 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d030bc381b959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 00:19:38.803943769 +0000 UTC m=+0.610936611,LastTimestamp:2025-08-19 00:19:38.803943769 +0000 UTC m=+0.610936611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 19 00:19:38.814094 kubelet[2299]: E0819 00:19:38.814065 2299 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 19 00:19:38.814717 kubelet[2299]: I0819 00:19:38.814685 2299 factory.go:221] Registration of the containerd container factory successfully
Aug 19 00:19:38.828815 kubelet[2299]: I0819 00:19:38.827832 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 19 00:19:38.829309 kubelet[2299]: I0819 00:19:38.828935 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 19 00:19:38.829309 kubelet[2299]: I0819 00:19:38.828966 2299 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 19 00:19:38.829309 kubelet[2299]: I0819 00:19:38.828986 2299 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 19 00:19:38.829309 kubelet[2299]: I0819 00:19:38.828994 2299 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 00:19:38.829309 kubelet[2299]: E0819 00:19:38.829046 2299 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:19:38.833795 kubelet[2299]: W0819 00:19:38.833732 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Aug 19 00:19:38.834505 kubelet[2299]: E0819 00:19:38.834480 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:19:38.837768 kubelet[2299]: I0819 00:19:38.837647 2299 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 00:19:38.837851 kubelet[2299]: I0819 00:19:38.837767 2299 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 00:19:38.837906 kubelet[2299]: I0819 00:19:38.837887 2299 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:19:38.910255 kubelet[2299]: I0819 00:19:38.910226 2299 policy_none.go:49] "None policy: Start" Aug 19 00:19:38.910255 kubelet[2299]: I0819 00:19:38.910261 2299 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 00:19:38.910351 kubelet[2299]: I0819 00:19:38.910278 2299 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:19:38.910930 kubelet[2299]: E0819 00:19:38.910891 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:19:38.917516 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 19 00:19:38.930186 kubelet[2299]: E0819 00:19:38.930075 2299 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 00:19:38.932383 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 00:19:38.937148 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 19 00:19:38.964031 kubelet[2299]: I0819 00:19:38.963995 2299 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:19:38.964382 kubelet[2299]: I0819 00:19:38.964363 2299 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:19:38.964506 kubelet[2299]: I0819 00:19:38.964465 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:19:38.964833 kubelet[2299]: I0819 00:19:38.964801 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:19:38.966427 kubelet[2299]: E0819 00:19:38.966405 2299 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 00:19:38.966538 kubelet[2299]: E0819 00:19:38.966448 2299 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 00:19:39.011264 kubelet[2299]: E0819 00:19:39.011124 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" Aug 19 00:19:39.066509 kubelet[2299]: I0819 00:19:39.066465 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:19:39.070755 kubelet[2299]: E0819 00:19:39.070726 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Aug 19 00:19:39.138061 systemd[1]: Created slice kubepods-burstable-podd68fd132a8fd0a59e6da3f244dad2953.slice - libcontainer container kubepods-burstable-podd68fd132a8fd0a59e6da3f244dad2953.slice. Aug 19 00:19:39.149798 kubelet[2299]: E0819 00:19:39.149597 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.152318 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Aug 19 00:19:39.171952 kubelet[2299]: E0819 00:19:39.171918 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.175719 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Aug 19 00:19:39.177337 kubelet[2299]: E0819 00:19:39.177298 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.211806 kubelet[2299]: I0819 00:19:39.211750 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:39.211806 kubelet[2299]: I0819 00:19:39.211805 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:39.211910 kubelet[2299]: I0819 00:19:39.211827 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:39.211910 kubelet[2299]: I0819 00:19:39.211844 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:39.211910 kubelet[2299]: I0819 00:19:39.211866 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:39.211910 kubelet[2299]: I0819 00:19:39.211881 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:39.211910 kubelet[2299]: I0819 00:19:39.211895 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:39.212016 kubelet[2299]: I0819 00:19:39.211910 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:39.212016 kubelet[2299]: I0819 00:19:39.211925 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:39.272297 kubelet[2299]: I0819 00:19:39.272193 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:19:39.272605 kubelet[2299]: E0819 
00:19:39.272519 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Aug 19 00:19:39.411937 kubelet[2299]: E0819 00:19:39.411875 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" Aug 19 00:19:39.450248 kubelet[2299]: E0819 00:19:39.450208 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.450954 containerd[1542]: time="2025-08-19T00:19:39.450905751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d68fd132a8fd0a59e6da3f244dad2953,Namespace:kube-system,Attempt:0,}" Aug 19 00:19:39.473119 kubelet[2299]: E0819 00:19:39.473066 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.473550 containerd[1542]: time="2025-08-19T00:19:39.473500797Z" level=info msg="connecting to shim b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d" address="unix:///run/containerd/s/508cfac0659f359289c533c0a8f2e6b03012045462c140a4fa7b011d313b5e44" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:19:39.474387 containerd[1542]: time="2025-08-19T00:19:39.474350902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Aug 19 00:19:39.480552 kubelet[2299]: E0819 00:19:39.480521 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.481541 containerd[1542]: time="2025-08-19T00:19:39.481484268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Aug 19 00:19:39.502006 systemd[1]: Started cri-containerd-b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d.scope - libcontainer container b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d. Aug 19 00:19:39.516796 containerd[1542]: time="2025-08-19T00:19:39.516245321Z" level=info msg="connecting to shim 9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38" address="unix:///run/containerd/s/523b20e97a5925b3593056dbb1011fd7a5a18a88aa9240733399502683e3fb27" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:19:39.532567 containerd[1542]: time="2025-08-19T00:19:39.532510351Z" level=info msg="connecting to shim 9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022" address="unix:///run/containerd/s/cd702f9e6471b54d91f9d3318dba8ee946c2272837983dcce4d2c910edab1e8b" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:19:39.547959 systemd[1]: Started cri-containerd-9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38.scope - libcontainer container 9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38. 
Aug 19 00:19:39.550333 containerd[1542]: time="2025-08-19T00:19:39.550184984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d68fd132a8fd0a59e6da3f244dad2953,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d\"" Aug 19 00:19:39.551592 kubelet[2299]: E0819 00:19:39.551545 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.553694 containerd[1542]: time="2025-08-19T00:19:39.553654678Z" level=info msg="CreateContainer within sandbox \"b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 00:19:39.564098 containerd[1542]: time="2025-08-19T00:19:39.564042202Z" level=info msg="Container d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:19:39.571501 containerd[1542]: time="2025-08-19T00:19:39.571440459Z" level=info msg="CreateContainer within sandbox \"b6ee5fdf775762aa95f253bfd62de9331fc1babd1ea37d7c755c3aee4d5bb42d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f\"" Aug 19 00:19:39.572445 containerd[1542]: time="2025-08-19T00:19:39.572110025Z" level=info msg="StartContainer for \"d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f\"" Aug 19 00:19:39.573290 containerd[1542]: time="2025-08-19T00:19:39.573258377Z" level=info msg="connecting to shim d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f" address="unix:///run/containerd/s/508cfac0659f359289c533c0a8f2e6b03012045462c140a4fa7b011d313b5e44" protocol=ttrpc version=3 Aug 19 00:19:39.575059 systemd[1]: Started cri-containerd-9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022.scope - 
libcontainer container 9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022. Aug 19 00:19:39.597977 systemd[1]: Started cri-containerd-d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f.scope - libcontainer container d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f. Aug 19 00:19:39.612529 containerd[1542]: time="2025-08-19T00:19:39.611437569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38\"" Aug 19 00:19:39.613594 kubelet[2299]: E0819 00:19:39.613543 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.616042 containerd[1542]: time="2025-08-19T00:19:39.615279381Z" level=info msg="CreateContainer within sandbox \"9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 00:19:39.634788 containerd[1542]: time="2025-08-19T00:19:39.634738056Z" level=info msg="Container 93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:19:39.637297 containerd[1542]: time="2025-08-19T00:19:39.637247777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022\"" Aug 19 00:19:39.639111 kubelet[2299]: E0819 00:19:39.638931 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.641126 containerd[1542]: 
time="2025-08-19T00:19:39.641078991Z" level=info msg="CreateContainer within sandbox \"9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 00:19:39.646824 containerd[1542]: time="2025-08-19T00:19:39.646792115Z" level=info msg="StartContainer for \"d0c4dcb1984db62c757eeb7daa61c3486b73989487c826178556b6b3fdc9ac6f\" returns successfully" Aug 19 00:19:39.652414 containerd[1542]: time="2025-08-19T00:19:39.652371214Z" level=info msg="CreateContainer within sandbox \"9cd4226a78674e2a6a1fb07bd0c9b60765ce10cde14a6fdeaa4828a784b80a38\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa\"" Aug 19 00:19:39.653515 containerd[1542]: time="2025-08-19T00:19:39.653487770Z" level=info msg="StartContainer for \"93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa\"" Aug 19 00:19:39.655462 containerd[1542]: time="2025-08-19T00:19:39.655372360Z" level=info msg="connecting to shim 93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa" address="unix:///run/containerd/s/523b20e97a5925b3593056dbb1011fd7a5a18a88aa9240733399502683e3fb27" protocol=ttrpc version=3 Aug 19 00:19:39.661743 containerd[1542]: time="2025-08-19T00:19:39.661710415Z" level=info msg="Container 3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:19:39.672791 containerd[1542]: time="2025-08-19T00:19:39.672699392Z" level=info msg="CreateContainer within sandbox \"9b42296f8c30ffffc08408cbdfec45641c15d7f4400dfa71bb4d463c8d1b1022\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7\"" Aug 19 00:19:39.673189 containerd[1542]: time="2025-08-19T00:19:39.673161341Z" level=info msg="StartContainer for 
\"3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7\"" Aug 19 00:19:39.674920 kubelet[2299]: I0819 00:19:39.674900 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:19:39.675250 kubelet[2299]: E0819 00:19:39.675222 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Aug 19 00:19:39.678253 containerd[1542]: time="2025-08-19T00:19:39.677576410Z" level=info msg="connecting to shim 3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7" address="unix:///run/containerd/s/cd702f9e6471b54d91f9d3318dba8ee946c2272837983dcce4d2c910edab1e8b" protocol=ttrpc version=3 Aug 19 00:19:39.678860 kubelet[2299]: W0819 00:19:39.678803 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Aug 19 00:19:39.678924 kubelet[2299]: E0819 00:19:39.678882 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:19:39.691044 systemd[1]: Started cri-containerd-93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa.scope - libcontainer container 93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa. Aug 19 00:19:39.694799 systemd[1]: Started cri-containerd-3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7.scope - libcontainer container 3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7. 
Aug 19 00:19:39.739768 kubelet[2299]: W0819 00:19:39.737757 2299 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Aug 19 00:19:39.740792 kubelet[2299]: E0819 00:19:39.740363 2299 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:19:39.771995 containerd[1542]: time="2025-08-19T00:19:39.771944229Z" level=info msg="StartContainer for \"93fc1db40aa51e298c5436e8b82fb636c291314b3f1abc4fa5eac6df833742aa\" returns successfully" Aug 19 00:19:39.791110 containerd[1542]: time="2025-08-19T00:19:39.790864244Z" level=info msg="StartContainer for \"3103893b57d566e972cb1af8a6a7211d8b0ed579c4436cf848d000092c1c9cd7\" returns successfully" Aug 19 00:19:39.844057 kubelet[2299]: E0819 00:19:39.844022 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.844173 kubelet[2299]: E0819 00:19:39.844155 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:39.848351 kubelet[2299]: E0819 00:19:39.848296 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.848435 kubelet[2299]: E0819 00:19:39.848419 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Aug 19 00:19:39.857805 kubelet[2299]: E0819 00:19:39.857767 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:39.857917 kubelet[2299]: E0819 00:19:39.857899 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:40.477072 kubelet[2299]: I0819 00:19:40.476930 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:19:40.853214 kubelet[2299]: E0819 00:19:40.853177 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:40.853214 kubelet[2299]: E0819 00:19:40.853177 2299 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:19:40.853341 kubelet[2299]: E0819 00:19:40.853303 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:40.853435 kubelet[2299]: E0819 00:19:40.853343 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:41.732089 kubelet[2299]: E0819 00:19:41.732040 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 00:19:41.810444 kubelet[2299]: I0819 00:19:41.810392 2299 apiserver.go:52] "Watching apiserver" Aug 19 00:19:41.840137 kubelet[2299]: I0819 00:19:41.840078 2299 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 00:19:41.853796 kubelet[2299]: I0819 
00:19:41.853626 2299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:41.861520 kubelet[2299]: E0819 00:19:41.861477 2299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:41.861704 kubelet[2299]: E0819 00:19:41.861641 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:41.909998 kubelet[2299]: I0819 00:19:41.909941 2299 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 00:19:41.911371 kubelet[2299]: I0819 00:19:41.910967 2299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:41.914596 kubelet[2299]: E0819 00:19:41.914529 2299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:41.914596 kubelet[2299]: I0819 00:19:41.914569 2299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:41.917714 kubelet[2299]: E0819 00:19:41.917652 2299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:41.917714 kubelet[2299]: I0819 00:19:41.917697 2299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:41.919997 kubelet[2299]: E0819 00:19:41.919949 2299 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:42.990922 kubelet[2299]: I0819 00:19:42.990859 2299 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:43.010084 kubelet[2299]: E0819 00:19:43.010032 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:43.856711 kubelet[2299]: E0819 00:19:43.856665 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:44.157051 systemd[1]: Reload requested from client PID 2573 ('systemctl') (unit session-7.scope)... Aug 19 00:19:44.157067 systemd[1]: Reloading... Aug 19 00:19:44.237823 zram_generator::config[2616]: No configuration found. Aug 19 00:19:44.492088 systemd[1]: Reloading finished in 334 ms. Aug 19 00:19:44.527067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:19:44.540865 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 00:19:44.541233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:19:44.541893 systemd[1]: kubelet.service: Consumed 1.097s CPU time, 130.3M memory peak. Aug 19 00:19:44.543766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:19:44.691370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:19:44.708179 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:19:44.750672 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:19:44.750672 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 00:19:44.750672 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:19:44.750672 kubelet[2658]: I0819 00:19:44.750439 2658 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:19:44.760150 kubelet[2658]: I0819 00:19:44.759051 2658 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 00:19:44.760285 kubelet[2658]: I0819 00:19:44.760215 2658 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:19:44.761026 kubelet[2658]: I0819 00:19:44.760967 2658 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 00:19:44.765209 kubelet[2658]: I0819 00:19:44.765176 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 00:19:44.768165 kubelet[2658]: I0819 00:19:44.768126 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:19:44.772991 kubelet[2658]: I0819 00:19:44.772946 2658 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:19:44.775478 kubelet[2658]: I0819 00:19:44.775453 2658 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:19:44.775723 kubelet[2658]: I0819 00:19:44.775680 2658 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:19:44.775891 kubelet[2658]: I0819 00:19:44.775714 2658 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:19:44.775969 kubelet[2658]: I0819 00:19:44.775896 2658 topology_manager.go:138] "Creating topology manager with none policy" 
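The container_manager_linux entry above dumps the whole nodeConfig as a single JSON blob, which is hard to eyeball. A small offline sketch for inspecting it; the excerpt below is copied from the log entry, trimmed (by me) to the HardEvictionThresholds field:

```python
import json

# HardEvictionThresholds excerpt from the nodeConfig blob logged above
# (GracePeriod/MinReclaim fields dropped for readability).
node_config = json.loads("""
{"HardEvictionThresholds":[
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}
]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # Each threshold is either an absolute quantity (e.g. 100Mi) or a percentage.
    shown = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"{t['Signal']:>20}  {t['Operator']}  {shown}")
```

These are the kubelet's built-in hard-eviction defaults (memory.available < 100Mi, nodefs.available < 10%, etc.), which is consistent with no evictionHard override being set in this node's config.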
Aug 19 00:19:44.775969 kubelet[2658]: I0819 00:19:44.775906 2658 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 00:19:44.775969 kubelet[2658]: I0819 00:19:44.775947 2658 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:19:44.776532 kubelet[2658]: I0819 00:19:44.776075 2658 kubelet.go:446] "Attempting to sync node with API server" Aug 19 00:19:44.776532 kubelet[2658]: I0819 00:19:44.776088 2658 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:19:44.776532 kubelet[2658]: I0819 00:19:44.776111 2658 kubelet.go:352] "Adding apiserver pod source" Aug 19 00:19:44.776532 kubelet[2658]: I0819 00:19:44.776121 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:19:44.777264 kubelet[2658]: I0819 00:19:44.777246 2658 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:19:44.780806 kubelet[2658]: I0819 00:19:44.780394 2658 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:19:44.780968 kubelet[2658]: I0819 00:19:44.780944 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 00:19:44.781001 kubelet[2658]: I0819 00:19:44.780979 2658 server.go:1287] "Started kubelet" Aug 19 00:19:44.783746 kubelet[2658]: I0819 00:19:44.783435 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:19:44.788351 kubelet[2658]: E0819 00:19:44.788309 2658 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:19:44.790842 kubelet[2658]: I0819 00:19:44.790136 2658 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:19:44.790842 kubelet[2658]: I0819 00:19:44.790640 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:19:44.790955 kubelet[2658]: I0819 00:19:44.790930 2658 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:19:44.791223 kubelet[2658]: I0819 00:19:44.791193 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:19:44.793035 kubelet[2658]: I0819 00:19:44.792999 2658 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 00:19:44.793266 kubelet[2658]: E0819 00:19:44.793235 2658 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:19:44.794555 kubelet[2658]: I0819 00:19:44.794529 2658 server.go:479] "Adding debug handlers to kubelet server" Aug 19 00:19:44.799752 kubelet[2658]: I0819 00:19:44.799715 2658 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:19:44.799886 kubelet[2658]: I0819 00:19:44.799857 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:19:44.800980 kubelet[2658]: I0819 00:19:44.800963 2658 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:19:44.801118 kubelet[2658]: I0819 00:19:44.801104 2658 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 00:19:44.801526 kubelet[2658]: I0819 00:19:44.801501 2658 factory.go:221] Registration of the containerd container factory 
successfully Aug 19 00:19:44.803389 kubelet[2658]: I0819 00:19:44.803354 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:19:44.804383 kubelet[2658]: I0819 00:19:44.804354 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 00:19:44.804414 kubelet[2658]: I0819 00:19:44.804386 2658 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 00:19:44.804414 kubelet[2658]: I0819 00:19:44.804409 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 19 00:19:44.804459 kubelet[2658]: I0819 00:19:44.804417 2658 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 00:19:44.804484 kubelet[2658]: E0819 00:19:44.804461 2658 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:19:44.844786 kubelet[2658]: I0819 00:19:44.844738 2658 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 00:19:44.844786 kubelet[2658]: I0819 00:19:44.844760 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 00:19:44.844946 kubelet[2658]: I0819 00:19:44.844804 2658 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:19:44.845001 kubelet[2658]: I0819 00:19:44.844981 2658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 00:19:44.845028 kubelet[2658]: I0819 00:19:44.845000 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 00:19:44.845050 kubelet[2658]: I0819 00:19:44.845030 2658 policy_none.go:49] "None policy: Start" Aug 19 00:19:44.845050 kubelet[2658]: I0819 00:19:44.845039 2658 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 00:19:44.845050 kubelet[2658]: I0819 00:19:44.845050 2658 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:19:44.845155 kubelet[2658]: I0819 00:19:44.845144 2658 
state_mem.go:75] "Updated machine memory state" Aug 19 00:19:44.849077 kubelet[2658]: I0819 00:19:44.849049 2658 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:19:44.849241 kubelet[2658]: I0819 00:19:44.849223 2658 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:19:44.849289 kubelet[2658]: I0819 00:19:44.849242 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:19:44.849471 kubelet[2658]: I0819 00:19:44.849450 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:19:44.850248 kubelet[2658]: E0819 00:19:44.850222 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 00:19:44.905832 kubelet[2658]: I0819 00:19:44.905721 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:44.905983 kubelet[2658]: I0819 00:19:44.905869 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:44.906066 kubelet[2658]: I0819 00:19:44.906033 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:44.912452 kubelet[2658]: E0819 00:19:44.912406 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:44.953437 kubelet[2658]: I0819 00:19:44.953400 2658 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:19:44.961481 kubelet[2658]: I0819 00:19:44.961380 2658 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 19 00:19:44.961481 kubelet[2658]: I0819 00:19:44.961534 2658 kubelet_node_status.go:78] "Successfully registered 
node" node="localhost" Aug 19 00:19:45.102966 kubelet[2658]: I0819 00:19:45.102915 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:45.102966 kubelet[2658]: I0819 00:19:45.102958 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:45.102966 kubelet[2658]: I0819 00:19:45.102978 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 00:19:45.103146 kubelet[2658]: I0819 00:19:45.102996 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:45.103146 kubelet[2658]: I0819 00:19:45.103013 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" 
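Entries like the repeated mirror-pod failures earlier in this excerpt follow klog's structured format: severity+timestamp prefix, PID, source location, a quoted message, then key="value" pairs. That makes them easy to triage mechanically. A minimal sketch, using one line copied from the log (the escape-aware regex is my own, not part of klog):

```python
import re

# One klog-formatted entry copied from the kubelet log above.
entry = (r'E0819 00:19:41.919949 2299 kubelet.go:3196] '
         r'"Failed creating a mirror pod" '
         r'err="pods \"kube-apiserver-localhost\" is forbidden: '
         r'no PriorityClass with name system-node-critical was found" '
         r'pod="kube-system/kube-apiserver-localhost"')

# key="value" pairs, allowing backslash-escaped quotes inside the value.
kv = re.compile(r'(\w+)="((?:\\.|[^"\\])*)"')
fields = dict(kv.findall(entry))

print(fields["pod"])   # which static pod the failure concerns
print(fields["err"])   # why the mirror pod could not be created
```

The err value itself explains why these failures stop on their own later in the log: the built-in PriorityClass system-node-critical only exists once the API server has finished bootstrapping it, so early mirror-pod creation attempts for the static control-plane pods are rejected and then retried.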
Aug 19 00:19:45.103146 kubelet[2658]: I0819 00:19:45.103034 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:45.103146 kubelet[2658]: I0819 00:19:45.103052 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:45.103146 kubelet[2658]: I0819 00:19:45.103070 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:19:45.103248 kubelet[2658]: I0819 00:19:45.103089 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d68fd132a8fd0a59e6da3f244dad2953-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d68fd132a8fd0a59e6da3f244dad2953\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:19:45.212856 kubelet[2658]: E0819 00:19:45.212797 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.213166 kubelet[2658]: E0819 00:19:45.213034 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.218750 kubelet[2658]: E0819 00:19:45.218672 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.777593 kubelet[2658]: I0819 00:19:45.777334 2658 apiserver.go:52] "Watching apiserver" Aug 19 00:19:45.801398 kubelet[2658]: I0819 00:19:45.801337 2658 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 00:19:45.829793 kubelet[2658]: E0819 00:19:45.829715 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.830266 kubelet[2658]: E0819 00:19:45.830247 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.830563 kubelet[2658]: E0819 00:19:45.830547 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:45.853979 kubelet[2658]: I0819 00:19:45.852741 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.85272698 podStartE2EDuration="1.85272698s" podCreationTimestamp="2025-08-19 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:19:45.852384406 +0000 UTC m=+1.140454277" watchObservedRunningTime="2025-08-19 00:19:45.85272698 +0000 UTC m=+1.140796851" Aug 19 00:19:45.876070 kubelet[2658]: I0819 00:19:45.875955 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.875936427 podStartE2EDuration="3.875936427s" podCreationTimestamp="2025-08-19 00:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:19:45.861854011 +0000 UTC m=+1.149923882" watchObservedRunningTime="2025-08-19 00:19:45.875936427 +0000 UTC m=+1.164006338" Aug 19 00:19:45.876415 kubelet[2658]: I0819 00:19:45.876192 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.876185848 podStartE2EDuration="1.876185848s" podCreationTimestamp="2025-08-19 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:19:45.875713044 +0000 UTC m=+1.163782915" watchObservedRunningTime="2025-08-19 00:19:45.876185848 +0000 UTC m=+1.164255679" Aug 19 00:19:46.831791 kubelet[2658]: E0819 00:19:46.831163 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:46.831791 kubelet[2658]: E0819 00:19:46.831246 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:47.832720 kubelet[2658]: E0819 00:19:47.832616 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:47.832720 kubelet[2658]: E0819 00:19:47.832687 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:48.800466 kubelet[2658]: E0819 
00:19:48.800427 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:48.967236 kubelet[2658]: I0819 00:19:48.967200 2658 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 00:19:48.967966 containerd[1542]: time="2025-08-19T00:19:48.967810670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 00:19:48.968210 kubelet[2658]: I0819 00:19:48.968033 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 00:19:50.083742 systemd[1]: Created slice kubepods-besteffort-pod7467194e_0f11_4e35_953b_7ae8ba549ab3.slice - libcontainer container kubepods-besteffort-pod7467194e_0f11_4e35_953b_7ae8ba549ab3.slice. Aug 19 00:19:50.125372 systemd[1]: Created slice kubepods-besteffort-pod462bd68d_09f6_4a10_92c9_f0a3438e5573.slice - libcontainer container kubepods-besteffort-pod462bd68d_09f6_4a10_92c9_f0a3438e5573.slice. 
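The kuberuntime_manager entry above pushes pod CIDR 192.168.0.0/24 to the container runtime through the CRI, and kubelet_network records the same update. As a quick sanity check on what a /24 per node provides (a sketch using the stdlib, not anything the kubelet itself runs):

```python
import ipaddress

# Pod CIDR applied by the kubelet above.
cidr = ipaddress.ip_network("192.168.0.0/24")

print(cidr.num_addresses)       # 256 addresses in the /24
print(cidr[1], "-", cidr[-2])   # host range, excluding network/broadcast
```

So this node can hand out roughly 250 pod IPs, comfortably above the default 110-pods-per-node limit.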
Aug 19 00:19:50.131467 kubelet[2658]: I0819 00:19:50.131422 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7467194e-0f11-4e35-953b-7ae8ba549ab3-lib-modules\") pod \"kube-proxy-mqrlp\" (UID: \"7467194e-0f11-4e35-953b-7ae8ba549ab3\") " pod="kube-system/kube-proxy-mqrlp" Aug 19 00:19:50.131467 kubelet[2658]: I0819 00:19:50.131464 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgw65\" (UniqueName: \"kubernetes.io/projected/462bd68d-09f6-4a10-92c9-f0a3438e5573-kube-api-access-sgw65\") pod \"tigera-operator-747864d56d-jrx62\" (UID: \"462bd68d-09f6-4a10-92c9-f0a3438e5573\") " pod="tigera-operator/tigera-operator-747864d56d-jrx62" Aug 19 00:19:50.131836 kubelet[2658]: I0819 00:19:50.131487 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7467194e-0f11-4e35-953b-7ae8ba549ab3-kube-proxy\") pod \"kube-proxy-mqrlp\" (UID: \"7467194e-0f11-4e35-953b-7ae8ba549ab3\") " pod="kube-system/kube-proxy-mqrlp" Aug 19 00:19:50.131836 kubelet[2658]: I0819 00:19:50.131506 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7467194e-0f11-4e35-953b-7ae8ba549ab3-xtables-lock\") pod \"kube-proxy-mqrlp\" (UID: \"7467194e-0f11-4e35-953b-7ae8ba549ab3\") " pod="kube-system/kube-proxy-mqrlp" Aug 19 00:19:50.131836 kubelet[2658]: I0819 00:19:50.131523 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctk7\" (UniqueName: \"kubernetes.io/projected/7467194e-0f11-4e35-953b-7ae8ba549ab3-kube-api-access-7ctk7\") pod \"kube-proxy-mqrlp\" (UID: \"7467194e-0f11-4e35-953b-7ae8ba549ab3\") " pod="kube-system/kube-proxy-mqrlp" Aug 19 00:19:50.131836 kubelet[2658]: 
I0819 00:19:50.131539 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/462bd68d-09f6-4a10-92c9-f0a3438e5573-var-lib-calico\") pod \"tigera-operator-747864d56d-jrx62\" (UID: \"462bd68d-09f6-4a10-92c9-f0a3438e5573\") " pod="tigera-operator/tigera-operator-747864d56d-jrx62" Aug 19 00:19:50.394041 kubelet[2658]: E0819 00:19:50.393913 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:50.394671 containerd[1542]: time="2025-08-19T00:19:50.394631505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqrlp,Uid:7467194e-0f11-4e35-953b-7ae8ba549ab3,Namespace:kube-system,Attempt:0,}" Aug 19 00:19:50.416946 containerd[1542]: time="2025-08-19T00:19:50.416751455Z" level=info msg="connecting to shim 84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35" address="unix:///run/containerd/s/e00f462c2ce2d3c9a2f2cbaa9acc8fc821b1d36f82e9fae5f65be9bfcbb33166" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:19:50.431698 containerd[1542]: time="2025-08-19T00:19:50.431645720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jrx62,Uid:462bd68d-09f6-4a10-92c9-f0a3438e5573,Namespace:tigera-operator,Attempt:0,}" Aug 19 00:19:50.449978 systemd[1]: Started cri-containerd-84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35.scope - libcontainer container 84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35. 
Aug 19 00:19:50.455512 containerd[1542]: time="2025-08-19T00:19:50.453585480Z" level=info msg="connecting to shim 4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c" address="unix:///run/containerd/s/937fd278636c806a41b18314b57d1faaa42055b5d841ce9b9665687001798088" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:19:50.481954 systemd[1]: Started cri-containerd-4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c.scope - libcontainer container 4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c. Aug 19 00:19:50.488819 containerd[1542]: time="2025-08-19T00:19:50.488754156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqrlp,Uid:7467194e-0f11-4e35-953b-7ae8ba549ab3,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35\"" Aug 19 00:19:50.489574 kubelet[2658]: E0819 00:19:50.489551 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:50.491454 containerd[1542]: time="2025-08-19T00:19:50.491406251Z" level=info msg="CreateContainer within sandbox \"84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 00:19:50.509358 containerd[1542]: time="2025-08-19T00:19:50.509304072Z" level=info msg="Container 8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:19:50.519326 containerd[1542]: time="2025-08-19T00:19:50.518680119Z" level=info msg="CreateContainer within sandbox \"84a2cca0f8a8a524f34b048ecbae6ad73c54d19d0bc58b3b1d3cdf22c51e9a35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f\"" Aug 19 00:19:50.519740 containerd[1542]: time="2025-08-19T00:19:50.519705343Z" level=info 
msg="StartContainer for \"8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f\"" Aug 19 00:19:50.521766 containerd[1542]: time="2025-08-19T00:19:50.521728912Z" level=info msg="connecting to shim 8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f" address="unix:///run/containerd/s/e00f462c2ce2d3c9a2f2cbaa9acc8fc821b1d36f82e9fae5f65be9bfcbb33166" protocol=ttrpc version=3 Aug 19 00:19:50.526416 containerd[1542]: time="2025-08-19T00:19:50.526372018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jrx62,Uid:462bd68d-09f6-4a10-92c9-f0a3438e5573,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c\"" Aug 19 00:19:50.529613 containerd[1542]: time="2025-08-19T00:19:50.529050831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 00:19:50.544964 systemd[1]: Started cri-containerd-8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f.scope - libcontainer container 8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f. Aug 19 00:19:50.582135 containerd[1542]: time="2025-08-19T00:19:50.582083890Z" level=info msg="StartContainer for \"8a25037013e233a9dc34c0f528e68ac3450ab1d52d83158a0c7ec9dbf534f11f\" returns successfully" Aug 19 00:19:50.841366 kubelet[2658]: E0819 00:19:50.841333 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:51.552485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299568267.mount: Deactivated successfully. 
Aug 19 00:19:52.009749 containerd[1542]: time="2025-08-19T00:19:52.009526712Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:19:52.010889 containerd[1542]: time="2025-08-19T00:19:52.010848168Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 19 00:19:52.011667 containerd[1542]: time="2025-08-19T00:19:52.011625251Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:19:52.014017 containerd[1542]: time="2025-08-19T00:19:52.013981018Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:19:52.014671 containerd[1542]: time="2025-08-19T00:19:52.014632307Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.485017866s" Aug 19 00:19:52.014671 containerd[1542]: time="2025-08-19T00:19:52.014667665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 19 00:19:52.016757 containerd[1542]: time="2025-08-19T00:19:52.016712806Z" level=info msg="CreateContainer within sandbox \"4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 00:19:52.025692 containerd[1542]: time="2025-08-19T00:19:52.025647937Z" level=info msg="Container 
ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:19:52.031321 containerd[1542]: time="2025-08-19T00:19:52.031279986Z" level=info msg="CreateContainer within sandbox \"4bc2d9e0a1471b6953064db03d699e45f1346e6a1ea3ddcff5437834e812cd1c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae\"" Aug 19 00:19:52.031945 containerd[1542]: time="2025-08-19T00:19:52.031919435Z" level=info msg="StartContainer for \"ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae\"" Aug 19 00:19:52.032697 containerd[1542]: time="2025-08-19T00:19:52.032672279Z" level=info msg="connecting to shim ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae" address="unix:///run/containerd/s/937fd278636c806a41b18314b57d1faaa42055b5d841ce9b9665687001798088" protocol=ttrpc version=3 Aug 19 00:19:52.053959 systemd[1]: Started cri-containerd-ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae.scope - libcontainer container ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae. 
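The tigera-operator pull above reports both the image size and the pull duration, which is enough for a rough effective-throughput figure (numbers copied from the "Pulled image" entry; this back-of-the-envelope ignores decompression time and registry round-trips):

```python
# Figures reported by containerd for the quay.io/tigera/operator:v1.38.3 pull.
size_bytes = 22146605      # image size from the "Pulled image" entry
duration_s = 1.485017866   # pull duration from the same entry

rate_mib_s = size_bytes / duration_s / (1024 * 1024)
print(f"effective pull rate: {rate_mib_s:.1f} MiB/s")
```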
Aug 19 00:19:52.078511 containerd[1542]: time="2025-08-19T00:19:52.078468637Z" level=info msg="StartContainer for \"ef70793fd7a5e8f9351df090d25b02df36874848477565d6c24398229a2f9dae\" returns successfully" Aug 19 00:19:52.858820 kubelet[2658]: I0819 00:19:52.858729 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mqrlp" podStartSLOduration=2.858712283 podStartE2EDuration="2.858712283s" podCreationTimestamp="2025-08-19 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:19:50.851756938 +0000 UTC m=+6.139826809" watchObservedRunningTime="2025-08-19 00:19:52.858712283 +0000 UTC m=+8.146782154" Aug 19 00:19:52.859614 kubelet[2658]: I0819 00:19:52.859572 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jrx62" podStartSLOduration=1.371528295 podStartE2EDuration="2.859559522s" podCreationTimestamp="2025-08-19 00:19:50 +0000 UTC" firstStartedPulling="2025-08-19 00:19:50.527482397 +0000 UTC m=+5.815552268" lastFinishedPulling="2025-08-19 00:19:52.015513664 +0000 UTC m=+7.303583495" observedRunningTime="2025-08-19 00:19:52.859544763 +0000 UTC m=+8.147614634" watchObservedRunningTime="2025-08-19 00:19:52.859559522 +0000 UTC m=+8.147629393" Aug 19 00:19:56.062547 kubelet[2658]: E0819 00:19:56.062492 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:56.163918 kubelet[2658]: E0819 00:19:56.163882 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:56.859372 kubelet[2658]: E0819 00:19:56.858276 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:56.859372 kubelet[2658]: E0819 00:19:56.858329 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:19:57.605197 sudo[1744]: pam_unix(sudo:session): session closed for user root Aug 19 00:19:57.607143 sshd[1743]: Connection closed by 10.0.0.1 port 49990 Aug 19 00:19:57.608158 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 19 00:19:57.615539 systemd-logind[1511]: Session 7 logged out. Waiting for processes to exit. Aug 19 00:19:57.615920 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:49990.service: Deactivated successfully. Aug 19 00:19:57.624903 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 00:19:57.625979 systemd[1]: session-7.scope: Consumed 7.510s CPU time, 223.9M memory peak. Aug 19 00:19:57.628748 systemd-logind[1511]: Removed session 7. Aug 19 00:19:58.809958 kubelet[2658]: E0819 00:19:58.809915 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:00.506885 update_engine[1517]: I20250819 00:20:00.506811 1517 update_attempter.cc:509] Updating boot flags... Aug 19 00:20:02.953341 systemd[1]: Created slice kubepods-besteffort-pod14742f14_9169_48dd_b41a_b0088e38d2c6.slice - libcontainer container kubepods-besteffort-pod14742f14_9169_48dd_b41a_b0088e38d2c6.slice. 
Aug 19 00:20:03.020879 kubelet[2658]: I0819 00:20:03.020833 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14742f14-9169-48dd-b41a-b0088e38d2c6-tigera-ca-bundle\") pod \"calico-typha-7fb5878c4b-lscb9\" (UID: \"14742f14-9169-48dd-b41a-b0088e38d2c6\") " pod="calico-system/calico-typha-7fb5878c4b-lscb9" Aug 19 00:20:03.021437 kubelet[2658]: I0819 00:20:03.021348 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2pc\" (UniqueName: \"kubernetes.io/projected/14742f14-9169-48dd-b41a-b0088e38d2c6-kube-api-access-mk2pc\") pod \"calico-typha-7fb5878c4b-lscb9\" (UID: \"14742f14-9169-48dd-b41a-b0088e38d2c6\") " pod="calico-system/calico-typha-7fb5878c4b-lscb9" Aug 19 00:20:03.021437 kubelet[2658]: I0819 00:20:03.021382 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14742f14-9169-48dd-b41a-b0088e38d2c6-typha-certs\") pod \"calico-typha-7fb5878c4b-lscb9\" (UID: \"14742f14-9169-48dd-b41a-b0088e38d2c6\") " pod="calico-system/calico-typha-7fb5878c4b-lscb9" Aug 19 00:20:03.258815 kubelet[2658]: E0819 00:20:03.258555 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:03.268608 containerd[1542]: time="2025-08-19T00:20:03.268225305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb5878c4b-lscb9,Uid:14742f14-9169-48dd-b41a-b0088e38d2c6,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:03.311056 containerd[1542]: time="2025-08-19T00:20:03.310881736Z" level=info msg="connecting to shim ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd" 
address="unix:///run/containerd/s/05e1296c5c20f94d032b2a7626b887d53118b31b47d739c702e81d6b618977f0" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:03.365273 systemd[1]: Created slice kubepods-besteffort-pode25fa331_6800_4fe4_8297_93fb5d53e15b.slice - libcontainer container kubepods-besteffort-pode25fa331_6800_4fe4_8297_93fb5d53e15b.slice. Aug 19 00:20:03.389051 systemd[1]: Started cri-containerd-ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd.scope - libcontainer container ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd. Aug 19 00:20:03.426395 kubelet[2658]: I0819 00:20:03.425848 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-lib-modules\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426395 kubelet[2658]: I0819 00:20:03.425907 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-cni-net-dir\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426395 kubelet[2658]: I0819 00:20:03.425929 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e25fa331-6800-4fe4-8297-93fb5d53e15b-node-certs\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426395 kubelet[2658]: I0819 00:20:03.425951 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-flexvol-driver-host\") pod 
\"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426395 kubelet[2658]: I0819 00:20:03.426003 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-var-run-calico\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426765 kubelet[2658]: I0819 00:20:03.426021 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62rf\" (UniqueName: \"kubernetes.io/projected/e25fa331-6800-4fe4-8297-93fb5d53e15b-kube-api-access-s62rf\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426765 kubelet[2658]: I0819 00:20:03.426045 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-policysync\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426765 kubelet[2658]: I0819 00:20:03.426069 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-cni-log-dir\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426765 kubelet[2658]: I0819 00:20:03.426090 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-var-lib-calico\") pod \"calico-node-kxlf7\" (UID: 
\"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.426765 kubelet[2658]: I0819 00:20:03.426110 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-cni-bin-dir\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.427016 kubelet[2658]: I0819 00:20:03.426130 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25fa331-6800-4fe4-8297-93fb5d53e15b-tigera-ca-bundle\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.427016 kubelet[2658]: I0819 00:20:03.426150 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e25fa331-6800-4fe4-8297-93fb5d53e15b-xtables-lock\") pod \"calico-node-kxlf7\" (UID: \"e25fa331-6800-4fe4-8297-93fb5d53e15b\") " pod="calico-system/calico-node-kxlf7" Aug 19 00:20:03.458547 containerd[1542]: time="2025-08-19T00:20:03.458497967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb5878c4b-lscb9,Uid:14742f14-9169-48dd-b41a-b0088e38d2c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd\"" Aug 19 00:20:03.460790 kubelet[2658]: E0819 00:20:03.460710 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:03.467720 containerd[1542]: time="2025-08-19T00:20:03.467613311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 00:20:03.528655 kubelet[2658]: E0819 
00:20:03.528611 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.528655 kubelet[2658]: W0819 00:20:03.528635 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.529074 kubelet[2658]: E0819 00:20:03.529053 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.529106 kubelet[2658]: W0819 00:20:03.529073 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.529521 kubelet[2658]: E0819 00:20:03.529501 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.529565 kubelet[2658]: W0819 00:20:03.529531 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530363 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530513 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530586 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532242 kubelet[2658]: W0819 00:20:03.530595 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530614 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530745 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532242 kubelet[2658]: W0819 00:20:03.530753 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530806 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.532242 kubelet[2658]: E0819 00:20:03.530881 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532242 kubelet[2658]: W0819 00:20:03.530889 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.530903 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531029 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532626 kubelet[2658]: W0819 00:20:03.531036 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531074 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531192 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532626 kubelet[2658]: W0819 00:20:03.531252 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531333 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531452 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.532626 kubelet[2658]: W0819 00:20:03.531459 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.532626 kubelet[2658]: E0819 00:20:03.531467 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.531607 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.533009 kubelet[2658]: W0819 00:20:03.531614 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.531621 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.531756 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.533009 kubelet[2658]: W0819 00:20:03.531763 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.531784 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.531904 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.533009 kubelet[2658]: E0819 00:20:03.532941 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.533780 kubelet[2658]: W0819 00:20:03.531911 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.533780 kubelet[2658]: E0819 00:20:03.533722 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.534043 kubelet[2658]: E0819 00:20:03.533952 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.534043 kubelet[2658]: W0819 00:20:03.533968 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.534043 kubelet[2658]: E0819 00:20:03.533978 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.536010 kubelet[2658]: E0819 00:20:03.535965 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.536010 kubelet[2658]: W0819 00:20:03.535986 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.536010 kubelet[2658]: E0819 00:20:03.536009 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538015 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.542794 kubelet[2658]: W0819 00:20:03.538039 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538114 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538220 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.542794 kubelet[2658]: W0819 00:20:03.538228 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538240 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538546 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.542794 kubelet[2658]: W0819 00:20:03.538576 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.542794 kubelet[2658]: E0819 00:20:03.538588 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.549038 kubelet[2658]: E0819 00:20:03.548997 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.549038 kubelet[2658]: W0819 00:20:03.549022 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.549195 kubelet[2658]: E0819 00:20:03.549055 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.556063 kubelet[2658]: E0819 00:20:03.556018 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.556063 kubelet[2658]: W0819 00:20:03.556046 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.556063 kubelet[2658]: E0819 00:20:03.556067 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.669929 kubelet[2658]: E0819 00:20:03.669870 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:03.670076 containerd[1542]: time="2025-08-19T00:20:03.669980287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxlf7,Uid:e25fa331-6800-4fe4-8297-93fb5d53e15b,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:03.673108 kubelet[2658]: I0819 00:20:03.672860 2658 status_manager.go:890] "Failed to get status for pod" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" pod="calico-system/csi-node-driver-mnt8g" err="pods \"csi-node-driver-mnt8g\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Aug 19 00:20:03.694373 containerd[1542]: time="2025-08-19T00:20:03.694325792Z" level=info msg="connecting to shim 044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5" 
address="unix:///run/containerd/s/ebbd2f1ce8c76009b1339b4666b9e7c0ea1d9ea23c4ac108b49c6fdb1e4ac6e8" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:03.713873 kubelet[2658]: E0819 00:20:03.713835 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.713873 kubelet[2658]: W0819 00:20:03.713860 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.713873 kubelet[2658]: E0819 00:20:03.713882 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.714073 kubelet[2658]: E0819 00:20:03.714057 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.714102 kubelet[2658]: W0819 00:20:03.714070 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.714123 kubelet[2658]: E0819 00:20:03.714105 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.714290 kubelet[2658]: E0819 00:20:03.714265 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.714290 kubelet[2658]: W0819 00:20:03.714279 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.714290 kubelet[2658]: E0819 00:20:03.714288 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.714434 kubelet[2658]: E0819 00:20:03.714422 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.714434 kubelet[2658]: W0819 00:20:03.714433 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.714502 kubelet[2658]: E0819 00:20:03.714442 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.714636 kubelet[2658]: E0819 00:20:03.714623 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.714636 kubelet[2658]: W0819 00:20:03.714635 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.714736 kubelet[2658]: E0819 00:20:03.714642 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.714762 kubelet[2658]: E0819 00:20:03.714754 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.714762 kubelet[2658]: W0819 00:20:03.714760 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.714854 kubelet[2658]: E0819 00:20:03.714767 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.715687 kubelet[2658]: E0819 00:20:03.715532 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.715687 kubelet[2658]: W0819 00:20:03.715548 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.715687 kubelet[2658]: E0819 00:20:03.715561 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.715868 kubelet[2658]: E0819 00:20:03.715814 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.715899 kubelet[2658]: W0819 00:20:03.715874 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.715899 kubelet[2658]: E0819 00:20:03.715887 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.716073 kubelet[2658]: E0819 00:20:03.716044 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.716073 kubelet[2658]: W0819 00:20:03.716073 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.716227 kubelet[2658]: E0819 00:20:03.716082 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.716278 kubelet[2658]: E0819 00:20:03.716228 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.716278 kubelet[2658]: W0819 00:20:03.716236 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.716278 kubelet[2658]: E0819 00:20:03.716245 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.716923 kubelet[2658]: E0819 00:20:03.716900 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.716923 kubelet[2658]: W0819 00:20:03.716916 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.716923 kubelet[2658]: E0819 00:20:03.716927 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.717114 kubelet[2658]: E0819 00:20:03.717091 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.717114 kubelet[2658]: W0819 00:20:03.717104 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.717114 kubelet[2658]: E0819 00:20:03.717112 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.717313 kubelet[2658]: E0819 00:20:03.717300 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.717313 kubelet[2658]: W0819 00:20:03.717312 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.717402 kubelet[2658]: E0819 00:20:03.717322 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.717529 kubelet[2658]: E0819 00:20:03.717514 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.717560 kubelet[2658]: W0819 00:20:03.717530 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.717560 kubelet[2658]: E0819 00:20:03.717538 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.717869 kubelet[2658]: E0819 00:20:03.717851 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.717869 kubelet[2658]: W0819 00:20:03.717868 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.717946 kubelet[2658]: E0819 00:20:03.717880 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.718078 kubelet[2658]: E0819 00:20:03.718055 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.718117 kubelet[2658]: W0819 00:20:03.718079 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.718117 kubelet[2658]: E0819 00:20:03.718090 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.718254 kubelet[2658]: E0819 00:20:03.718242 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.718254 kubelet[2658]: W0819 00:20:03.718253 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.718319 kubelet[2658]: E0819 00:20:03.718262 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.718397 kubelet[2658]: E0819 00:20:03.718386 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.718418 kubelet[2658]: W0819 00:20:03.718397 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.718450 kubelet[2658]: E0819 00:20:03.718417 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.718552 kubelet[2658]: E0819 00:20:03.718540 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.718572 kubelet[2658]: W0819 00:20:03.718552 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.718572 kubelet[2658]: E0819 00:20:03.718560 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.718704 kubelet[2658]: E0819 00:20:03.718691 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.718730 kubelet[2658]: W0819 00:20:03.718702 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.718730 kubelet[2658]: E0819 00:20:03.718725 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.729269 kubelet[2658]: E0819 00:20:03.729231 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.729269 kubelet[2658]: W0819 00:20:03.729254 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.729269 kubelet[2658]: E0819 00:20:03.729276 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.729461 kubelet[2658]: I0819 00:20:03.729309 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnfm\" (UniqueName: \"kubernetes.io/projected/f6e5453f-d978-4803-a9fd-45cd7fd5890f-kube-api-access-nbnfm\") pod \"csi-node-driver-mnt8g\" (UID: \"f6e5453f-d978-4803-a9fd-45cd7fd5890f\") " pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:03.729638 kubelet[2658]: E0819 00:20:03.729606 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.729638 kubelet[2658]: W0819 00:20:03.729624 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.729848 kubelet[2658]: E0819 00:20:03.729688 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.729848 kubelet[2658]: I0819 00:20:03.729710 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f6e5453f-d978-4803-a9fd-45cd7fd5890f-varrun\") pod \"csi-node-driver-mnt8g\" (UID: \"f6e5453f-d978-4803-a9fd-45cd7fd5890f\") " pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:03.730144 kubelet[2658]: E0819 00:20:03.730107 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.730144 kubelet[2658]: W0819 00:20:03.730125 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.730144 kubelet[2658]: E0819 00:20:03.730144 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.730237 kubelet[2658]: I0819 00:20:03.730164 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6e5453f-d978-4803-a9fd-45cd7fd5890f-socket-dir\") pod \"csi-node-driver-mnt8g\" (UID: \"f6e5453f-d978-4803-a9fd-45cd7fd5890f\") " pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:03.730435 kubelet[2658]: E0819 00:20:03.730413 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.730435 kubelet[2658]: W0819 00:20:03.730433 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.730495 kubelet[2658]: E0819 00:20:03.730452 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.730883 kubelet[2658]: E0819 00:20:03.730866 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.730924 kubelet[2658]: W0819 00:20:03.730886 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.730924 kubelet[2658]: E0819 00:20:03.730902 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.731111 kubelet[2658]: E0819 00:20:03.731098 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.731111 kubelet[2658]: W0819 00:20:03.731110 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.731220 kubelet[2658]: E0819 00:20:03.731122 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.731277 kubelet[2658]: E0819 00:20:03.731263 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.731277 kubelet[2658]: W0819 00:20:03.731274 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.731343 kubelet[2658]: E0819 00:20:03.731323 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.731651 kubelet[2658]: E0819 00:20:03.731624 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.731651 kubelet[2658]: W0819 00:20:03.731636 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.731719 kubelet[2658]: E0819 00:20:03.731694 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.731958 kubelet[2658]: I0819 00:20:03.731942 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6e5453f-d978-4803-a9fd-45cd7fd5890f-kubelet-dir\") pod \"csi-node-driver-mnt8g\" (UID: \"f6e5453f-d978-4803-a9fd-45cd7fd5890f\") " pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:03.731988 kubelet[2658]: E0819 00:20:03.731819 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.732010 kubelet[2658]: W0819 00:20:03.731997 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.732038 kubelet[2658]: E0819 00:20:03.732015 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.732715 kubelet[2658]: E0819 00:20:03.732678 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.732715 kubelet[2658]: W0819 00:20:03.732702 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.732832 kubelet[2658]: E0819 00:20:03.732722 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.733510 kubelet[2658]: E0819 00:20:03.733475 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.733510 kubelet[2658]: W0819 00:20:03.733496 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.733572 kubelet[2658]: E0819 00:20:03.733518 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.733572 kubelet[2658]: I0819 00:20:03.733540 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6e5453f-d978-4803-a9fd-45cd7fd5890f-registration-dir\") pod \"csi-node-driver-mnt8g\" (UID: \"f6e5453f-d978-4803-a9fd-45cd7fd5890f\") " pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:03.733838 kubelet[2658]: E0819 00:20:03.733811 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.733838 kubelet[2658]: W0819 00:20:03.733834 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.733911 kubelet[2658]: E0819 00:20:03.733854 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.734552 kubelet[2658]: E0819 00:20:03.734513 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.734552 kubelet[2658]: W0819 00:20:03.734538 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.735006 kubelet[2658]: E0819 00:20:03.734813 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.735066 kubelet[2658]: E0819 00:20:03.735031 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.735066 kubelet[2658]: W0819 00:20:03.735047 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.735066 kubelet[2658]: E0819 00:20:03.735063 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.735868 kubelet[2658]: E0819 00:20:03.735610 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.735868 kubelet[2658]: W0819 00:20:03.735626 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.735868 kubelet[2658]: E0819 00:20:03.735637 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.739028 systemd[1]: Started cri-containerd-044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5.scope - libcontainer container 044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5. 
Aug 19 00:20:03.783976 containerd[1542]: time="2025-08-19T00:20:03.783834076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxlf7,Uid:e25fa331-6800-4fe4-8297-93fb5d53e15b,Namespace:calico-system,Attempt:0,} returns sandbox id \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\"" Aug 19 00:20:03.834952 kubelet[2658]: E0819 00:20:03.834902 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.834952 kubelet[2658]: W0819 00:20:03.834935 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.834952 kubelet[2658]: E0819 00:20:03.834956 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.835242 kubelet[2658]: E0819 00:20:03.835212 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.835242 kubelet[2658]: W0819 00:20:03.835225 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.835304 kubelet[2658]: E0819 00:20:03.835246 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.835469 kubelet[2658]: E0819 00:20:03.835441 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.835469 kubelet[2658]: W0819 00:20:03.835455 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.835469 kubelet[2658]: E0819 00:20:03.835469 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.835644 kubelet[2658]: E0819 00:20:03.835620 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.835644 kubelet[2658]: W0819 00:20:03.835634 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.835704 kubelet[2658]: E0819 00:20:03.835646 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.835863 kubelet[2658]: E0819 00:20:03.835848 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.835863 kubelet[2658]: W0819 00:20:03.835861 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.835919 kubelet[2658]: E0819 00:20:03.835874 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.836129 kubelet[2658]: E0819 00:20:03.836098 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836129 kubelet[2658]: W0819 00:20:03.836114 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836180 kubelet[2658]: E0819 00:20:03.836128 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.836283 kubelet[2658]: E0819 00:20:03.836271 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836283 kubelet[2658]: W0819 00:20:03.836282 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836330 kubelet[2658]: E0819 00:20:03.836308 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.836423 kubelet[2658]: E0819 00:20:03.836412 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836443 kubelet[2658]: W0819 00:20:03.836422 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836464 kubelet[2658]: E0819 00:20:03.836445 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.836560 kubelet[2658]: E0819 00:20:03.836550 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836585 kubelet[2658]: W0819 00:20:03.836560 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836585 kubelet[2658]: E0819 00:20:03.836577 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.836703 kubelet[2658]: E0819 00:20:03.836692 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836726 kubelet[2658]: W0819 00:20:03.836703 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836726 kubelet[2658]: E0819 00:20:03.836721 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.836866 kubelet[2658]: E0819 00:20:03.836855 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.836897 kubelet[2658]: W0819 00:20:03.836868 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.836897 kubelet[2658]: E0819 00:20:03.836880 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.837055 kubelet[2658]: E0819 00:20:03.837044 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.837076 kubelet[2658]: W0819 00:20:03.837054 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.837076 kubelet[2658]: E0819 00:20:03.837066 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.837218 kubelet[2658]: E0819 00:20:03.837205 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.837247 kubelet[2658]: W0819 00:20:03.837217 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.837247 kubelet[2658]: E0819 00:20:03.837235 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.837432 kubelet[2658]: E0819 00:20:03.837421 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.837453 kubelet[2658]: W0819 00:20:03.837432 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.837453 kubelet[2658]: E0819 00:20:03.837445 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.837591 kubelet[2658]: E0819 00:20:03.837580 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.837619 kubelet[2658]: W0819 00:20:03.837590 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.837619 kubelet[2658]: E0819 00:20:03.837603 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.837889 kubelet[2658]: E0819 00:20:03.837857 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.837889 kubelet[2658]: W0819 00:20:03.837875 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.837945 kubelet[2658]: E0819 00:20:03.837895 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.838028 kubelet[2658]: E0819 00:20:03.838017 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838028 kubelet[2658]: W0819 00:20:03.838027 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838073 kubelet[2658]: E0819 00:20:03.838040 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.838223 kubelet[2658]: E0819 00:20:03.838212 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838244 kubelet[2658]: W0819 00:20:03.838223 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838244 kubelet[2658]: E0819 00:20:03.838236 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.838414 kubelet[2658]: E0819 00:20:03.838403 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838434 kubelet[2658]: W0819 00:20:03.838414 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838434 kubelet[2658]: E0819 00:20:03.838427 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.838591 kubelet[2658]: E0819 00:20:03.838579 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838591 kubelet[2658]: W0819 00:20:03.838589 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838648 kubelet[2658]: E0819 00:20:03.838600 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.838739 kubelet[2658]: E0819 00:20:03.838728 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838739 kubelet[2658]: W0819 00:20:03.838738 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838802 kubelet[2658]: E0819 00:20:03.838763 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.838910 kubelet[2658]: E0819 00:20:03.838899 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.838941 kubelet[2658]: W0819 00:20:03.838910 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.838941 kubelet[2658]: E0819 00:20:03.838928 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.839059 kubelet[2658]: E0819 00:20:03.839049 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.839084 kubelet[2658]: W0819 00:20:03.839059 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.839084 kubelet[2658]: E0819 00:20:03.839072 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.839236 kubelet[2658]: E0819 00:20:03.839226 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.839257 kubelet[2658]: W0819 00:20:03.839236 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.839257 kubelet[2658]: E0819 00:20:03.839250 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:03.839404 kubelet[2658]: E0819 00:20:03.839393 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.839427 kubelet[2658]: W0819 00:20:03.839404 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.839427 kubelet[2658]: E0819 00:20:03.839413 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:03.850253 kubelet[2658]: E0819 00:20:03.850207 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:03.850253 kubelet[2658]: W0819 00:20:03.850236 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:03.850253 kubelet[2658]: E0819 00:20:03.850258 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:04.590832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670492479.mount: Deactivated successfully. 
Aug 19 00:20:05.680499 containerd[1542]: time="2025-08-19T00:20:05.680234828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:05.684277 containerd[1542]: time="2025-08-19T00:20:05.684224745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 19 00:20:05.685723 containerd[1542]: time="2025-08-19T00:20:05.685636795Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:05.703639 containerd[1542]: time="2025-08-19T00:20:05.703579343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:05.704252 containerd[1542]: time="2025-08-19T00:20:05.704173530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.23651578s" Aug 19 00:20:05.704252 containerd[1542]: time="2025-08-19T00:20:05.704211770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 19 00:20:05.710259 containerd[1542]: time="2025-08-19T00:20:05.710221565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 00:20:05.763203 containerd[1542]: time="2025-08-19T00:20:05.763007668Z" level=info msg="CreateContainer within sandbox \"ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 00:20:05.788658 containerd[1542]: time="2025-08-19T00:20:05.788580337Z" level=info msg="Container 0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:05.809989 kubelet[2658]: E0819 00:20:05.809928 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:05.812320 containerd[1542]: time="2025-08-19T00:20:05.812264325Z" level=info msg="CreateContainer within sandbox \"ff40ed3bc7735f0774bbffa9394aed402779caf001913431c8bf6a70267e12bd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c\"" Aug 19 00:20:05.827525 containerd[1542]: time="2025-08-19T00:20:05.827476008Z" level=info msg="StartContainer for \"0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c\"" Aug 19 00:20:05.829051 containerd[1542]: time="2025-08-19T00:20:05.829014616Z" level=info msg="connecting to shim 0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c" address="unix:///run/containerd/s/05e1296c5c20f94d032b2a7626b887d53118b31b47d739c702e81d6b618977f0" protocol=ttrpc version=3 Aug 19 00:20:05.863004 systemd[1]: Started cri-containerd-0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c.scope - libcontainer container 0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c. 
Aug 19 00:20:05.942002 containerd[1542]: time="2025-08-19T00:20:05.941849632Z" level=info msg="StartContainer for \"0157ea1ad847fb4cd62cd57c8f452e07bae80885e79299cf229e2ae5f9be201c\" returns successfully" Aug 19 00:20:06.903689 containerd[1542]: time="2025-08-19T00:20:06.903629902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:06.904337 containerd[1542]: time="2025-08-19T00:20:06.904303048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 19 00:20:06.905239 containerd[1542]: time="2025-08-19T00:20:06.905199311Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:06.907895 containerd[1542]: time="2025-08-19T00:20:06.907846939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:06.908669 containerd[1542]: time="2025-08-19T00:20:06.908359089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.197950969s" Aug 19 00:20:06.908669 containerd[1542]: time="2025-08-19T00:20:06.908391209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 19 00:20:06.923093 containerd[1542]: 
time="2025-08-19T00:20:06.923050003Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 00:20:06.923346 kubelet[2658]: E0819 00:20:06.923314 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:06.948370 containerd[1542]: time="2025-08-19T00:20:06.948321951Z" level=info msg="Container f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953069 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.953687 kubelet[2658]: W0819 00:20:06.953098 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953128 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953302 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.953687 kubelet[2658]: W0819 00:20:06.953310 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953354 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953488 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.953687 kubelet[2658]: W0819 00:20:06.953495 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953503 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.953687 kubelet[2658]: E0819 00:20:06.953625 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.954888 kubelet[2658]: W0819 00:20:06.953632 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.954888 kubelet[2658]: E0819 00:20:06.953639 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.955251 kubelet[2658]: E0819 00:20:06.955217 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.955251 kubelet[2658]: W0819 00:20:06.955246 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.955499 kubelet[2658]: E0819 00:20:06.955269 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.955499 kubelet[2658]: E0819 00:20:06.955476 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.955499 kubelet[2658]: W0819 00:20:06.955485 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.955499 kubelet[2658]: E0819 00:20:06.955494 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.955902 kubelet[2658]: E0819 00:20:06.955625 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.955902 kubelet[2658]: W0819 00:20:06.955633 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.955902 kubelet[2658]: E0819 00:20:06.955641 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.955902 kubelet[2658]: E0819 00:20:06.955755 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.955902 kubelet[2658]: W0819 00:20:06.955761 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.955902 kubelet[2658]: E0819 00:20:06.955787 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956001 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.956725 kubelet[2658]: W0819 00:20:06.956008 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956016 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956132 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.956725 kubelet[2658]: W0819 00:20:06.956139 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956146 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956277 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.956725 kubelet[2658]: W0819 00:20:06.956284 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.956725 kubelet[2658]: E0819 00:20:06.956291 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957334 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.957946 kubelet[2658]: W0819 00:20:06.957346 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957356 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957514 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.957946 kubelet[2658]: W0819 00:20:06.957521 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957529 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957663 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.957946 kubelet[2658]: W0819 00:20:06.957670 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957677 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.957946 kubelet[2658]: E0819 00:20:06.957789 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.958913 kubelet[2658]: W0819 00:20:06.957796 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.958913 kubelet[2658]: E0819 00:20:06.957804 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.974898 kubelet[2658]: E0819 00:20:06.974862 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.974898 kubelet[2658]: W0819 00:20:06.974886 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.975228 kubelet[2658]: E0819 00:20:06.974909 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.975405 kubelet[2658]: E0819 00:20:06.975385 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.975405 kubelet[2658]: W0819 00:20:06.975401 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.975504 kubelet[2658]: E0819 00:20:06.975417 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.976269 kubelet[2658]: E0819 00:20:06.976243 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.976269 kubelet[2658]: W0819 00:20:06.976262 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.976342 kubelet[2658]: E0819 00:20:06.976279 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.976700 kubelet[2658]: E0819 00:20:06.976681 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.976700 kubelet[2658]: W0819 00:20:06.976698 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.976846 kubelet[2658]: E0819 00:20:06.976762 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.976922 kubelet[2658]: E0819 00:20:06.976898 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.976922 kubelet[2658]: W0819 00:20:06.976908 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.977136 kubelet[2658]: E0819 00:20:06.976963 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.977136 kubelet[2658]: E0819 00:20:06.977033 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.977136 kubelet[2658]: W0819 00:20:06.977042 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.977136 kubelet[2658]: E0819 00:20:06.977090 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.977302 kubelet[2658]: E0819 00:20:06.977162 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.977302 kubelet[2658]: W0819 00:20:06.977169 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.977302 kubelet[2658]: E0819 00:20:06.977185 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.977396 kubelet[2658]: E0819 00:20:06.977304 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.977396 kubelet[2658]: W0819 00:20:06.977312 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.977396 kubelet[2658]: E0819 00:20:06.977322 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.977722 kubelet[2658]: E0819 00:20:06.977600 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.977722 kubelet[2658]: W0819 00:20:06.977617 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.977722 kubelet[2658]: E0819 00:20:06.977628 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.978144 kubelet[2658]: E0819 00:20:06.978117 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.978830 kubelet[2658]: W0819 00:20:06.978627 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.978830 kubelet[2658]: E0819 00:20:06.978680 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.979142 kubelet[2658]: E0819 00:20:06.979122 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.979261 kubelet[2658]: W0819 00:20:06.979245 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.979385 kubelet[2658]: E0819 00:20:06.979354 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.979641 kubelet[2658]: E0819 00:20:06.979624 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.979706 kubelet[2658]: W0819 00:20:06.979693 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.979812 kubelet[2658]: E0819 00:20:06.979788 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.980182 kubelet[2658]: E0819 00:20:06.980009 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.980182 kubelet[2658]: W0819 00:20:06.980031 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.980182 kubelet[2658]: E0819 00:20:06.980053 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.980391 kubelet[2658]: E0819 00:20:06.980372 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.980443 kubelet[2658]: W0819 00:20:06.980431 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.980548 kubelet[2658]: E0819 00:20:06.980535 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.980809 kubelet[2658]: E0819 00:20:06.980763 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.980809 kubelet[2658]: W0819 00:20:06.980801 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.980902 kubelet[2658]: E0819 00:20:06.980815 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.981033 kubelet[2658]: E0819 00:20:06.981011 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.981033 kubelet[2658]: W0819 00:20:06.981025 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.981103 kubelet[2658]: E0819 00:20:06.981041 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.981403 kubelet[2658]: E0819 00:20:06.981328 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.981403 kubelet[2658]: W0819 00:20:06.981346 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.981403 kubelet[2658]: E0819 00:20:06.981363 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:20:06.981744 kubelet[2658]: E0819 00:20:06.981719 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:20:06.981744 kubelet[2658]: W0819 00:20:06.981737 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:20:06.981880 kubelet[2658]: E0819 00:20:06.981750 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:20:06.986819 containerd[1542]: time="2025-08-19T00:20:06.986572686Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\"" Aug 19 00:20:06.992189 containerd[1542]: time="2025-08-19T00:20:06.992099058Z" level=info msg="StartContainer for \"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\"" Aug 19 00:20:06.994725 containerd[1542]: time="2025-08-19T00:20:06.994668888Z" level=info msg="connecting to shim f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2" address="unix:///run/containerd/s/ebbd2f1ce8c76009b1339b4666b9e7c0ea1d9ea23c4ac108b49c6fdb1e4ac6e8" protocol=ttrpc version=3 Aug 19 00:20:07.012487 kubelet[2658]: I0819 00:20:07.011524 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fb5878c4b-lscb9" podStartSLOduration=2.765647757 podStartE2EDuration="5.011415735s" podCreationTimestamp="2025-08-19 00:20:02 +0000 UTC" firstStartedPulling="2025-08-19 00:20:03.464159553 +0000 UTC m=+18.752229424" lastFinishedPulling="2025-08-19 00:20:05.709927531 +0000 UTC 
m=+20.997997402" observedRunningTime="2025-08-19 00:20:06.990501729 +0000 UTC m=+22.278571600" watchObservedRunningTime="2025-08-19 00:20:07.011415735 +0000 UTC m=+22.299485606" Aug 19 00:20:07.035154 systemd[1]: Started cri-containerd-f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2.scope - libcontainer container f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2. Aug 19 00:20:07.136972 containerd[1542]: time="2025-08-19T00:20:07.136886804Z" level=info msg="StartContainer for \"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\" returns successfully" Aug 19 00:20:07.349621 systemd[1]: cri-containerd-f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2.scope: Deactivated successfully. Aug 19 00:20:07.349953 systemd[1]: cri-containerd-f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2.scope: Consumed 267ms CPU time, 8.8M memory peak, 5.8M read from disk. Aug 19 00:20:07.370516 containerd[1542]: time="2025-08-19T00:20:07.370295501Z" level=info msg="received exit event container_id:\"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\" id:\"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\" pid:3380 exited_at:{seconds:1755562807 nanos:364956719}" Aug 19 00:20:07.370685 containerd[1542]: time="2025-08-19T00:20:07.370383300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\" id:\"f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2\" pid:3380 exited_at:{seconds:1755562807 nanos:364956719}" Aug 19 00:20:07.441292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9c6498db9cfb28d36a9af1e12bca83a1c8d52e038b2d663047052f4de6685d2-rootfs.mount: Deactivated successfully. 
Aug 19 00:20:07.805738 kubelet[2658]: E0819 00:20:07.805674 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:07.936806 kubelet[2658]: E0819 00:20:07.936750 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:07.937873 containerd[1542]: time="2025-08-19T00:20:07.937307787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 00:20:08.938275 kubelet[2658]: E0819 00:20:08.938245 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:09.805598 kubelet[2658]: E0819 00:20:09.805533 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:11.805302 kubelet[2658]: E0819 00:20:11.805249 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:11.867152 containerd[1542]: time="2025-08-19T00:20:11.867097952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 
00:20:11.867997 containerd[1542]: time="2025-08-19T00:20:11.867958219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 19 00:20:11.869678 containerd[1542]: time="2025-08-19T00:20:11.869122363Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:11.871399 containerd[1542]: time="2025-08-19T00:20:11.871349692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:11.872196 containerd[1542]: time="2025-08-19T00:20:11.872166680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.934813533s" Aug 19 00:20:11.872312 containerd[1542]: time="2025-08-19T00:20:11.872286518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 19 00:20:11.874905 containerd[1542]: time="2025-08-19T00:20:11.874846602Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 00:20:11.888831 containerd[1542]: time="2025-08-19T00:20:11.887761820Z" level=info msg="Container 0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:11.890343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749340077.mount: Deactivated 
successfully. Aug 19 00:20:11.896916 containerd[1542]: time="2025-08-19T00:20:11.896736214Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\"" Aug 19 00:20:11.897621 containerd[1542]: time="2025-08-19T00:20:11.897498443Z" level=info msg="StartContainer for \"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\"" Aug 19 00:20:11.899511 containerd[1542]: time="2025-08-19T00:20:11.899460095Z" level=info msg="connecting to shim 0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10" address="unix:///run/containerd/s/ebbd2f1ce8c76009b1339b4666b9e7c0ea1d9ea23c4ac108b49c6fdb1e4ac6e8" protocol=ttrpc version=3 Aug 19 00:20:11.929007 systemd[1]: Started cri-containerd-0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10.scope - libcontainer container 0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10. Aug 19 00:20:11.974578 containerd[1542]: time="2025-08-19T00:20:11.974530356Z" level=info msg="StartContainer for \"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\" returns successfully" Aug 19 00:20:12.631107 systemd[1]: cri-containerd-0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10.scope: Deactivated successfully. Aug 19 00:20:12.631418 systemd[1]: cri-containerd-0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10.scope: Consumed 522ms CPU time, 178.3M memory peak, 3.8M read from disk, 165.8M written to disk. 
Aug 19 00:20:12.642924 containerd[1542]: time="2025-08-19T00:20:12.642872815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\" id:\"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\" pid:3440 exited_at:{seconds:1755562812 nanos:640913561}" Aug 19 00:20:12.643082 containerd[1542]: time="2025-08-19T00:20:12.642954814Z" level=info msg="received exit event container_id:\"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\" id:\"0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10\" pid:3440 exited_at:{seconds:1755562812 nanos:640913561}" Aug 19 00:20:12.662305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0166a04e1226a1658c747cb81db9f51fdd61334c3626caa98b2a24ce72a07a10-rootfs.mount: Deactivated successfully. Aug 19 00:20:12.758426 kubelet[2658]: I0819 00:20:12.758175 2658 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 00:20:12.874092 systemd[1]: Created slice kubepods-burstable-podb0272f1e_81be_4b55_962b_becad3f92145.slice - libcontainer container kubepods-burstable-podb0272f1e_81be_4b55_962b_becad3f92145.slice. Aug 19 00:20:12.880191 systemd[1]: Created slice kubepods-besteffort-pod1495a0fe_e7d7_4006_86dc_aa41f51f3a3f.slice - libcontainer container kubepods-besteffort-pod1495a0fe_e7d7_4006_86dc_aa41f51f3a3f.slice. Aug 19 00:20:12.889174 systemd[1]: Created slice kubepods-besteffort-podb42bf112_ed43_4908_8f92_dbc623fd1e93.slice - libcontainer container kubepods-besteffort-podb42bf112_ed43_4908_8f92_dbc623fd1e93.slice. Aug 19 00:20:12.895616 systemd[1]: Created slice kubepods-burstable-pod23f8949e_36f6_45c2_a78d_d3f5983db36b.slice - libcontainer container kubepods-burstable-pod23f8949e_36f6_45c2_a78d_d3f5983db36b.slice. 
Aug 19 00:20:12.902878 systemd[1]: Created slice kubepods-besteffort-pod6a38e80b_4f1a_4714_a905_dbc453cab4e0.slice - libcontainer container kubepods-besteffort-pod6a38e80b_4f1a_4714_a905_dbc453cab4e0.slice. Aug 19 00:20:12.909666 systemd[1]: Created slice kubepods-besteffort-pod50103c4d_fbf4_4d05_8c65_1f1789776c46.slice - libcontainer container kubepods-besteffort-pod50103c4d_fbf4_4d05_8c65_1f1789776c46.slice. Aug 19 00:20:12.918431 kubelet[2658]: I0819 00:20:12.918353 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmrp\" (UniqueName: \"kubernetes.io/projected/50103c4d-fbf4-4d05-8c65-1f1789776c46-kube-api-access-jhmrp\") pod \"whisker-7997d9c98-jg76j\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " pod="calico-system/whisker-7997d9c98-jg76j" Aug 19 00:20:12.919430 kubelet[2658]: I0819 00:20:12.918434 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4p8n\" (UniqueName: \"kubernetes.io/projected/1495a0fe-e7d7-4006-86dc-aa41f51f3a3f-kube-api-access-b4p8n\") pod \"calico-apiserver-f655c676f-wx822\" (UID: \"1495a0fe-e7d7-4006-86dc-aa41f51f3a3f\") " pod="calico-apiserver/calico-apiserver-f655c676f-wx822" Aug 19 00:20:12.919430 kubelet[2658]: I0819 00:20:12.918539 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxl9\" (UniqueName: \"kubernetes.io/projected/6a38e80b-4f1a-4714-a905-dbc453cab4e0-kube-api-access-qkxl9\") pod \"calico-kube-controllers-547fbbbbdf-q4hdg\" (UID: \"6a38e80b-4f1a-4714-a905-dbc453cab4e0\") " pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" Aug 19 00:20:12.919430 kubelet[2658]: I0819 00:20:12.918564 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q56r\" (UniqueName: \"kubernetes.io/projected/b42bf112-ed43-4908-8f92-dbc623fd1e93-kube-api-access-4q56r\") pod 
\"goldmane-768f4c5c69-h8ngp\" (UID: \"b42bf112-ed43-4908-8f92-dbc623fd1e93\") " pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:12.919430 kubelet[2658]: I0819 00:20:12.918583 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23f8949e-36f6-45c2-a78d-d3f5983db36b-config-volume\") pod \"coredns-668d6bf9bc-d6sfn\" (UID: \"23f8949e-36f6-45c2-a78d-d3f5983db36b\") " pod="kube-system/coredns-668d6bf9bc-d6sfn" Aug 19 00:20:12.919430 kubelet[2658]: I0819 00:20:12.918603 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t587v\" (UniqueName: \"kubernetes.io/projected/23f8949e-36f6-45c2-a78d-d3f5983db36b-kube-api-access-t587v\") pod \"coredns-668d6bf9bc-d6sfn\" (UID: \"23f8949e-36f6-45c2-a78d-d3f5983db36b\") " pod="kube-system/coredns-668d6bf9bc-d6sfn" Aug 19 00:20:12.918854 systemd[1]: Created slice kubepods-besteffort-pod2873fb9f_c1a3_4dd4_9a74_be8cf0edc28c.slice - libcontainer container kubepods-besteffort-pod2873fb9f_c1a3_4dd4_9a74_be8cf0edc28c.slice. 
Aug 19 00:20:12.919701 kubelet[2658]: I0819 00:20:12.918619 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b42bf112-ed43-4908-8f92-dbc623fd1e93-goldmane-key-pair\") pod \"goldmane-768f4c5c69-h8ngp\" (UID: \"b42bf112-ed43-4908-8f92-dbc623fd1e93\") " pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:12.919701 kubelet[2658]: I0819 00:20:12.918640 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfscq\" (UniqueName: \"kubernetes.io/projected/2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c-kube-api-access-zfscq\") pod \"calico-apiserver-f655c676f-9ff6x\" (UID: \"2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c\") " pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" Aug 19 00:20:12.919701 kubelet[2658]: I0819 00:20:12.918660 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-ca-bundle\") pod \"whisker-7997d9c98-jg76j\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " pod="calico-system/whisker-7997d9c98-jg76j" Aug 19 00:20:12.919701 kubelet[2658]: I0819 00:20:12.918677 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0272f1e-81be-4b55-962b-becad3f92145-config-volume\") pod \"coredns-668d6bf9bc-v7v5f\" (UID: \"b0272f1e-81be-4b55-962b-becad3f92145\") " pod="kube-system/coredns-668d6bf9bc-v7v5f" Aug 19 00:20:12.919701 kubelet[2658]: I0819 00:20:12.918692 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c-calico-apiserver-certs\") pod \"calico-apiserver-f655c676f-9ff6x\" (UID: 
\"2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c\") " pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" Aug 19 00:20:12.920668 kubelet[2658]: I0819 00:20:12.918707 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b42bf112-ed43-4908-8f92-dbc623fd1e93-config\") pod \"goldmane-768f4c5c69-h8ngp\" (UID: \"b42bf112-ed43-4908-8f92-dbc623fd1e93\") " pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:12.920668 kubelet[2658]: I0819 00:20:12.918727 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-backend-key-pair\") pod \"whisker-7997d9c98-jg76j\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " pod="calico-system/whisker-7997d9c98-jg76j" Aug 19 00:20:12.920668 kubelet[2658]: I0819 00:20:12.918745 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b42bf112-ed43-4908-8f92-dbc623fd1e93-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-h8ngp\" (UID: \"b42bf112-ed43-4908-8f92-dbc623fd1e93\") " pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:12.920668 kubelet[2658]: I0819 00:20:12.918765 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8x2\" (UniqueName: \"kubernetes.io/projected/b0272f1e-81be-4b55-962b-becad3f92145-kube-api-access-7h8x2\") pod \"coredns-668d6bf9bc-v7v5f\" (UID: \"b0272f1e-81be-4b55-962b-becad3f92145\") " pod="kube-system/coredns-668d6bf9bc-v7v5f" Aug 19 00:20:12.920668 kubelet[2658]: I0819 00:20:12.918808 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/1495a0fe-e7d7-4006-86dc-aa41f51f3a3f-calico-apiserver-certs\") pod \"calico-apiserver-f655c676f-wx822\" (UID: \"1495a0fe-e7d7-4006-86dc-aa41f51f3a3f\") " pod="calico-apiserver/calico-apiserver-f655c676f-wx822" Aug 19 00:20:12.921371 kubelet[2658]: I0819 00:20:12.918826 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a38e80b-4f1a-4714-a905-dbc453cab4e0-tigera-ca-bundle\") pod \"calico-kube-controllers-547fbbbbdf-q4hdg\" (UID: \"6a38e80b-4f1a-4714-a905-dbc453cab4e0\") " pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" Aug 19 00:20:12.956497 containerd[1542]: time="2025-08-19T00:20:12.956416188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 00:20:13.185105 kubelet[2658]: E0819 00:20:13.184975 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:13.186524 containerd[1542]: time="2025-08-19T00:20:13.186394340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-wx822,Uid:1495a0fe-e7d7-4006-86dc-aa41f51f3a3f,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:20:13.186924 containerd[1542]: time="2025-08-19T00:20:13.186884414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7v5f,Uid:b0272f1e-81be-4b55-962b-becad3f92145,Namespace:kube-system,Attempt:0,}" Aug 19 00:20:13.197250 containerd[1542]: time="2025-08-19T00:20:13.197121087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h8ngp,Uid:b42bf112-ed43-4908-8f92-dbc623fd1e93,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:13.198905 kubelet[2658]: E0819 00:20:13.198877 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:13.199545 containerd[1542]: time="2025-08-19T00:20:13.199468058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d6sfn,Uid:23f8949e-36f6-45c2-a78d-d3f5983db36b,Namespace:kube-system,Attempt:0,}" Aug 19 00:20:13.207622 containerd[1542]: time="2025-08-19T00:20:13.207577438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547fbbbbdf-q4hdg,Uid:6a38e80b-4f1a-4714-a905-dbc453cab4e0,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:13.216692 containerd[1542]: time="2025-08-19T00:20:13.216647725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7997d9c98-jg76j,Uid:50103c4d-fbf4-4d05-8c65-1f1789776c46,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:13.229431 containerd[1542]: time="2025-08-19T00:20:13.229286809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-9ff6x,Uid:2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:20:13.813819 systemd[1]: Created slice kubepods-besteffort-podf6e5453f_d978_4803_a9fd_45cd7fd5890f.slice - libcontainer container kubepods-besteffort-podf6e5453f_d978_4803_a9fd_45cd7fd5890f.slice. 
Aug 19 00:20:13.816599 containerd[1542]: time="2025-08-19T00:20:13.816561808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnt8g,Uid:f6e5453f-d978-4803-a9fd-45cd7fd5890f,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:13.902809 containerd[1542]: time="2025-08-19T00:20:13.902738179Z" level=error msg="Failed to destroy network for sandbox \"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.905721 containerd[1542]: time="2025-08-19T00:20:13.905547584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-wx822,Uid:1495a0fe-e7d7-4006-86dc-aa41f51f3a3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.907873 kubelet[2658]: E0819 00:20:13.907698 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.907873 kubelet[2658]: E0819 00:20:13.907827 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f655c676f-wx822" Aug 19 00:20:13.907873 kubelet[2658]: E0819 00:20:13.907853 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f655c676f-wx822" Aug 19 00:20:13.908318 kubelet[2658]: E0819 00:20:13.907917 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f655c676f-wx822_calico-apiserver(1495a0fe-e7d7-4006-86dc-aa41f51f3a3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f655c676f-wx822_calico-apiserver(1495a0fe-e7d7-4006-86dc-aa41f51f3a3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7bb3ddab52d1ba1592cfffe4ba7fe79c9fdde6faea57730636b656ca1dcf894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f655c676f-wx822" podUID="1495a0fe-e7d7-4006-86dc-aa41f51f3a3f" Aug 19 00:20:13.927366 containerd[1542]: time="2025-08-19T00:20:13.927319514Z" level=error msg="Failed to destroy network for sandbox \"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.928887 containerd[1542]: time="2025-08-19T00:20:13.928846135Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-9ff6x,Uid:2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.929396 kubelet[2658]: E0819 00:20:13.929194 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.929396 kubelet[2658]: E0819 00:20:13.929250 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" Aug 19 00:20:13.929396 kubelet[2658]: E0819 00:20:13.929271 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" Aug 19 00:20:13.929733 kubelet[2658]: E0819 
00:20:13.929353 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f655c676f-9ff6x_calico-apiserver(2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f655c676f-9ff6x_calico-apiserver(2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec159c117b2e80f0e861f527483a81ce73cc6cca297224139eafad13ccf7086a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" podUID="2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c" Aug 19 00:20:13.930988 containerd[1542]: time="2025-08-19T00:20:13.930955029Z" level=error msg="Failed to destroy network for sandbox \"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.932304 containerd[1542]: time="2025-08-19T00:20:13.932270173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7v5f,Uid:b0272f1e-81be-4b55-962b-becad3f92145,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.933144 kubelet[2658]: E0819 00:20:13.933089 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.933208 kubelet[2658]: E0819 00:20:13.933162 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7v5f" Aug 19 00:20:13.933208 kubelet[2658]: E0819 00:20:13.933180 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7v5f" Aug 19 00:20:13.933392 kubelet[2658]: E0819 00:20:13.933355 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7v5f_kube-system(b0272f1e-81be-4b55-962b-becad3f92145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7v5f_kube-system(b0272f1e-81be-4b55-962b-becad3f92145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d3af6e3ecbe72620d3fafa7239f3ab7db23acc8d3cdaeff60dd0ada5df12c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7v5f" 
podUID="b0272f1e-81be-4b55-962b-becad3f92145" Aug 19 00:20:13.935332 containerd[1542]: time="2025-08-19T00:20:13.935295376Z" level=error msg="Failed to destroy network for sandbox \"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.938949 containerd[1542]: time="2025-08-19T00:20:13.938897251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h8ngp,Uid:b42bf112-ed43-4908-8f92-dbc623fd1e93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.939158 kubelet[2658]: E0819 00:20:13.939118 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.939205 kubelet[2658]: E0819 00:20:13.939181 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:13.939272 kubelet[2658]: E0819 
00:20:13.939202 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-h8ngp" Aug 19 00:20:13.939272 kubelet[2658]: E0819 00:20:13.939245 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-h8ngp_calico-system(b42bf112-ed43-4908-8f92-dbc623fd1e93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-h8ngp_calico-system(b42bf112-ed43-4908-8f92-dbc623fd1e93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea0afefabea93b297ea2626ea146d92d0b1a34119608cbda928d2cb75ba79626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-h8ngp" podUID="b42bf112-ed43-4908-8f92-dbc623fd1e93" Aug 19 00:20:13.944965 containerd[1542]: time="2025-08-19T00:20:13.944921096Z" level=error msg="Failed to destroy network for sandbox \"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.949481 containerd[1542]: time="2025-08-19T00:20:13.949422920Z" level=error msg="Failed to destroy network for sandbox \"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 19 00:20:13.949644 containerd[1542]: time="2025-08-19T00:20:13.949620958Z" level=error msg="Failed to destroy network for sandbox \"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.951432 containerd[1542]: time="2025-08-19T00:20:13.951388056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547fbbbbdf-q4hdg,Uid:6a38e80b-4f1a-4714-a905-dbc453cab4e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.951718 kubelet[2658]: E0819 00:20:13.951670 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.951765 kubelet[2658]: E0819 00:20:13.951735 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" Aug 19 
00:20:13.951765 kubelet[2658]: E0819 00:20:13.951753 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" Aug 19 00:20:13.952251 kubelet[2658]: E0819 00:20:13.951891 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547fbbbbdf-q4hdg_calico-system(6a38e80b-4f1a-4714-a905-dbc453cab4e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547fbbbbdf-q4hdg_calico-system(6a38e80b-4f1a-4714-a905-dbc453cab4e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fb03bd7eb91cd8cee591c8380322c27499c218b5fc2d872891ae6a2c4742127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" podUID="6a38e80b-4f1a-4714-a905-dbc453cab4e0" Aug 19 00:20:13.952570 containerd[1542]: time="2025-08-19T00:20:13.952533682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d6sfn,Uid:23f8949e-36f6-45c2-a78d-d3f5983db36b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.952764 kubelet[2658]: E0819 00:20:13.952724 2658 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.952812 kubelet[2658]: E0819 00:20:13.952785 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d6sfn" Aug 19 00:20:13.952812 kubelet[2658]: E0819 00:20:13.952803 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d6sfn" Aug 19 00:20:13.952868 kubelet[2658]: E0819 00:20:13.952838 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d6sfn_kube-system(23f8949e-36f6-45c2-a78d-d3f5983db36b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d6sfn_kube-system(23f8949e-36f6-45c2-a78d-d3f5983db36b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59abc22ee69bbe298eec06040a12d382a258c4ee778c0c47bb7ac68a0e25d999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d6sfn" podUID="23f8949e-36f6-45c2-a78d-d3f5983db36b" Aug 19 00:20:13.953502 containerd[1542]: time="2025-08-19T00:20:13.953465550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7997d9c98-jg76j,Uid:50103c4d-fbf4-4d05-8c65-1f1789776c46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.954140 kubelet[2658]: E0819 00:20:13.954058 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.954140 kubelet[2658]: E0819 00:20:13.954101 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7997d9c98-jg76j" Aug 19 00:20:13.954140 kubelet[2658]: E0819 00:20:13.954116 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7997d9c98-jg76j" Aug 19 00:20:13.954455 kubelet[2658]: E0819 00:20:13.954144 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7997d9c98-jg76j_calico-system(50103c4d-fbf4-4d05-8c65-1f1789776c46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7997d9c98-jg76j_calico-system(50103c4d-fbf4-4d05-8c65-1f1789776c46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e70c2bc95425a51441d54c64c08617099eb37558cd1b1fcf87b76c4c0438df76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7997d9c98-jg76j" podUID="50103c4d-fbf4-4d05-8c65-1f1789776c46" Aug 19 00:20:13.955864 containerd[1542]: time="2025-08-19T00:20:13.955829321Z" level=error msg="Failed to destroy network for sandbox \"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.957652 containerd[1542]: time="2025-08-19T00:20:13.957604619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnt8g,Uid:f6e5453f-d978-4803-a9fd-45cd7fd5890f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.958048 kubelet[2658]: E0819 
00:20:13.957838 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:20:13.958048 kubelet[2658]: E0819 00:20:13.957981 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:13.958048 kubelet[2658]: E0819 00:20:13.958017 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnt8g" Aug 19 00:20:13.958178 kubelet[2658]: E0819 00:20:13.958068 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mnt8g_calico-system(f6e5453f-d978-4803-a9fd-45cd7fd5890f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mnt8g_calico-system(f6e5453f-d978-4803-a9fd-45cd7fd5890f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"311cdf1cc384cc2ca907358530ab50d8b7ec598e3659e055f3adf570e3354208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mnt8g" podUID="f6e5453f-d978-4803-a9fd-45cd7fd5890f" Aug 19 00:20:17.272086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222053202.mount: Deactivated successfully. Aug 19 00:20:17.638409 containerd[1542]: time="2025-08-19T00:20:17.638332961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:17.642023 containerd[1542]: time="2025-08-19T00:20:17.641965727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 19 00:20:17.643338 containerd[1542]: time="2025-08-19T00:20:17.643302514Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:17.648863 containerd[1542]: time="2025-08-19T00:20:17.648812981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:17.649524 containerd[1542]: time="2025-08-19T00:20:17.649438695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.692975427s" Aug 19 00:20:17.649524 containerd[1542]: time="2025-08-19T00:20:17.649470095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 19 00:20:17.661559 containerd[1542]: 
time="2025-08-19T00:20:17.661513179Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 00:20:17.702800 containerd[1542]: time="2025-08-19T00:20:17.701933152Z" level=info msg="Container 2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:17.789873 containerd[1542]: time="2025-08-19T00:20:17.789826790Z" level=info msg="CreateContainer within sandbox \"044229527aecf12ece72f8da8a32038d545691c1877159a8e09ec6026a5bb3a5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\"" Aug 19 00:20:17.790368 containerd[1542]: time="2025-08-19T00:20:17.790329986Z" level=info msg="StartContainer for \"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\"" Aug 19 00:20:17.791764 containerd[1542]: time="2025-08-19T00:20:17.791727812Z" level=info msg="connecting to shim 2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b" address="unix:///run/containerd/s/ebbd2f1ce8c76009b1339b4666b9e7c0ea1d9ea23c4ac108b49c6fdb1e4ac6e8" protocol=ttrpc version=3 Aug 19 00:20:17.813987 systemd[1]: Started cri-containerd-2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b.scope - libcontainer container 2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b. 
Aug 19 00:20:17.855204 containerd[1542]: time="2025-08-19T00:20:17.855146645Z" level=info msg="StartContainer for \"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\" returns successfully" Aug 19 00:20:17.999452 kubelet[2658]: I0819 00:20:17.999279 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kxlf7" podStartSLOduration=1.13721087 podStartE2EDuration="14.999263305s" podCreationTimestamp="2025-08-19 00:20:03 +0000 UTC" firstStartedPulling="2025-08-19 00:20:03.788204332 +0000 UTC m=+19.076274203" lastFinishedPulling="2025-08-19 00:20:17.650256767 +0000 UTC m=+32.938326638" observedRunningTime="2025-08-19 00:20:17.999110026 +0000 UTC m=+33.287179897" watchObservedRunningTime="2025-08-19 00:20:17.999263305 +0000 UTC m=+33.287333176" Aug 19 00:20:18.061095 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 19 00:20:18.061217 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 19 00:20:18.293513 kubelet[2658]: I0819 00:20:18.293204 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhmrp\" (UniqueName: \"kubernetes.io/projected/50103c4d-fbf4-4d05-8c65-1f1789776c46-kube-api-access-jhmrp\") pod \"50103c4d-fbf4-4d05-8c65-1f1789776c46\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " Aug 19 00:20:18.293513 kubelet[2658]: I0819 00:20:18.293257 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-ca-bundle\") pod \"50103c4d-fbf4-4d05-8c65-1f1789776c46\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " Aug 19 00:20:18.293513 kubelet[2658]: I0819 00:20:18.293282 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-backend-key-pair\") pod \"50103c4d-fbf4-4d05-8c65-1f1789776c46\" (UID: \"50103c4d-fbf4-4d05-8c65-1f1789776c46\") " Aug 19 00:20:18.296808 kubelet[2658]: I0819 00:20:18.296488 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "50103c4d-fbf4-4d05-8c65-1f1789776c46" (UID: "50103c4d-fbf4-4d05-8c65-1f1789776c46"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:20:18.304069 kubelet[2658]: I0819 00:20:18.304010 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50103c4d-fbf4-4d05-8c65-1f1789776c46-kube-api-access-jhmrp" (OuterVolumeSpecName: "kube-api-access-jhmrp") pod "50103c4d-fbf4-4d05-8c65-1f1789776c46" (UID: "50103c4d-fbf4-4d05-8c65-1f1789776c46"). InnerVolumeSpecName "kube-api-access-jhmrp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:20:18.304069 kubelet[2658]: I0819 00:20:18.304010 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "50103c4d-fbf4-4d05-8c65-1f1789776c46" (UID: "50103c4d-fbf4-4d05-8c65-1f1789776c46"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 00:20:18.305025 systemd[1]: var-lib-kubelet-pods-50103c4d\x2dfbf4\x2d4d05\x2d8c65\x2d1f1789776c46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhmrp.mount: Deactivated successfully. Aug 19 00:20:18.305140 systemd[1]: var-lib-kubelet-pods-50103c4d\x2dfbf4\x2d4d05\x2d8c65\x2d1f1789776c46-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 19 00:20:18.394587 kubelet[2658]: I0819 00:20:18.394541 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhmrp\" (UniqueName: \"kubernetes.io/projected/50103c4d-fbf4-4d05-8c65-1f1789776c46-kube-api-access-jhmrp\") on node \"localhost\" DevicePath \"\"" Aug 19 00:20:18.394587 kubelet[2658]: I0819 00:20:18.394576 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 19 00:20:18.394587 kubelet[2658]: I0819 00:20:18.394585 2658 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50103c4d-fbf4-4d05-8c65-1f1789776c46-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 19 00:20:18.814915 systemd[1]: Removed slice kubepods-besteffort-pod50103c4d_fbf4_4d05_8c65_1f1789776c46.slice - libcontainer container kubepods-besteffort-pod50103c4d_fbf4_4d05_8c65_1f1789776c46.slice. 
Aug 19 00:20:18.975090 kubelet[2658]: I0819 00:20:18.975042 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:20:19.089796 systemd[1]: Created slice kubepods-besteffort-pod73958b6d_30f9_4320_aa5e_dba7be018367.slice - libcontainer container kubepods-besteffort-pod73958b6d_30f9_4320_aa5e_dba7be018367.slice. Aug 19 00:20:19.199563 kubelet[2658]: I0819 00:20:19.199509 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73958b6d-30f9-4320-aa5e-dba7be018367-whisker-backend-key-pair\") pod \"whisker-5bf67656fd-p6bsc\" (UID: \"73958b6d-30f9-4320-aa5e-dba7be018367\") " pod="calico-system/whisker-5bf67656fd-p6bsc" Aug 19 00:20:19.199563 kubelet[2658]: I0819 00:20:19.199549 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73958b6d-30f9-4320-aa5e-dba7be018367-whisker-ca-bundle\") pod \"whisker-5bf67656fd-p6bsc\" (UID: \"73958b6d-30f9-4320-aa5e-dba7be018367\") " pod="calico-system/whisker-5bf67656fd-p6bsc" Aug 19 00:20:19.199563 kubelet[2658]: I0819 00:20:19.199574 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwd4w\" (UniqueName: \"kubernetes.io/projected/73958b6d-30f9-4320-aa5e-dba7be018367-kube-api-access-wwd4w\") pod \"whisker-5bf67656fd-p6bsc\" (UID: \"73958b6d-30f9-4320-aa5e-dba7be018367\") " pod="calico-system/whisker-5bf67656fd-p6bsc" Aug 19 00:20:19.395726 containerd[1542]: time="2025-08-19T00:20:19.395623628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf67656fd-p6bsc,Uid:73958b6d-30f9-4320-aa5e-dba7be018367,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:19.804951 systemd-networkd[1437]: cali32f838e273d: Link UP Aug 19 00:20:19.806528 systemd-networkd[1437]: cali32f838e273d: Gained carrier Aug 19 
00:20:19.820080 containerd[1542]: 2025-08-19 00:20:19.521 [INFO][3907] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 00:20:19.820080 containerd[1542]: 2025-08-19 00:20:19.600 [INFO][3907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5bf67656fd--p6bsc-eth0 whisker-5bf67656fd- calico-system 73958b6d-30f9-4320-aa5e-dba7be018367 883 0 2025-08-19 00:20:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bf67656fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5bf67656fd-p6bsc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali32f838e273d [] [] }} ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-" Aug 19 00:20:19.820080 containerd[1542]: 2025-08-19 00:20:19.600 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820080 containerd[1542]: 2025-08-19 00:20:19.730 [INFO][3942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" HandleID="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Workload="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.732 [INFO][3942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" 
HandleID="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Workload="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000515430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5bf67656fd-p6bsc", "timestamp":"2025-08-19 00:20:19.730327612 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.732 [INFO][3942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.732 [INFO][3942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.732 [INFO][3942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.748 [INFO][3942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" host="localhost" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.756 [INFO][3942] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.766 [INFO][3942] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.769 [INFO][3942] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.773 [INFO][3942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:19.820322 containerd[1542]: 2025-08-19 00:20:19.775 
[INFO][3942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" host="localhost" Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.777 [INFO][3942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76 Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.784 [INFO][3942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" host="localhost" Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.791 [INFO][3942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" host="localhost" Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.791 [INFO][3942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" host="localhost" Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.791 [INFO][3942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:20:19.820531 containerd[1542]: 2025-08-19 00:20:19.791 [INFO][3942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" HandleID="k8s-pod-network.0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Workload="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820639 containerd[1542]: 2025-08-19 00:20:19.794 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bf67656fd--p6bsc-eth0", GenerateName:"whisker-5bf67656fd-", Namespace:"calico-system", SelfLink:"", UID:"73958b6d-30f9-4320-aa5e-dba7be018367", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bf67656fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5bf67656fd-p6bsc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32f838e273d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:19.820639 containerd[1542]: 2025-08-19 00:20:19.794 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820707 containerd[1542]: 2025-08-19 00:20:19.794 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32f838e273d ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820707 containerd[1542]: 2025-08-19 00:20:19.804 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.820748 containerd[1542]: 2025-08-19 00:20:19.805 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bf67656fd--p6bsc-eth0", GenerateName:"whisker-5bf67656fd-", Namespace:"calico-system", SelfLink:"", UID:"73958b6d-30f9-4320-aa5e-dba7be018367", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bf67656fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76", Pod:"whisker-5bf67656fd-p6bsc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32f838e273d", MAC:"76:19:a3:33:95:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:19.820822 containerd[1542]: 2025-08-19 00:20:19.816 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" Namespace="calico-system" Pod="whisker-5bf67656fd-p6bsc" WorkloadEndpoint="localhost-k8s-whisker--5bf67656fd--p6bsc-eth0" Aug 19 00:20:19.856990 containerd[1542]: time="2025-08-19T00:20:19.856941157Z" level=info msg="connecting to shim 0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76" address="unix:///run/containerd/s/461b5fdfb30d4b90b1acc1a5a2a8ca3781178f0f18c7b98713b500bd9a4959fc" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:19.864535 systemd-networkd[1437]: vxlan.calico: Link UP Aug 19 00:20:19.864546 systemd-networkd[1437]: vxlan.calico: Gained carrier Aug 19 00:20:19.901955 systemd[1]: Started cri-containerd-0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76.scope - libcontainer container 
0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76. Aug 19 00:20:19.913099 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:19.947225 containerd[1542]: time="2025-08-19T00:20:19.947179353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf67656fd-p6bsc,Uid:73958b6d-30f9-4320-aa5e-dba7be018367,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76\"" Aug 19 00:20:19.948922 containerd[1542]: time="2025-08-19T00:20:19.948709860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 00:20:20.808151 kubelet[2658]: I0819 00:20:20.808103 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50103c4d-fbf4-4d05-8c65-1f1789776c46" path="/var/lib/kubelet/pods/50103c4d-fbf4-4d05-8c65-1f1789776c46/volumes" Aug 19 00:20:21.221106 containerd[1542]: time="2025-08-19T00:20:21.220979637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:21.222406 containerd[1542]: time="2025-08-19T00:20:21.222369042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 19 00:20:21.223176 containerd[1542]: time="2025-08-19T00:20:21.223142525Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:21.225822 containerd[1542]: time="2025-08-19T00:20:21.225765854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:21.226536 containerd[1542]: time="2025-08-19T00:20:21.226420016Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.277493517s" Aug 19 00:20:21.226536 containerd[1542]: time="2025-08-19T00:20:21.226457576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 19 00:20:21.228597 containerd[1542]: time="2025-08-19T00:20:21.228560464Z" level=info msg="CreateContainer within sandbox \"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 00:20:21.235090 containerd[1542]: time="2025-08-19T00:20:21.235051086Z" level=info msg="Container 239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:21.261535 containerd[1542]: time="2025-08-19T00:20:21.261482657Z" level=info msg="CreateContainer within sandbox \"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b\"" Aug 19 00:20:21.262228 containerd[1542]: time="2025-08-19T00:20:21.262031579Z" level=info msg="StartContainer for \"239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b\"" Aug 19 00:20:21.263290 containerd[1542]: time="2025-08-19T00:20:21.263243823Z" level=info msg="connecting to shim 239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b" address="unix:///run/containerd/s/461b5fdfb30d4b90b1acc1a5a2a8ca3781178f0f18c7b98713b500bd9a4959fc" protocol=ttrpc version=3 Aug 19 00:20:21.281942 systemd[1]: Started 
cri-containerd-239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b.scope - libcontainer container 239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b. Aug 19 00:20:21.319632 containerd[1542]: time="2025-08-19T00:20:21.319582538Z" level=info msg="StartContainer for \"239d06567a024cc7b4b17a04241a62bc26f74b184314cb5ed35a1d6a7e62ff3b\" returns successfully" Aug 19 00:20:21.322149 containerd[1542]: time="2025-08-19T00:20:21.322118907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 00:20:21.527932 systemd-networkd[1437]: cali32f838e273d: Gained IPv6LL Aug 19 00:20:21.911947 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Aug 19 00:20:23.253747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931222559.mount: Deactivated successfully. Aug 19 00:20:23.269542 containerd[1542]: time="2025-08-19T00:20:23.269482369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:23.270522 containerd[1542]: time="2025-08-19T00:20:23.270357572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 19 00:20:23.271294 containerd[1542]: time="2025-08-19T00:20:23.271263135Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:23.274431 containerd[1542]: time="2025-08-19T00:20:23.274237065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:23.274847 containerd[1542]: time="2025-08-19T00:20:23.274824347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.95267232s" Aug 19 00:20:23.274889 containerd[1542]: time="2025-08-19T00:20:23.274852467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 19 00:20:23.279577 containerd[1542]: time="2025-08-19T00:20:23.279514482Z" level=info msg="CreateContainer within sandbox \"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 00:20:23.286517 containerd[1542]: time="2025-08-19T00:20:23.286438545Z" level=info msg="Container 1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:23.295317 containerd[1542]: time="2025-08-19T00:20:23.295186173Z" level=info msg="CreateContainer within sandbox \"0fef332dab40895a59b9c272851910ee49655da631ed4b2d42f3d4a5022a6f76\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34\"" Aug 19 00:20:23.296802 containerd[1542]: time="2025-08-19T00:20:23.296716738Z" level=info msg="StartContainer for \"1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34\"" Aug 19 00:20:23.298727 containerd[1542]: time="2025-08-19T00:20:23.298484064Z" level=info msg="connecting to shim 1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34" address="unix:///run/containerd/s/461b5fdfb30d4b90b1acc1a5a2a8ca3781178f0f18c7b98713b500bd9a4959fc" protocol=ttrpc version=3 Aug 19 00:20:23.322947 systemd[1]: Started cri-containerd-1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34.scope - 
libcontainer container 1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34. Aug 19 00:20:23.373425 containerd[1542]: time="2025-08-19T00:20:23.373259268Z" level=info msg="StartContainer for \"1f1bd7c5df1defa9e953c633bd138d61b6d55233e0584285c7fbc5ecb8999b34\" returns successfully" Aug 19 00:20:24.011343 kubelet[2658]: I0819 00:20:24.011106 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bf67656fd-p6bsc" podStartSLOduration=1.684133864 podStartE2EDuration="5.011086791s" podCreationTimestamp="2025-08-19 00:20:19 +0000 UTC" firstStartedPulling="2025-08-19 00:20:19.948487542 +0000 UTC m=+35.236557373" lastFinishedPulling="2025-08-19 00:20:23.275440429 +0000 UTC m=+38.563510300" observedRunningTime="2025-08-19 00:20:24.010452029 +0000 UTC m=+39.298521900" watchObservedRunningTime="2025-08-19 00:20:24.011086791 +0000 UTC m=+39.299156662" Aug 19 00:20:24.278261 kubelet[2658]: I0819 00:20:24.278227 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:20:24.401460 containerd[1542]: time="2025-08-19T00:20:24.401381711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\" id:\"209837c2dcb94b15f93af7ef44fae758218e2a03394536d415f0daeb982ed234\" pid:4188 exited_at:{seconds:1755562824 nanos:401027070}" Aug 19 00:20:24.485836 containerd[1542]: time="2025-08-19T00:20:24.485761939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\" id:\"954bf42e97684400b9b6d757570bd64719a12607b5a477ac5930d35253ae6ad5\" pid:4214 exited_at:{seconds:1755562824 nanos:485479738}" Aug 19 00:20:25.570404 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:44906.service - OpenSSH per-connection server daemon (10.0.0.1:44906). 
Aug 19 00:20:25.634433 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 44906 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:20:25.636094 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:20:25.640028 systemd-logind[1511]: New session 8 of user core. Aug 19 00:20:25.649928 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 00:20:25.805311 kubelet[2658]: E0819 00:20:25.805269 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:25.806207 containerd[1542]: time="2025-08-19T00:20:25.805882583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d6sfn,Uid:23f8949e-36f6-45c2-a78d-d3f5983db36b,Namespace:kube-system,Attempt:0,}" Aug 19 00:20:25.806525 containerd[1542]: time="2025-08-19T00:20:25.806501465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-wx822,Uid:1495a0fe-e7d7-4006-86dc-aa41f51f3a3f,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:20:25.877464 sshd[4243]: Connection closed by 10.0.0.1 port 44906 Aug 19 00:20:25.877222 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Aug 19 00:20:25.882097 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:44906.service: Deactivated successfully. Aug 19 00:20:25.883929 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 00:20:25.887534 systemd-logind[1511]: Session 8 logged out. Waiting for processes to exit. Aug 19 00:20:25.888848 systemd-logind[1511]: Removed session 8. 
Aug 19 00:20:25.958319 systemd-networkd[1437]: cali4d90f941a63: Link UP Aug 19 00:20:25.958546 systemd-networkd[1437]: cali4d90f941a63: Gained carrier Aug 19 00:20:25.971629 containerd[1542]: 2025-08-19 00:20:25.883 [INFO][4263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0 coredns-668d6bf9bc- kube-system 23f8949e-36f6-45c2-a78d-d3f5983db36b 820 0 2025-08-19 00:19:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d6sfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d90f941a63 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-" Aug 19 00:20:25.971629 containerd[1542]: 2025-08-19 00:20:25.883 [INFO][4263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.971629 containerd[1542]: 2025-08-19 00:20:25.914 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" HandleID="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Workload="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.914 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" 
HandleID="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Workload="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d6sfn", "timestamp":"2025-08-19 00:20:25.914427919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.914 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.914 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.914 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.928 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" host="localhost" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.933 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.939 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.940 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.942 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:25.971974 containerd[1542]: 2025-08-19 00:20:25.942 
[INFO][4295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" host="localhost" Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.944 [INFO][4295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.947 [INFO][4295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" host="localhost" Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4295] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" host="localhost" Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" host="localhost" Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:20:25.972335 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" HandleID="k8s-pod-network.c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Workload="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.972589 containerd[1542]: 2025-08-19 00:20:25.955 [INFO][4263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"23f8949e-36f6-45c2-a78d-d3f5983db36b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d6sfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d90f941a63", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:25.972814 containerd[1542]: 2025-08-19 00:20:25.955 [INFO][4263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.972814 containerd[1542]: 2025-08-19 00:20:25.955 [INFO][4263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d90f941a63 ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.972814 containerd[1542]: 2025-08-19 00:20:25.958 [INFO][4263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:25.972914 containerd[1542]: 2025-08-19 00:20:25.958 [INFO][4263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"23f8949e-36f6-45c2-a78d-d3f5983db36b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b", Pod:"coredns-668d6bf9bc-d6sfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d90f941a63", MAC:"aa:01:8f:4f:1d:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:25.972914 containerd[1542]: 2025-08-19 00:20:25.967 [INFO][4263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d6sfn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d6sfn-eth0" Aug 19 00:20:26.013385 containerd[1542]: time="2025-08-19T00:20:26.013279463Z" level=info msg="connecting to shim c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b" address="unix:///run/containerd/s/f14d8e473ea978b9b6947a4ee84f9f443b3e1a367fd7b5165b0887b6ec910c2d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:26.049183 systemd[1]: Started cri-containerd-c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b.scope - libcontainer container c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b. Aug 19 00:20:26.064560 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:26.067133 systemd-networkd[1437]: calibe126b67b85: Link UP Aug 19 00:20:26.067282 systemd-networkd[1437]: calibe126b67b85: Gained carrier Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.878 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f655c676f--wx822-eth0 calico-apiserver-f655c676f- calico-apiserver 1495a0fe-e7d7-4006-86dc-aa41f51f3a3f 821 0 2025-08-19 00:19:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f655c676f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f655c676f-wx822 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe126b67b85 [] [] }} ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.878 [INFO][4265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.915 [INFO][4286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" HandleID="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Workload="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.915 [INFO][4286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" HandleID="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Workload="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f655c676f-wx822", "timestamp":"2025-08-19 00:20:25.915343922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.915 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:25.953 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.029 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.033 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.040 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.041 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.043 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.044 [INFO][4286] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.045 [INFO][4286] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826 Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.049 [INFO][4286] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.056 [INFO][4286] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.056 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" host="localhost" Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.056 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:20:26.085154 containerd[1542]: 2025-08-19 00:20:26.056 [INFO][4286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" HandleID="k8s-pod-network.26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Workload="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.064 [INFO][4265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f655c676f--wx822-eth0", GenerateName:"calico-apiserver-f655c676f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1495a0fe-e7d7-4006-86dc-aa41f51f3a3f", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f655c676f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f655c676f-wx822", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe126b67b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.064 [INFO][4265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.064 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe126b67b85 ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.066 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.067 [INFO][4265] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f655c676f--wx822-eth0", GenerateName:"calico-apiserver-f655c676f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1495a0fe-e7d7-4006-86dc-aa41f51f3a3f", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f655c676f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826", Pod:"calico-apiserver-f655c676f-wx822", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe126b67b85", MAC:"52:db:1b:43:4c:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:26.086181 containerd[1542]: 2025-08-19 00:20:26.079 [INFO][4265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-wx822" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--wx822-eth0" Aug 19 00:20:26.099371 containerd[1542]: time="2025-08-19T00:20:26.099219842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d6sfn,Uid:23f8949e-36f6-45c2-a78d-d3f5983db36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b\"" Aug 19 00:20:26.100243 kubelet[2658]: E0819 00:20:26.100219 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:26.106530 containerd[1542]: time="2025-08-19T00:20:26.106489543Z" level=info msg="CreateContainer within sandbox \"c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:20:26.116171 containerd[1542]: time="2025-08-19T00:20:26.116084012Z" level=info msg="connecting to shim 26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826" address="unix:///run/containerd/s/6b74498c6b8d54571f255ef65b18785aad06ffe454d98103f79fd15d6c3cca3a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:26.120539 containerd[1542]: time="2025-08-19T00:20:26.120496905Z" level=info msg="Container c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:26.126196 containerd[1542]: time="2025-08-19T00:20:26.126160723Z" level=info msg="CreateContainer within sandbox \"c43f948e632a296273ec13ecd5e29d69dea6e0266dd7a364a6f635dce7896b9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf\"" Aug 19 00:20:26.127078 containerd[1542]: time="2025-08-19T00:20:26.126894885Z" level=info 
msg="StartContainer for \"c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf\"" Aug 19 00:20:26.133239 containerd[1542]: time="2025-08-19T00:20:26.129897374Z" level=info msg="connecting to shim c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf" address="unix:///run/containerd/s/f14d8e473ea978b9b6947a4ee84f9f443b3e1a367fd7b5165b0887b6ec910c2d" protocol=ttrpc version=3 Aug 19 00:20:26.139024 systemd[1]: Started cri-containerd-26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826.scope - libcontainer container 26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826. Aug 19 00:20:26.146156 systemd[1]: Started cri-containerd-c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf.scope - libcontainer container c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf. Aug 19 00:20:26.153201 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:26.184247 containerd[1542]: time="2025-08-19T00:20:26.184198377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-wx822,Uid:1495a0fe-e7d7-4006-86dc-aa41f51f3a3f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826\"" Aug 19 00:20:26.186843 containerd[1542]: time="2025-08-19T00:20:26.186704145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:20:26.190266 containerd[1542]: time="2025-08-19T00:20:26.190206395Z" level=info msg="StartContainer for \"c561b3b44276beaf185ce59c474fd8c1e75eff1a891d2316ee3d8b499ddeeebf\" returns successfully" Aug 19 00:20:27.004820 kubelet[2658]: E0819 00:20:27.004718 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:27.015189 kubelet[2658]: I0819 00:20:27.015072 2658 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d6sfn" podStartSLOduration=37.015052473 podStartE2EDuration="37.015052473s" podCreationTimestamp="2025-08-19 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:20:27.014583232 +0000 UTC m=+42.302653103" watchObservedRunningTime="2025-08-19 00:20:27.015052473 +0000 UTC m=+42.303122424" Aug 19 00:20:27.671910 systemd-networkd[1437]: cali4d90f941a63: Gained IPv6LL Aug 19 00:20:27.799970 systemd-networkd[1437]: calibe126b67b85: Gained IPv6LL Aug 19 00:20:27.805396 kubelet[2658]: E0819 00:20:27.805345 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:27.806498 containerd[1542]: time="2025-08-19T00:20:27.806131107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-9ff6x,Uid:2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:20:27.806498 containerd[1542]: time="2025-08-19T00:20:27.806204387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7v5f,Uid:b0272f1e-81be-4b55-962b-becad3f92145,Namespace:kube-system,Attempt:0,}" Aug 19 00:20:27.806498 containerd[1542]: time="2025-08-19T00:20:27.806342587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnt8g,Uid:f6e5453f-d978-4803-a9fd-45cd7fd5890f,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:27.807747 containerd[1542]: time="2025-08-19T00:20:27.806145707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h8ngp,Uid:b42bf112-ed43-4908-8f92-dbc623fd1e93,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:27.969679 containerd[1542]: time="2025-08-19T00:20:27.969556505Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:27.976323 containerd[1542]: time="2025-08-19T00:20:27.976240084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 19 00:20:27.977840 containerd[1542]: time="2025-08-19T00:20:27.977800529Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:27.984856 containerd[1542]: time="2025-08-19T00:20:27.984628509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:27.986162 containerd[1542]: time="2025-08-19T00:20:27.986120473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.799303688s" Aug 19 00:20:27.986162 containerd[1542]: time="2025-08-19T00:20:27.986159833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:20:27.991217 containerd[1542]: time="2025-08-19T00:20:27.991178368Z" level=info msg="CreateContainer within sandbox \"26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:20:28.006733 containerd[1542]: time="2025-08-19T00:20:28.004073565Z" level=info msg="Container f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051: CDI devices 
from CRI Config.CDIDevices: []" Aug 19 00:20:28.006115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463938916.mount: Deactivated successfully. Aug 19 00:20:28.010937 kubelet[2658]: E0819 00:20:28.010907 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:28.021454 containerd[1542]: time="2025-08-19T00:20:28.021408255Z" level=info msg="CreateContainer within sandbox \"26dc235e6c5b8515f4a8ca8b49e570ebb1b25144f881a0eacdbfdaaa22dd2826\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051\"" Aug 19 00:20:28.023666 systemd-networkd[1437]: cali90a5fe7ff02: Link UP Aug 19 00:20:28.024540 systemd-networkd[1437]: cali90a5fe7ff02: Gained carrier Aug 19 00:20:28.024990 containerd[1542]: time="2025-08-19T00:20:28.024955665Z" level=info msg="StartContainer for \"f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051\"" Aug 19 00:20:28.028900 containerd[1542]: time="2025-08-19T00:20:28.028847796Z" level=info msg="connecting to shim f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051" address="unix:///run/containerd/s/6b74498c6b8d54571f255ef65b18785aad06ffe454d98103f79fd15d6c3cca3a" protocol=ttrpc version=3 Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.912 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0 calico-apiserver-f655c676f- calico-apiserver 2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c 817 0 2025-08-19 00:19:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f655c676f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-f655c676f-9ff6x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90a5fe7ff02 [] [] }} ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.915 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.968 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" HandleID="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Workload="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.968 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" HandleID="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Workload="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f655c676f-9ff6x", "timestamp":"2025-08-19 00:20:27.968474901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:28.042472 containerd[1542]: 
2025-08-19 00:20:27.968 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.968 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.968 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.980 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.988 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.995 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:27.997 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.000 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.002 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.008 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13 Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.012 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" 
host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.018 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.018 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" host="localhost" Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.018 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:20:28.042472 containerd[1542]: 2025-08-19 00:20:28.018 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" HandleID="k8s-pod-network.839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Workload="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.021 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0", GenerateName:"calico-apiserver-f655c676f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f655c676f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f655c676f-9ff6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90a5fe7ff02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.021 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.021 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90a5fe7ff02 ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.025 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.026 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0", GenerateName:"calico-apiserver-f655c676f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f655c676f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13", Pod:"calico-apiserver-f655c676f-9ff6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90a5fe7ff02", MAC:"22:4c:04:9a:16:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.042998 containerd[1542]: 2025-08-19 00:20:28.038 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" Namespace="calico-apiserver" Pod="calico-apiserver-f655c676f-9ff6x" WorkloadEndpoint="localhost-k8s-calico--apiserver--f655c676f--9ff6x-eth0" Aug 19 00:20:28.055932 systemd[1]: Started cri-containerd-f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051.scope - libcontainer container f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051. Aug 19 00:20:28.062890 containerd[1542]: time="2025-08-19T00:20:28.062830372Z" level=info msg="connecting to shim 839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13" address="unix:///run/containerd/s/d947526ce61d243d756a73cda5880c3b14575ee3101632888cb3693852986afc" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:28.086060 systemd[1]: Started cri-containerd-839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13.scope - libcontainer container 839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13. 
Aug 19 00:20:28.102188 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:28.128029 containerd[1542]: time="2025-08-19T00:20:28.127987518Z" level=info msg="StartContainer for \"f9bd8d17fa71b3b4c61d13fdc9bbb5b4ee58e98ef336d4aa603097b4f9acc051\" returns successfully" Aug 19 00:20:28.134226 systemd-networkd[1437]: cali3d8d421dde4: Link UP Aug 19 00:20:28.135494 systemd-networkd[1437]: cali3d8d421dde4: Gained carrier Aug 19 00:20:28.150552 containerd[1542]: time="2025-08-19T00:20:28.150511462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f655c676f-9ff6x,Uid:2873fb9f-c1a3-4dd4-9a74-be8cf0edc28c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13\"" Aug 19 00:20:28.153438 containerd[1542]: time="2025-08-19T00:20:28.153403030Z" level=info msg="CreateContainer within sandbox \"839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:27.934 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0 coredns-668d6bf9bc- kube-system b0272f1e-81be-4b55-962b-becad3f92145 809 0 2025-08-19 00:19:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-v7v5f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3d8d421dde4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-" 
Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:27.934 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:27.978 [INFO][4540] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" HandleID="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Workload="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:27.979 [INFO][4540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" HandleID="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Workload="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-v7v5f", "timestamp":"2025-08-19 00:20:27.978852852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:27.979 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.018 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.019 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.081 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.087 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.100 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.102 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.105 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.105 [INFO][4540] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.107 [INFO][4540] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.112 [INFO][4540] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4540] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" host="localhost" Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:20:28.155356 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" HandleID="k8s-pod-network.8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Workload="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.124 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0272f1e-81be-4b55-962b-becad3f92145", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-v7v5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d8d421dde4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.125 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.126 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d8d421dde4 ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.136 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.136 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0272f1e-81be-4b55-962b-becad3f92145", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d", Pod:"coredns-668d6bf9bc-v7v5f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d8d421dde4", MAC:"da:6b:03:d2:c4:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.155913 containerd[1542]: 2025-08-19 00:20:28.148 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7v5f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7v5f-eth0" Aug 19 00:20:28.166548 containerd[1542]: time="2025-08-19T00:20:28.166414827Z" level=info msg="Container 1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:28.179186 containerd[1542]: time="2025-08-19T00:20:28.179143663Z" level=info msg="CreateContainer within sandbox \"839854f7e54b72072927a1e57a20b8e2220fe7d8bdb3dd1cc7642fb313aace13\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f\"" Aug 19 00:20:28.179917 containerd[1542]: time="2025-08-19T00:20:28.179840025Z" level=info msg="StartContainer for \"1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f\"" Aug 19 00:20:28.181494 containerd[1542]: time="2025-08-19T00:20:28.181463590Z" level=info msg="connecting to shim 1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f" address="unix:///run/containerd/s/d947526ce61d243d756a73cda5880c3b14575ee3101632888cb3693852986afc" protocol=ttrpc version=3 Aug 19 00:20:28.190061 containerd[1542]: time="2025-08-19T00:20:28.190013054Z" level=info msg="connecting to shim 8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d" address="unix:///run/containerd/s/8c3f2eb02bcea20cbf6273c1bffde6257bb48545f97517e1ff8cf0e351dc8a79" namespace=k8s.io protocol=ttrpc version=3 Aug 19 
00:20:28.207983 systemd[1]: Started cri-containerd-1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f.scope - libcontainer container 1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f. Aug 19 00:20:28.229131 systemd-networkd[1437]: calia006d1d719b: Link UP Aug 19 00:20:28.230328 systemd-networkd[1437]: calia006d1d719b: Gained carrier Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:27.930 [INFO][4473] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mnt8g-eth0 csi-node-driver- calico-system f6e5453f-d978-4803-a9fd-45cd7fd5890f 722 0 2025-08-19 00:20:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mnt8g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia006d1d719b [] [] }} ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:27.930 [INFO][4473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:27.988 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" HandleID="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" 
Workload="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:27.988 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" HandleID="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Workload="localhost-k8s-csi--node--driver--mnt8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137c30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mnt8g", "timestamp":"2025-08-19 00:20:27.988234879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:27.988 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.121 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.181 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.188 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.196 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.199 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.202 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.202 [INFO][4532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.204 [INFO][4532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423 Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.208 [INFO][4532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.214 [INFO][4532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.214 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" host="localhost" Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.215 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:20:28.254872 containerd[1542]: 2025-08-19 00:20:28.215 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" HandleID="k8s-pod-network.44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Workload="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.224 [INFO][4473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mnt8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6e5453f-d978-4803-a9fd-45cd7fd5890f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mnt8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia006d1d719b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.224 [INFO][4473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.224 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia006d1d719b ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.231 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.232 [INFO][4473] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" 
Namespace="calico-system" Pod="csi-node-driver-mnt8g" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mnt8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6e5453f-d978-4803-a9fd-45cd7fd5890f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423", Pod:"csi-node-driver-mnt8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia006d1d719b", MAC:"a6:6d:3a:d5:d7:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.255569 containerd[1542]: 2025-08-19 00:20:28.250 [INFO][4473] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" Namespace="calico-system" Pod="csi-node-driver-mnt8g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mnt8g-eth0" Aug 19 00:20:28.261990 systemd[1]: Started cri-containerd-8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d.scope - libcontainer container 8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d. Aug 19 00:20:28.278281 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:28.288113 containerd[1542]: time="2025-08-19T00:20:28.288066213Z" level=info msg="connecting to shim 44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423" address="unix:///run/containerd/s/b223a19b7ac49d970a894f9ccc2d7d7ec2556d1ad609fe9ef4b2d2a60b260c03" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:28.318655 containerd[1542]: time="2025-08-19T00:20:28.318599340Z" level=info msg="StartContainer for \"1831f0eca644ed81a3722050efab69df681729803097878d5924c01fcb2a7b0f\" returns successfully" Aug 19 00:20:28.330305 containerd[1542]: time="2025-08-19T00:20:28.330260493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7v5f,Uid:b0272f1e-81be-4b55-962b-becad3f92145,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d\"" Aug 19 00:20:28.334135 kubelet[2658]: E0819 00:20:28.332236 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:28.340120 systemd[1]: Started cri-containerd-44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423.scope - libcontainer container 44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423. 
Aug 19 00:20:28.346919 containerd[1542]: time="2025-08-19T00:20:28.346640300Z" level=info msg="CreateContainer within sandbox \"8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:20:28.349462 systemd-networkd[1437]: cali2f9aa34333d: Link UP Aug 19 00:20:28.351978 systemd-networkd[1437]: cali2f9aa34333d: Gained carrier Aug 19 00:20:28.361419 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:28.373795 containerd[1542]: time="2025-08-19T00:20:28.373722697Z" level=info msg="Container 7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:27.947 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0 goldmane-768f4c5c69- calico-system b42bf112-ed43-4908-8f92-dbc623fd1e93 819 0 2025-08-19 00:20:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-h8ngp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2f9aa34333d [] [] }} ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:27.948 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 
00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.005 [INFO][4552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" HandleID="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Workload="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.006 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" HandleID="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Workload="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000433870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-h8ngp", "timestamp":"2025-08-19 00:20:28.005940731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.008 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.214 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.214 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.282 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.296 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.302 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.305 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.311 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.311 [INFO][4552] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.315 [INFO][4552] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84 Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.320 [INFO][4552] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.328 [INFO][4552] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.328 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" host="localhost" Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.328 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:20:28.379249 containerd[1542]: 2025-08-19 00:20:28.328 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" HandleID="k8s-pod-network.926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Workload="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.336 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b42bf112-ed43-4908-8f92-dbc623fd1e93", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-h8ngp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f9aa34333d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.337 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.337 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f9aa34333d ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.351 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.354 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b42bf112-ed43-4908-8f92-dbc623fd1e93", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84", Pod:"goldmane-768f4c5c69-h8ngp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f9aa34333d", MAC:"36:d6:a8:c5:d6:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.379762 containerd[1542]: 2025-08-19 00:20:28.369 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" Namespace="calico-system" Pod="goldmane-768f4c5c69-h8ngp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--h8ngp-eth0" Aug 19 00:20:28.413995 containerd[1542]: time="2025-08-19T00:20:28.413952532Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-mnt8g,Uid:f6e5453f-d978-4803-a9fd-45cd7fd5890f,Namespace:calico-system,Attempt:0,} returns sandbox id \"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423\"" Aug 19 00:20:28.414604 containerd[1542]: time="2025-08-19T00:20:28.414543013Z" level=info msg="CreateContainer within sandbox \"8bdee54a6ebf14edf5ba0a227883a399422cd02ea0c06211b3324610e1b46d8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673\"" Aug 19 00:20:28.415919 containerd[1542]: time="2025-08-19T00:20:28.415890497Z" level=info msg="StartContainer for \"7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673\"" Aug 19 00:20:28.417283 containerd[1542]: time="2025-08-19T00:20:28.417253741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 00:20:28.417586 containerd[1542]: time="2025-08-19T00:20:28.417556622Z" level=info msg="connecting to shim 7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673" address="unix:///run/containerd/s/8c3f2eb02bcea20cbf6273c1bffde6257bb48545f97517e1ff8cf0e351dc8a79" protocol=ttrpc version=3 Aug 19 00:20:28.436983 systemd[1]: Started cri-containerd-7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673.scope - libcontainer container 7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673. 
Aug 19 00:20:28.505897 containerd[1542]: time="2025-08-19T00:20:28.505755873Z" level=info msg="StartContainer for \"7c0a632edc7f51159808cbba3c613a2c6e330fb9768ebf89fa213717cfa5e673\" returns successfully" Aug 19 00:20:28.526078 containerd[1542]: time="2025-08-19T00:20:28.526030690Z" level=info msg="connecting to shim 926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84" address="unix:///run/containerd/s/b36b4ab9d7a47546531806721762cc1b0a719c2b756263d5a086fd2aa51fd0ff" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:28.549956 systemd[1]: Started cri-containerd-926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84.scope - libcontainer container 926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84. Aug 19 00:20:28.562409 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:28.587815 containerd[1542]: time="2025-08-19T00:20:28.587087424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h8ngp,Uid:b42bf112-ed43-4908-8f92-dbc623fd1e93,Namespace:calico-system,Attempt:0,} returns sandbox id \"926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84\"" Aug 19 00:20:28.806427 containerd[1542]: time="2025-08-19T00:20:28.806242008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547fbbbbdf-q4hdg,Uid:6a38e80b-4f1a-4714-a905-dbc453cab4e0,Namespace:calico-system,Attempt:0,}" Aug 19 00:20:28.963394 systemd-networkd[1437]: cali3ed532e1554: Link UP Aug 19 00:20:28.964049 systemd-networkd[1437]: cali3ed532e1554: Gained carrier Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.861 [INFO][4888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0 calico-kube-controllers-547fbbbbdf- calico-system 6a38e80b-4f1a-4714-a905-dbc453cab4e0 812 0 2025-08-19 00:20:03 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547fbbbbdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-547fbbbbdf-q4hdg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3ed532e1554 [] [] }} ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.861 [INFO][4888] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.910 [INFO][4903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" HandleID="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Workload="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.911 [INFO][4903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" HandleID="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Workload="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-547fbbbbdf-q4hdg", "timestamp":"2025-08-19 00:20:28.910926266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.911 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.911 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.911 [INFO][4903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.924 [INFO][4903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.930 [INFO][4903] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.935 [INFO][4903] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.937 [INFO][4903] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.942 [INFO][4903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.942 [INFO][4903] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.944 [INFO][4903] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.949 [INFO][4903] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.956 [INFO][4903] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.956 [INFO][4903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" host="localhost" Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.956 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:20:28.980935 containerd[1542]: 2025-08-19 00:20:28.956 [INFO][4903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" HandleID="k8s-pod-network.48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Workload="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.960 [INFO][4888] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0", GenerateName:"calico-kube-controllers-547fbbbbdf-", Namespace:"calico-system", SelfLink:"", UID:"6a38e80b-4f1a-4714-a905-dbc453cab4e0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547fbbbbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-547fbbbbdf-q4hdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ed532e1554", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.960 [INFO][4888] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.960 [INFO][4888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ed532e1554 ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.964 [INFO][4888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.964 [INFO][4888] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0", GenerateName:"calico-kube-controllers-547fbbbbdf-", Namespace:"calico-system", SelfLink:"", UID:"6a38e80b-4f1a-4714-a905-dbc453cab4e0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547fbbbbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a", Pod:"calico-kube-controllers-547fbbbbdf-q4hdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ed532e1554", MAC:"a6:10:cf:da:38:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:20:28.984340 containerd[1542]: 2025-08-19 00:20:28.976 [INFO][4888] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" Namespace="calico-system" Pod="calico-kube-controllers-547fbbbbdf-q4hdg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--547fbbbbdf--q4hdg-eth0" Aug 19 00:20:29.054122 containerd[1542]: time="2025-08-19T00:20:29.054061949Z" level=info msg="connecting to shim 
48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a" address="unix:///run/containerd/s/3394c697f039842d9e8ab7d29291235e6ec7dd101e67efc13697f32383aee25c" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:20:29.095154 kubelet[2658]: E0819 00:20:29.094382 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:29.105669 kubelet[2658]: I0819 00:20:29.105586 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f655c676f-wx822" podStartSLOduration=29.304510798 podStartE2EDuration="31.105566131s" podCreationTimestamp="2025-08-19 00:19:58 +0000 UTC" firstStartedPulling="2025-08-19 00:20:26.186092183 +0000 UTC m=+41.474162054" lastFinishedPulling="2025-08-19 00:20:27.987147556 +0000 UTC m=+43.275217387" observedRunningTime="2025-08-19 00:20:29.077993855 +0000 UTC m=+44.366063726" watchObservedRunningTime="2025-08-19 00:20:29.105566131 +0000 UTC m=+44.393636002" Aug 19 00:20:29.122792 kubelet[2658]: I0819 00:20:29.122495 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v7v5f" podStartSLOduration=39.122477738 podStartE2EDuration="39.122477738s" podCreationTimestamp="2025-08-19 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:20:29.122101497 +0000 UTC m=+44.410171368" watchObservedRunningTime="2025-08-19 00:20:29.122477738 +0000 UTC m=+44.410547609" Aug 19 00:20:29.122792 kubelet[2658]: I0819 00:20:29.122728 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f655c676f-9ff6x" podStartSLOduration=31.122723699 podStartE2EDuration="31.122723699s" podCreationTimestamp="2025-08-19 00:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:20:29.107990738 +0000 UTC m=+44.396060609" watchObservedRunningTime="2025-08-19 00:20:29.122723699 +0000 UTC m=+44.410793530" Aug 19 00:20:29.149004 systemd[1]: Started cri-containerd-48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a.scope - libcontainer container 48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a. Aug 19 00:20:29.205414 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:20:29.253086 containerd[1542]: time="2025-08-19T00:20:29.253042780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547fbbbbdf-q4hdg,Uid:6a38e80b-4f1a-4714-a905-dbc453cab4e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a\"" Aug 19 00:20:29.655990 systemd-networkd[1437]: cali90a5fe7ff02: Gained IPv6LL Aug 19 00:20:29.721139 systemd-networkd[1437]: cali2f9aa34333d: Gained IPv6LL Aug 19 00:20:29.783922 systemd-networkd[1437]: cali3d8d421dde4: Gained IPv6LL Aug 19 00:20:29.847898 systemd-networkd[1437]: calia006d1d719b: Gained IPv6LL Aug 19 00:20:29.909384 containerd[1542]: time="2025-08-19T00:20:29.909253877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:29.911019 containerd[1542]: time="2025-08-19T00:20:29.910980961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 19 00:20:29.912432 containerd[1542]: time="2025-08-19T00:20:29.912390925Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:29.918784 containerd[1542]: time="2025-08-19T00:20:29.918745623Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:29.920057 containerd[1542]: time="2025-08-19T00:20:29.919921466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.502632005s" Aug 19 00:20:29.920057 containerd[1542]: time="2025-08-19T00:20:29.919960306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 19 00:20:29.921526 containerd[1542]: time="2025-08-19T00:20:29.921208230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 00:20:29.923430 containerd[1542]: time="2025-08-19T00:20:29.923316396Z" level=info msg="CreateContainer within sandbox \"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 00:20:29.946947 containerd[1542]: time="2025-08-19T00:20:29.946902501Z" level=info msg="Container 87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:29.953260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260132155.mount: Deactivated successfully. 
Aug 19 00:20:29.959146 containerd[1542]: time="2025-08-19T00:20:29.959088135Z" level=info msg="CreateContainer within sandbox \"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d\"" Aug 19 00:20:29.963131 containerd[1542]: time="2025-08-19T00:20:29.963066546Z" level=info msg="StartContainer for \"87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d\"" Aug 19 00:20:29.969525 containerd[1542]: time="2025-08-19T00:20:29.969484123Z" level=info msg="connecting to shim 87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d" address="unix:///run/containerd/s/b223a19b7ac49d970a894f9ccc2d7d7ec2556d1ad609fe9ef4b2d2a60b260c03" protocol=ttrpc version=3 Aug 19 00:20:30.001036 systemd[1]: Started cri-containerd-87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d.scope - libcontainer container 87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d. Aug 19 00:20:30.051438 containerd[1542]: time="2025-08-19T00:20:30.051135146Z" level=info msg="StartContainer for \"87c61b6a713cc75a69c0fed8a4161d62e791ac64910b7329210db4a2ac8f0f9d\" returns successfully" Aug 19 00:20:30.107161 kubelet[2658]: I0819 00:20:30.106754 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:20:30.108157 kubelet[2658]: E0819 00:20:30.108108 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:30.892067 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:44908.service - OpenSSH per-connection server daemon (10.0.0.1:44908). 
Aug 19 00:20:30.936548 systemd-networkd[1437]: cali3ed532e1554: Gained IPv6LL Aug 19 00:20:30.974553 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 44908 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:20:30.976337 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:20:30.982028 systemd-logind[1511]: New session 9 of user core. Aug 19 00:20:30.992011 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 00:20:31.113407 kubelet[2658]: E0819 00:20:31.113371 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:20:31.326466 sshd[5020]: Connection closed by 10.0.0.1 port 44908 Aug 19 00:20:31.326815 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Aug 19 00:20:31.331695 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:44908.service: Deactivated successfully. Aug 19 00:20:31.333737 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 00:20:31.335275 systemd-logind[1511]: Session 9 logged out. Waiting for processes to exit. Aug 19 00:20:31.337964 systemd-logind[1511]: Removed session 9. Aug 19 00:20:31.594486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108177709.mount: Deactivated successfully. 
Aug 19 00:20:32.046642 containerd[1542]: time="2025-08-19T00:20:32.046466444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:32.047397 containerd[1542]: time="2025-08-19T00:20:32.047318206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 19 00:20:32.048226 containerd[1542]: time="2025-08-19T00:20:32.048190608Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:32.050261 containerd[1542]: time="2025-08-19T00:20:32.050218494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:32.051584 containerd[1542]: time="2025-08-19T00:20:32.051534577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.130290587s" Aug 19 00:20:32.051584 containerd[1542]: time="2025-08-19T00:20:32.051573137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 19 00:20:32.054636 containerd[1542]: time="2025-08-19T00:20:32.054225544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 00:20:32.055664 containerd[1542]: time="2025-08-19T00:20:32.055590747Z" level=info msg="CreateContainer within sandbox 
\"926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 00:20:32.074181 containerd[1542]: time="2025-08-19T00:20:32.072072949Z" level=info msg="Container dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:32.075722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911631206.mount: Deactivated successfully. Aug 19 00:20:32.088865 containerd[1542]: time="2025-08-19T00:20:32.088817552Z" level=info msg="CreateContainer within sandbox \"926c03ea627ef263b68166a66ca56840292d53f32e7f0eb928495b78357a0b84\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\"" Aug 19 00:20:32.089478 containerd[1542]: time="2025-08-19T00:20:32.089441834Z" level=info msg="StartContainer for \"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\"" Aug 19 00:20:32.090745 containerd[1542]: time="2025-08-19T00:20:32.090702557Z" level=info msg="connecting to shim dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58" address="unix:///run/containerd/s/b36b4ab9d7a47546531806721762cc1b0a719c2b756263d5a086fd2aa51fd0ff" protocol=ttrpc version=3 Aug 19 00:20:32.117040 systemd[1]: Started cri-containerd-dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58.scope - libcontainer container dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58. 
Aug 19 00:20:32.163323 containerd[1542]: time="2025-08-19T00:20:32.163284982Z" level=info msg="StartContainer for \"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" returns successfully" Aug 19 00:20:34.019822 containerd[1542]: time="2025-08-19T00:20:34.019707972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:34.020789 containerd[1542]: time="2025-08-19T00:20:34.020744534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 19 00:20:34.022320 containerd[1542]: time="2025-08-19T00:20:34.021951457Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:34.027705 containerd[1542]: time="2025-08-19T00:20:34.027653231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:34.028858 containerd[1542]: time="2025-08-19T00:20:34.028806314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.97454125s" Aug 19 00:20:34.028858 containerd[1542]: time="2025-08-19T00:20:34.028847794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 19 00:20:34.030836 containerd[1542]: time="2025-08-19T00:20:34.030800599Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 19 00:20:34.041511 containerd[1542]: time="2025-08-19T00:20:34.040066221Z" level=info msg="CreateContainer within sandbox \"48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 19 00:20:34.091978 containerd[1542]: time="2025-08-19T00:20:34.091914947Z" level=info msg="Container ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:34.103716 containerd[1542]: time="2025-08-19T00:20:34.103597895Z" level=info msg="CreateContainer within sandbox \"48c19c2c502633eee2190d8c10c5cf4b2833133675607f6ebd94de777faa9a8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\"" Aug 19 00:20:34.104878 containerd[1542]: time="2025-08-19T00:20:34.104843338Z" level=info msg="StartContainer for \"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\"" Aug 19 00:20:34.107249 containerd[1542]: time="2025-08-19T00:20:34.107130144Z" level=info msg="connecting to shim ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051" address="unix:///run/containerd/s/3394c697f039842d9e8ab7d29291235e6ec7dd101e67efc13697f32383aee25c" protocol=ttrpc version=3 Aug 19 00:20:34.133621 kubelet[2658]: I0819 00:20:34.133578 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:20:34.135002 systemd[1]: Started cri-containerd-ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051.scope - libcontainer container ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051. 
Aug 19 00:20:34.183322 containerd[1542]: time="2025-08-19T00:20:34.183285448Z" level=info msg="StartContainer for \"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\" returns successfully" Aug 19 00:20:34.371751 containerd[1542]: time="2025-08-19T00:20:34.371711064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" id:\"65b7071991d7d01a835d5d8eeea9c3678a480aa62dc5883d2f9fea5b1fa7997c\" pid:5141 exit_status:1 exited_at:{seconds:1755562834 nanos:367810654}" Aug 19 00:20:34.447277 containerd[1542]: time="2025-08-19T00:20:34.447237887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" id:\"1618984744aae4f57bc211fc202fbf022ba7cb894ec13914fe4c7984a5ca913f\" pid:5168 exit_status:1 exited_at:{seconds:1755562834 nanos:446948646}" Aug 19 00:20:35.172370 kubelet[2658]: I0819 00:20:35.171102 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-h8ngp" podStartSLOduration=29.708525721 podStartE2EDuration="33.171084908s" podCreationTimestamp="2025-08-19 00:20:02 +0000 UTC" firstStartedPulling="2025-08-19 00:20:28.590063753 +0000 UTC m=+43.878133624" lastFinishedPulling="2025-08-19 00:20:32.05262294 +0000 UTC m=+47.340692811" observedRunningTime="2025-08-19 00:20:33.189006788 +0000 UTC m=+48.477076659" watchObservedRunningTime="2025-08-19 00:20:35.171084908 +0000 UTC m=+50.459154779" Aug 19 00:20:35.238736 containerd[1542]: time="2025-08-19T00:20:35.238608547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\" id:\"583a339bc1dc6607364fcaa7a4328a52fe867a9ef74a849f8ee8b204ed2e4e47\" pid:5220 exited_at:{seconds:1755562835 nanos:237999026}" Aug 19 00:20:35.262952 containerd[1542]: time="2025-08-19T00:20:35.262902284Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:35.263704 containerd[1542]: time="2025-08-19T00:20:35.263431326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 19 00:20:35.265628 containerd[1542]: time="2025-08-19T00:20:35.265572451Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:35.268858 kubelet[2658]: I0819 00:20:35.268744 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-547fbbbbdf-q4hdg" podStartSLOduration=27.493475005 podStartE2EDuration="32.268723178s" podCreationTimestamp="2025-08-19 00:20:03 +0000 UTC" firstStartedPulling="2025-08-19 00:20:29.254649784 +0000 UTC m=+44.542719615" lastFinishedPulling="2025-08-19 00:20:34.029897917 +0000 UTC m=+49.317967788" observedRunningTime="2025-08-19 00:20:35.172698112 +0000 UTC m=+50.460767983" watchObservedRunningTime="2025-08-19 00:20:35.268723178 +0000 UTC m=+50.556793049" Aug 19 00:20:35.279092 containerd[1542]: time="2025-08-19T00:20:35.278906442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:20:35.279903 containerd[1542]: time="2025-08-19T00:20:35.279867124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 
1.249025285s" Aug 19 00:20:35.279973 containerd[1542]: time="2025-08-19T00:20:35.279906284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 19 00:20:35.282643 containerd[1542]: time="2025-08-19T00:20:35.282269410Z" level=info msg="CreateContainer within sandbox \"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 19 00:20:35.288059 containerd[1542]: time="2025-08-19T00:20:35.288018983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" id:\"caf4e1d32f0a394025aaf9c20a38173bc8eae7761c729087a99ee6b9d061fc24\" pid:5210 exit_status:1 exited_at:{seconds:1755562835 nanos:286374140}" Aug 19 00:20:35.299778 containerd[1542]: time="2025-08-19T00:20:35.299711731Z" level=info msg="Container c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:20:35.324722 containerd[1542]: time="2025-08-19T00:20:35.324675710Z" level=info msg="CreateContainer within sandbox \"44834f223341a259a900cc6a1c9fdf280415cc15fb5db9b741b18f977a095423\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b\"" Aug 19 00:20:35.325303 containerd[1542]: time="2025-08-19T00:20:35.325259911Z" level=info msg="StartContainer for \"c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b\"" Aug 19 00:20:35.327166 containerd[1542]: time="2025-08-19T00:20:35.327133076Z" level=info msg="connecting to shim c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b" address="unix:///run/containerd/s/b223a19b7ac49d970a894f9ccc2d7d7ec2556d1ad609fe9ef4b2d2a60b260c03" protocol=ttrpc version=3 Aug 19 00:20:35.361964 systemd[1]: 
Started cri-containerd-c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b.scope - libcontainer container c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b. Aug 19 00:20:35.415925 containerd[1542]: time="2025-08-19T00:20:35.415883685Z" level=info msg="StartContainer for \"c368cb84bc4961dac3aa5b34a61b7d25a763c1476f3270ad54724ddf5495bf6b\" returns successfully" Aug 19 00:20:35.891904 kubelet[2658]: I0819 00:20:35.891793 2658 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 19 00:20:35.891904 kubelet[2658]: I0819 00:20:35.891840 2658 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 19 00:20:36.340723 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:38174.service - OpenSSH per-connection server daemon (10.0.0.1:38174). Aug 19 00:20:36.428723 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 38174 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:20:36.431053 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:20:36.435711 systemd-logind[1511]: New session 10 of user core. Aug 19 00:20:36.447042 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 00:20:36.710740 sshd[5273]: Connection closed by 10.0.0.1 port 38174 Aug 19 00:20:36.711218 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Aug 19 00:20:36.725219 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:38174.service: Deactivated successfully. Aug 19 00:20:36.727304 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 00:20:36.728996 systemd-logind[1511]: Session 10 logged out. Waiting for processes to exit. 
Aug 19 00:20:36.730923 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:38184.service - OpenSSH per-connection server daemon (10.0.0.1:38184). Aug 19 00:20:36.733221 systemd-logind[1511]: Removed session 10. Aug 19 00:20:36.798658 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 38184 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:20:36.799937 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:20:36.804423 systemd-logind[1511]: New session 11 of user core. Aug 19 00:20:36.813965 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 00:20:37.089199 sshd[5293]: Connection closed by 10.0.0.1 port 38184 Aug 19 00:20:37.090162 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Aug 19 00:20:37.103192 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:38184.service: Deactivated successfully. Aug 19 00:20:37.106480 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 00:20:37.108563 systemd-logind[1511]: Session 11 logged out. Waiting for processes to exit. Aug 19 00:20:37.111859 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:38200.service - OpenSSH per-connection server daemon (10.0.0.1:38200). Aug 19 00:20:37.113341 systemd-logind[1511]: Removed session 11. Aug 19 00:20:37.174815 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 38200 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:20:37.176568 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:20:37.185289 systemd-logind[1511]: New session 12 of user core. Aug 19 00:20:37.195975 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 19 00:20:37.380021 sshd[5308]: Connection closed by 10.0.0.1 port 38200
Aug 19 00:20:37.380872 sshd-session[5305]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:37.385227 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:38200.service: Deactivated successfully.
Aug 19 00:20:37.389269 systemd[1]: session-12.scope: Deactivated successfully.
Aug 19 00:20:37.390703 systemd-logind[1511]: Session 12 logged out. Waiting for processes to exit.
Aug 19 00:20:37.392359 systemd-logind[1511]: Removed session 12.
Aug 19 00:20:42.395031 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:38208.service - OpenSSH per-connection server daemon (10.0.0.1:38208).
Aug 19 00:20:42.459900 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:42.461081 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:42.465074 systemd-logind[1511]: New session 13 of user core.
Aug 19 00:20:42.474955 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 19 00:20:42.618616 sshd[5338]: Connection closed by 10.0.0.1 port 38208
Aug 19 00:20:42.620442 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:42.625054 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:38208.service: Deactivated successfully.
Aug 19 00:20:42.626665 systemd[1]: session-13.scope: Deactivated successfully.
Aug 19 00:20:42.629608 systemd-logind[1511]: Session 13 logged out. Waiting for processes to exit.
Aug 19 00:20:42.631271 systemd-logind[1511]: Removed session 13.
Aug 19 00:20:47.634661 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:33532.service - OpenSSH per-connection server daemon (10.0.0.1:33532).
Aug 19 00:20:47.708479 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 33532 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:47.709945 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:47.714258 systemd-logind[1511]: New session 14 of user core.
Aug 19 00:20:47.724968 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 19 00:20:47.854310 sshd[5356]: Connection closed by 10.0.0.1 port 33532
Aug 19 00:20:47.854907 sshd-session[5353]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:47.858981 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:33532.service: Deactivated successfully.
Aug 19 00:20:47.861177 systemd[1]: session-14.scope: Deactivated successfully.
Aug 19 00:20:47.862059 systemd-logind[1511]: Session 14 logged out. Waiting for processes to exit.
Aug 19 00:20:47.863422 systemd-logind[1511]: Removed session 14.
Aug 19 00:20:52.870048 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:56668.service - OpenSSH per-connection server daemon (10.0.0.1:56668).
Aug 19 00:20:52.947061 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 56668 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:52.948500 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:52.954630 systemd-logind[1511]: New session 15 of user core.
Aug 19 00:20:52.966984 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 19 00:20:53.143402 sshd[5378]: Connection closed by 10.0.0.1 port 56668
Aug 19 00:20:53.143969 sshd-session[5375]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:53.149766 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:56668.service: Deactivated successfully.
Aug 19 00:20:53.151907 systemd[1]: session-15.scope: Deactivated successfully.
Aug 19 00:20:53.153357 systemd-logind[1511]: Session 15 logged out. Waiting for processes to exit.
Aug 19 00:20:53.154683 systemd-logind[1511]: Removed session 15.
Aug 19 00:20:54.155849 containerd[1542]: time="2025-08-19T00:20:54.155806797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" id:\"076b3db0658f0c35bd4fd880bc91209dc8e075ab2f4a2a0db74216bc77af6884\" pid:5403 exited_at:{seconds:1755562854 nanos:155460637}"
Aug 19 00:20:54.504652 containerd[1542]: time="2025-08-19T00:20:54.504487469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e504bccd983c5c72b0063d4bfaa4743aefbb35787a25f3a71d3c65f6b614b2b\" id:\"6ec0908d2251c63bfdb13b32d83a25eeba72022f90464855b1f6552b9b674f58\" pid:5426 exited_at:{seconds:1755562854 nanos:496548778}"
Aug 19 00:20:58.172604 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:56682.service - OpenSSH per-connection server daemon (10.0.0.1:56682).
Aug 19 00:20:58.267420 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 56682 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:58.269903 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:58.275236 systemd-logind[1511]: New session 16 of user core.
Aug 19 00:20:58.285976 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 19 00:20:58.453405 sshd[5443]: Connection closed by 10.0.0.1 port 56682
Aug 19 00:20:58.455109 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:58.466622 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:56682.service: Deactivated successfully.
Aug 19 00:20:58.469117 systemd[1]: session-16.scope: Deactivated successfully.
Aug 19 00:20:58.470208 systemd-logind[1511]: Session 16 logged out. Waiting for processes to exit.
Aug 19 00:20:58.473284 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698).
Aug 19 00:20:58.474870 systemd-logind[1511]: Removed session 16.
Aug 19 00:20:58.551706 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:58.553068 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:58.557850 systemd-logind[1511]: New session 17 of user core.
Aug 19 00:20:58.565954 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 19 00:20:58.807955 sshd[5460]: Connection closed by 10.0.0.1 port 56698
Aug 19 00:20:58.809035 sshd-session[5457]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:58.823512 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:56698.service: Deactivated successfully.
Aug 19 00:20:58.827532 systemd[1]: session-17.scope: Deactivated successfully.
Aug 19 00:20:58.828871 systemd-logind[1511]: Session 17 logged out. Waiting for processes to exit.
Aug 19 00:20:58.833215 systemd-logind[1511]: Removed session 17.
Aug 19 00:20:58.834374 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714).
Aug 19 00:20:58.892335 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:58.894148 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:58.901069 systemd-logind[1511]: New session 18 of user core.
Aug 19 00:20:58.908079 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 19 00:20:59.667442 kubelet[2658]: I0819 00:20:59.667155 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 19 00:20:59.719493 kubelet[2658]: I0819 00:20:59.719405 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mnt8g" podStartSLOduration=49.855760462 podStartE2EDuration="56.719387928s" podCreationTimestamp="2025-08-19 00:20:03 +0000 UTC" firstStartedPulling="2025-08-19 00:20:28.41701774 +0000 UTC m=+43.705087611" lastFinishedPulling="2025-08-19 00:20:35.280645246 +0000 UTC m=+50.568715077" observedRunningTime="2025-08-19 00:20:36.158724746 +0000 UTC m=+51.446794657" watchObservedRunningTime="2025-08-19 00:20:59.719387928 +0000 UTC m=+75.007457799"
Aug 19 00:20:59.722371 sshd[5477]: Connection closed by 10.0.0.1 port 56714
Aug 19 00:20:59.723299 sshd-session[5474]: pam_unix(sshd:session): session closed for user core
Aug 19 00:20:59.739531 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:56714.service: Deactivated successfully.
Aug 19 00:20:59.742625 systemd[1]: session-18.scope: Deactivated successfully.
Aug 19 00:20:59.744168 systemd-logind[1511]: Session 18 logged out. Waiting for processes to exit.
Aug 19 00:20:59.747416 systemd-logind[1511]: Removed session 18.
Aug 19 00:20:59.751348 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730).
Aug 19 00:20:59.823385 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:20:59.824970 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:20:59.831227 systemd-logind[1511]: New session 19 of user core.
Aug 19 00:20:59.841029 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 19 00:21:00.234170 sshd[5500]: Connection closed by 10.0.0.1 port 56730
Aug 19 00:21:00.234092 sshd-session[5495]: pam_unix(sshd:session): session closed for user core
Aug 19 00:21:00.244153 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:56730.service: Deactivated successfully.
Aug 19 00:21:00.246586 systemd[1]: session-19.scope: Deactivated successfully.
Aug 19 00:21:00.248243 systemd-logind[1511]: Session 19 logged out. Waiting for processes to exit.
Aug 19 00:21:00.253253 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:56742.service - OpenSSH per-connection server daemon (10.0.0.1:56742).
Aug 19 00:21:00.254497 systemd-logind[1511]: Removed session 19.
Aug 19 00:21:00.315331 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 56742 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:21:00.317461 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:21:00.326011 systemd-logind[1511]: New session 20 of user core.
Aug 19 00:21:00.333010 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 19 00:21:00.503512 sshd[5521]: Connection closed by 10.0.0.1 port 56742
Aug 19 00:21:00.504181 sshd-session[5518]: pam_unix(sshd:session): session closed for user core
Aug 19 00:21:00.510099 systemd-logind[1511]: Session 20 logged out. Waiting for processes to exit.
Aug 19 00:21:00.510286 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:56742.service: Deactivated successfully.
Aug 19 00:21:00.514001 systemd[1]: session-20.scope: Deactivated successfully.
Aug 19 00:21:00.515326 systemd-logind[1511]: Removed session 20.
Aug 19 00:21:01.806011 kubelet[2658]: E0819 00:21:01.805959 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:21:05.205631 containerd[1542]: time="2025-08-19T00:21:05.205566794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\" id:\"61e481f972136abc323dcfaad9dd18c65fa1dad2fc963f328b37d6ecbb5489a9\" pid:5560 exited_at:{seconds:1755562865 nanos:205028393}"
Aug 19 00:21:05.244500 containerd[1542]: time="2025-08-19T00:21:05.244422679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd74b3558333dc3eff7aaf506b1d141398df613fafb16f9bccfe082a317e5b58\" id:\"85bf2a392040c408d12ae41e7a67d83634a95a79af2c3314351f655a47b2819e\" pid:5561 exited_at:{seconds:1755562865 nanos:244112558}"
Aug 19 00:21:05.518889 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:42240.service - OpenSSH per-connection server daemon (10.0.0.1:42240).
Aug 19 00:21:05.609536 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 42240 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:21:05.611020 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:21:05.616293 systemd-logind[1511]: New session 21 of user core.
Aug 19 00:21:05.623984 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 19 00:21:05.797766 sshd[5587]: Connection closed by 10.0.0.1 port 42240
Aug 19 00:21:05.798792 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Aug 19 00:21:05.804858 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:42240.service: Deactivated successfully.
Aug 19 00:21:05.807100 systemd[1]: session-21.scope: Deactivated successfully.
Aug 19 00:21:05.810202 systemd-logind[1511]: Session 21 logged out. Waiting for processes to exit.
Aug 19 00:21:05.812409 systemd-logind[1511]: Removed session 21.
Aug 19 00:21:10.811685 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:42256.service - OpenSSH per-connection server daemon (10.0.0.1:42256).
Aug 19 00:21:10.888851 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 42256 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:21:10.890461 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:21:10.895995 systemd-logind[1511]: New session 22 of user core.
Aug 19 00:21:10.905986 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 19 00:21:11.187359 sshd[5605]: Connection closed by 10.0.0.1 port 42256
Aug 19 00:21:11.187711 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Aug 19 00:21:11.195107 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:42256.service: Deactivated successfully.
Aug 19 00:21:11.196951 systemd[1]: session-22.scope: Deactivated successfully.
Aug 19 00:21:11.201259 systemd-logind[1511]: Session 22 logged out. Waiting for processes to exit.
Aug 19 00:21:11.202352 systemd-logind[1511]: Removed session 22.
Aug 19 00:21:14.805833 kubelet[2658]: E0819 00:21:14.805729 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:21:15.421964 containerd[1542]: time="2025-08-19T00:21:15.421919535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee86ce9b6bf5e131b5ee1c721e5f370e05174748000dac1c50e70582a2416051\" id:\"0a98059acdba801518e113f70d5450af5a3f034425875f15418ca3e4b535f3a3\" pid:5630 exited_at:{seconds:1755562875 nanos:421570654}"
Aug 19 00:21:15.805521 kubelet[2658]: E0819 00:21:15.805475 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:21:16.207332 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042).
Aug 19 00:21:16.280793 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:21:16.282076 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:21:16.286367 systemd-logind[1511]: New session 23 of user core.
Aug 19 00:21:16.300989 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 19 00:21:16.453587 sshd[5644]: Connection closed by 10.0.0.1 port 51042
Aug 19 00:21:16.454321 sshd-session[5641]: pam_unix(sshd:session): session closed for user core
Aug 19 00:21:16.458450 systemd-logind[1511]: Session 23 logged out. Waiting for processes to exit.
Aug 19 00:21:16.458984 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:51042.service: Deactivated successfully.
Aug 19 00:21:16.461758 systemd[1]: session-23.scope: Deactivated successfully.
Aug 19 00:21:16.464373 systemd-logind[1511]: Removed session 23.