Jul 12 09:34:31.820002 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 09:34:31.820023 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat Jul 12 08:24:03 -00 2025
Jul 12 09:34:31.820032 kernel: KASLR enabled
Jul 12 09:34:31.820037 kernel: efi: EFI v2.7 by EDK II
Jul 12 09:34:31.820043 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 12 09:34:31.820048 kernel: random: crng init done
Jul 12 09:34:31.820054 kernel: secureboot: Secure boot disabled
Jul 12 09:34:31.820060 kernel: ACPI: Early table checksum verification disabled
Jul 12 09:34:31.820066 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 12 09:34:31.820073 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 09:34:31.820079 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820092 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820098 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820104 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820111 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820119 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820125 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820131 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820137 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 09:34:31.820143 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 09:34:31.820148 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 12 09:34:31.820155 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 09:34:31.820160 kernel: NODE_DATA(0) allocated [mem 0xdc964a00-0xdc96bfff]
Jul 12 09:34:31.820166 kernel: Zone ranges:
Jul 12 09:34:31.820172 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 09:34:31.820179 kernel: DMA32 empty
Jul 12 09:34:31.820185 kernel: Normal empty
Jul 12 09:34:31.820191 kernel: Device empty
Jul 12 09:34:31.820197 kernel: Movable zone start for each node
Jul 12 09:34:31.820203 kernel: Early memory node ranges
Jul 12 09:34:31.820209 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 12 09:34:31.820215 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 12 09:34:31.820221 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 12 09:34:31.820226 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 12 09:34:31.820232 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 12 09:34:31.820238 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 12 09:34:31.820244 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 12 09:34:31.820251 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 12 09:34:31.820257 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 12 09:34:31.820263 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 09:34:31.820272 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 09:34:31.820278 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 09:34:31.820284 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 09:34:31.820292 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 09:34:31.820298 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 09:34:31.820305 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 12 09:34:31.820311 kernel: psci: probing for conduit method from ACPI.
Jul 12 09:34:31.820317 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 09:34:31.820323 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 09:34:31.820330 kernel: psci: Trusted OS migration not required
Jul 12 09:34:31.820336 kernel: psci: SMC Calling Convention v1.1
Jul 12 09:34:31.820342 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 09:34:31.820349 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 12 09:34:31.820356 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 12 09:34:31.820362 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 09:34:31.820369 kernel: Detected PIPT I-cache on CPU0
Jul 12 09:34:31.820375 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 09:34:31.820381 kernel: CPU features: detected: Spectre-v4
Jul 12 09:34:31.820388 kernel: CPU features: detected: Spectre-BHB
Jul 12 09:34:31.820394 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 09:34:31.820400 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 09:34:31.820407 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 09:34:31.820413 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 09:34:31.820419 kernel: alternatives: applying boot alternatives
Jul 12 09:34:31.820427 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2eed6122ab9d95fa96c8f5511b96c1220a0caf18bbf7b84035ef573d9ba90496
Jul 12 09:34:31.820435 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 09:34:31.820441 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 09:34:31.820448 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 09:34:31.820454 kernel: Fallback order for Node 0: 0
Jul 12 09:34:31.820460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 12 09:34:31.820466 kernel: Policy zone: DMA
Jul 12 09:34:31.820473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 09:34:31.820479 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 12 09:34:31.820485 kernel: software IO TLB: area num 4.
Jul 12 09:34:31.820492 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 12 09:34:31.820498 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 12 09:34:31.820506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 09:34:31.820512 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 09:34:31.820519 kernel: rcu: RCU event tracing is enabled.
Jul 12 09:34:31.820526 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 09:34:31.820532 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 09:34:31.820539 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 09:34:31.820545 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 09:34:31.820551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 09:34:31.820558 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 09:34:31.820564 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 09:34:31.820571 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 09:34:31.820578 kernel: GICv3: 256 SPIs implemented
Jul 12 09:34:31.820585 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 09:34:31.820591 kernel: Root IRQ handler: gic_handle_irq
Jul 12 09:34:31.820598 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 09:34:31.820604 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 12 09:34:31.820610 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 09:34:31.820616 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 09:34:31.820623 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 09:34:31.820629 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 12 09:34:31.820636 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 12 09:34:31.820642 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 12 09:34:31.820649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 09:34:31.820656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 09:34:31.820663 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 09:34:31.820669 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 09:34:31.820676 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 09:34:31.820682 kernel: arm-pv: using stolen time PV
Jul 12 09:34:31.820689 kernel: Console: colour dummy device 80x25
Jul 12 09:34:31.820695 kernel: ACPI: Core revision 20240827
Jul 12 09:34:31.820702 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 09:34:31.820708 kernel: pid_max: default: 32768 minimum: 301
Jul 12 09:34:31.820715 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 09:34:31.820723 kernel: landlock: Up and running.
Jul 12 09:34:31.820729 kernel: SELinux: Initializing.
Jul 12 09:34:31.820736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 09:34:31.820742 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 09:34:31.820749 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 09:34:31.820756 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 09:34:31.820762 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 09:34:31.820768 kernel: Remapping and enabling EFI services.
Jul 12 09:34:31.820775 kernel: smp: Bringing up secondary CPUs ...
Jul 12 09:34:31.820787 kernel: Detected PIPT I-cache on CPU1
Jul 12 09:34:31.820794 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 09:34:31.820801 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 12 09:34:31.820829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 09:34:31.820837 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 09:34:31.820844 kernel: Detected PIPT I-cache on CPU2
Jul 12 09:34:31.820851 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 09:34:31.820858 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 12 09:34:31.820868 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 09:34:31.820875 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 09:34:31.820882 kernel: Detected PIPT I-cache on CPU3
Jul 12 09:34:31.820889 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 09:34:31.820896 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 12 09:34:31.820903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 09:34:31.820909 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 09:34:31.820916 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 09:34:31.820923 kernel: SMP: Total of 4 processors activated.
Jul 12 09:34:31.820931 kernel: CPU: All CPU(s) started at EL1
Jul 12 09:34:31.820938 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 09:34:31.820945 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 09:34:31.820952 kernel: CPU features: detected: Common not Private translations
Jul 12 09:34:31.820959 kernel: CPU features: detected: CRC32 instructions
Jul 12 09:34:31.820966 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 09:34:31.820972 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 09:34:31.820979 kernel: CPU features: detected: LSE atomic instructions
Jul 12 09:34:31.820986 kernel: CPU features: detected: Privileged Access Never
Jul 12 09:34:31.820994 kernel: CPU features: detected: RAS Extension Support
Jul 12 09:34:31.821001 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 09:34:31.821008 kernel: alternatives: applying system-wide alternatives
Jul 12 09:34:31.821015 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 12 09:34:31.821023 kernel: Memory: 2424028K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125924K reserved, 16384K cma-reserved)
Jul 12 09:34:31.821030 kernel: devtmpfs: initialized
Jul 12 09:34:31.821037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 09:34:31.821044 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 09:34:31.821051 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 09:34:31.821059 kernel: 0 pages in range for non-PLT usage
Jul 12 09:34:31.821066 kernel: 508448 pages in range for PLT usage
Jul 12 09:34:31.821072 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 09:34:31.821079 kernel: SMBIOS 3.0.0 present.
Jul 12 09:34:31.821092 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 12 09:34:31.821099 kernel: DMI: Memory slots populated: 1/1
Jul 12 09:34:31.821105 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 09:34:31.821112 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 09:34:31.821120 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 09:34:31.821129 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 09:34:31.821135 kernel: audit: initializing netlink subsys (disabled)
Jul 12 09:34:31.821142 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 12 09:34:31.821149 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 09:34:31.821156 kernel: cpuidle: using governor menu
Jul 12 09:34:31.821163 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 09:34:31.821170 kernel: ASID allocator initialised with 32768 entries
Jul 12 09:34:31.821177 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 09:34:31.821183 kernel: Serial: AMBA PL011 UART driver
Jul 12 09:34:31.821191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 09:34:31.821198 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 09:34:31.821205 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 09:34:31.821212 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 09:34:31.821219 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 09:34:31.821226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 09:34:31.821233 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 09:34:31.821240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 09:34:31.821246 kernel: ACPI: Added _OSI(Module Device)
Jul 12 09:34:31.821254 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 09:34:31.821261 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 09:34:31.821268 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 09:34:31.821275 kernel: ACPI: Interpreter enabled
Jul 12 09:34:31.821282 kernel: ACPI: Using GIC for interrupt routing
Jul 12 09:34:31.821288 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 09:34:31.821295 kernel: ACPI: CPU0 has been hot-added
Jul 12 09:34:31.821302 kernel: ACPI: CPU1 has been hot-added
Jul 12 09:34:31.821309 kernel: ACPI: CPU2 has been hot-added
Jul 12 09:34:31.821316 kernel: ACPI: CPU3 has been hot-added
Jul 12 09:34:31.821324 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 09:34:31.821331 kernel: printk: legacy console [ttyAMA0] enabled
Jul 12 09:34:31.821338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 09:34:31.821463 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 09:34:31.821527 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 09:34:31.821586 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 09:34:31.821643 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 09:34:31.821703 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 09:34:31.821712 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 09:34:31.821719 kernel: PCI host bridge to bus 0000:00
Jul 12 09:34:31.821782 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 09:34:31.821865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 09:34:31.821920 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 09:34:31.821972 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 09:34:31.822054 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 12 09:34:31.822132 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 09:34:31.822195 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 12 09:34:31.822254 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 12 09:34:31.822314 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 09:34:31.822374 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 12 09:34:31.822433 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 12 09:34:31.822495 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 12 09:34:31.822551 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 09:34:31.822603 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 09:34:31.822657 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 09:34:31.822666 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 09:34:31.822674 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 09:34:31.822680 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 09:34:31.822689 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 09:34:31.822696 kernel: iommu: Default domain type: Translated
Jul 12 09:34:31.822703 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 09:34:31.822710 kernel: efivars: Registered efivars operations
Jul 12 09:34:31.822717 kernel: vgaarb: loaded
Jul 12 09:34:31.822724 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 09:34:31.822731 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 09:34:31.822738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 09:34:31.822744 kernel: pnp: PnP ACPI init
Jul 12 09:34:31.822829 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 09:34:31.822840 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 09:34:31.822847 kernel: NET: Registered PF_INET protocol family
Jul 12 09:34:31.822854 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 09:34:31.822861 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 09:34:31.822868 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 09:34:31.822875 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 09:34:31.822882 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 09:34:31.822891 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 09:34:31.822898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 09:34:31.822905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 09:34:31.822912 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 09:34:31.822919 kernel: PCI: CLS 0 bytes, default 64
Jul 12 09:34:31.822926 kernel: kvm [1]: HYP mode not available
Jul 12 09:34:31.822933 kernel: Initialise system trusted keyrings
Jul 12 09:34:31.822940 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 09:34:31.822947 kernel: Key type asymmetric registered
Jul 12 09:34:31.822955 kernel: Asymmetric key parser 'x509' registered
Jul 12 09:34:31.822962 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 09:34:31.822969 kernel: io scheduler mq-deadline registered
Jul 12 09:34:31.822976 kernel: io scheduler kyber registered
Jul 12 09:34:31.822983 kernel: io scheduler bfq registered
Jul 12 09:34:31.822990 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 09:34:31.822997 kernel: ACPI: button: Power Button [PWRB]
Jul 12 09:34:31.823004 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 09:34:31.823067 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 09:34:31.823078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 09:34:31.823090 kernel: thunder_xcv, ver 1.0
Jul 12 09:34:31.823097 kernel: thunder_bgx, ver 1.0
Jul 12 09:34:31.823104 kernel: nicpf, ver 1.0
Jul 12 09:34:31.823111 kernel: nicvf, ver 1.0
Jul 12 09:34:31.823179 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 09:34:31.823238 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T09:34:31 UTC (1752312871)
Jul 12 09:34:31.823247 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 09:34:31.823258 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 12 09:34:31.823267 kernel: watchdog: NMI not fully supported
Jul 12 09:34:31.823275 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 09:34:31.823282 kernel: NET: Registered PF_INET6 protocol family
Jul 12 09:34:31.823288 kernel: Segment Routing with IPv6
Jul 12 09:34:31.823296 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 09:34:31.823302 kernel: NET: Registered PF_PACKET protocol family
Jul 12 09:34:31.823309 kernel: Key type dns_resolver registered
Jul 12 09:34:31.823316 kernel: registered taskstats version 1
Jul 12 09:34:31.823323 kernel: Loading compiled-in X.509 certificates
Jul 12 09:34:31.823331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 5833903fd926e330df1283c2ccd9d99e7cfa4219'
Jul 12 09:34:31.823338 kernel: Demotion targets for Node 0: null
Jul 12 09:34:31.823345 kernel: Key type .fscrypt registered
Jul 12 09:34:31.823351 kernel: Key type fscrypt-provisioning registered
Jul 12 09:34:31.823358 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 09:34:31.823365 kernel: ima: Allocated hash algorithm: sha1
Jul 12 09:34:31.823372 kernel: ima: No architecture policies found
Jul 12 09:34:31.823379 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 09:34:31.823387 kernel: clk: Disabling unused clocks
Jul 12 09:34:31.823394 kernel: PM: genpd: Disabling unused power domains
Jul 12 09:34:31.823401 kernel: Warning: unable to open an initial console.
Jul 12 09:34:31.823408 kernel: Freeing unused kernel memory: 39424K
Jul 12 09:34:31.823415 kernel: Run /init as init process
Jul 12 09:34:31.823421 kernel: with arguments:
Jul 12 09:34:31.823428 kernel: /init
Jul 12 09:34:31.823435 kernel: with environment:
Jul 12 09:34:31.823442 kernel: HOME=/
Jul 12 09:34:31.823448 kernel: TERM=linux
Jul 12 09:34:31.823456 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 09:34:31.823464 systemd[1]: Successfully made /usr/ read-only.
Jul 12 09:34:31.823474 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 09:34:31.823482 systemd[1]: Detected virtualization kvm.
Jul 12 09:34:31.823490 systemd[1]: Detected architecture arm64.
Jul 12 09:34:31.823497 systemd[1]: Running in initrd.
Jul 12 09:34:31.823504 systemd[1]: No hostname configured, using default hostname.
Jul 12 09:34:31.823515 systemd[1]: Hostname set to .
Jul 12 09:34:31.823523 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 09:34:31.823530 systemd[1]: Queued start job for default target initrd.target.
Jul 12 09:34:31.823538 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 09:34:31.823545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 09:34:31.823553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 09:34:31.823561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 09:34:31.823568 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 09:34:31.823578 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 09:34:31.823586 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 09:34:31.823594 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 09:34:31.823601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 09:34:31.823609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 09:34:31.823616 systemd[1]: Reached target paths.target - Path Units.
Jul 12 09:34:31.823624 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 09:34:31.823632 systemd[1]: Reached target swap.target - Swaps.
Jul 12 09:34:31.823639 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 09:34:31.823647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 09:34:31.823654 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 09:34:31.823662 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 09:34:31.823669 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 09:34:31.823677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 09:34:31.823684 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 09:34:31.823693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 09:34:31.823700 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 09:34:31.823708 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 09:34:31.823715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 09:34:31.823723 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 09:34:31.823730 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 12 09:34:31.823738 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 09:34:31.823745 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 09:34:31.823753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 09:34:31.823761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 09:34:31.823769 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 09:34:31.823777 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 09:34:31.823784 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 09:34:31.823793 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 09:34:31.823830 systemd-journald[246]: Collecting audit messages is disabled.
Jul 12 09:34:31.823849 systemd-journald[246]: Journal started
Jul 12 09:34:31.823869 systemd-journald[246]: Runtime Journal (/run/log/journal/77be8f09f1454b6a8b9225d9af5c1f5c) is 6M, max 48.5M, 42.4M free.
Jul 12 09:34:31.813097 systemd-modules-load[247]: Inserted module 'overlay'
Jul 12 09:34:31.825632 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 09:34:31.826923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 09:34:31.830371 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 09:34:31.829585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 09:34:31.833222 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 12 09:34:31.834131 kernel: Bridge firewalling registered
Jul 12 09:34:31.833520 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 09:34:31.835722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 09:34:31.844305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 09:34:31.845638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 09:34:31.849042 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 09:34:31.853968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 09:34:31.856233 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 12 09:34:31.858772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 09:34:31.863255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 09:34:31.864627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 09:34:31.867562 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 09:34:31.869858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 09:34:31.891910 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2eed6122ab9d95fa96c8f5511b96c1220a0caf18bbf7b84035ef573d9ba90496
Jul 12 09:34:31.909556 systemd-resolved[288]: Positive Trust Anchors:
Jul 12 09:34:31.909571 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 09:34:31.909604 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 09:34:31.914279 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 12 09:34:31.915222 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 09:34:31.919979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 09:34:31.970840 kernel: SCSI subsystem initialized
Jul 12 09:34:31.978825 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 09:34:31.985827 kernel: iscsi: registered transport (tcp)
Jul 12 09:34:32.000840 kernel: iscsi: registered transport (qla4xxx)
Jul 12 09:34:32.000886 kernel: QLogic iSCSI HBA Driver
Jul 12 09:34:32.017798 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 09:34:32.044079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 09:34:32.045592 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 09:34:32.092249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 09:34:32.095851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 09:34:32.161845 kernel: raid6: neonx8 gen() 15736 MB/s Jul 12 09:34:32.178829 kernel: raid6: neonx4 gen() 15780 MB/s Jul 12 09:34:32.195829 kernel: raid6: neonx2 gen() 13155 MB/s Jul 12 09:34:32.212835 kernel: raid6: neonx1 gen() 10426 MB/s Jul 12 09:34:32.229829 kernel: raid6: int64x8 gen() 6875 MB/s Jul 12 09:34:32.246826 kernel: raid6: int64x4 gen() 7324 MB/s Jul 12 09:34:32.263832 kernel: raid6: int64x2 gen() 6089 MB/s Jul 12 09:34:32.280955 kernel: raid6: int64x1 gen() 5037 MB/s Jul 12 09:34:32.280972 kernel: raid6: using algorithm neonx4 gen() 15780 MB/s Jul 12 09:34:32.298900 kernel: raid6: .... xor() 12320 MB/s, rmw enabled Jul 12 09:34:32.298913 kernel: raid6: using neon recovery algorithm Jul 12 09:34:32.303826 kernel: xor: measuring software checksum speed Jul 12 09:34:32.305043 kernel: 8regs : 18903 MB/sec Jul 12 09:34:32.305058 kernel: 32regs : 21664 MB/sec Jul 12 09:34:32.306333 kernel: arm64_neon : 26535 MB/sec Jul 12 09:34:32.306350 kernel: xor: using function: arm64_neon (26535 MB/sec) Jul 12 09:34:32.363833 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 09:34:32.369815 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 09:34:32.372562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 09:34:32.401715 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jul 12 09:34:32.405888 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 09:34:32.408227 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 09:34:32.434755 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jul 12 09:34:32.456606 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 09:34:32.458991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 09:34:32.514331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 12 09:34:32.516997 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 09:34:32.564476 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 12 09:34:32.564650 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 09:34:32.572499 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 09:34:32.572554 kernel: GPT:9289727 != 19775487 Jul 12 09:34:32.572567 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 09:34:32.573006 kernel: GPT:9289727 != 19775487 Jul 12 09:34:32.573405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 09:34:32.576357 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 09:34:32.576380 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:34:32.573524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:34:32.575700 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:34:32.578171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:34:32.605704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:34:32.612783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 09:34:32.622585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 09:34:32.629487 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 09:34:32.630750 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 12 09:34:32.639940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 12 09:34:32.647999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jul 12 09:34:32.649294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 09:34:32.651813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 09:34:32.653984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 09:34:32.656713 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 09:34:32.658559 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 09:34:32.679564 disk-uuid[588]: Primary Header is updated. Jul 12 09:34:32.679564 disk-uuid[588]: Secondary Entries is updated. Jul 12 09:34:32.679564 disk-uuid[588]: Secondary Header is updated. Jul 12 09:34:32.683710 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 09:34:32.687482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:34:33.693861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:34:33.694776 disk-uuid[593]: The operation has completed successfully. Jul 12 09:34:33.714482 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 09:34:33.714576 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 09:34:33.747783 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 09:34:33.764706 sh[609]: Success Jul 12 09:34:33.782100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 09:34:33.782144 kernel: device-mapper: uevent: version 1.0.3 Jul 12 09:34:33.783765 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 09:34:33.794830 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 12 09:34:33.822912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 09:34:33.825915 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 12 09:34:33.835508 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 09:34:33.845386 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 09:34:33.845426 kernel: BTRFS: device fsid 61a6979b-5b23-4687-8775-cb04acb91b0a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (621) Jul 12 09:34:33.846767 kernel: BTRFS info (device dm-0): first mount of filesystem 61a6979b-5b23-4687-8775-cb04acb91b0a Jul 12 09:34:33.846782 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:34:33.848345 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 09:34:33.851550 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 09:34:33.852847 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 09:34:33.854234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 09:34:33.854987 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 09:34:33.856496 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 09:34:33.886850 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652) Jul 12 09:34:33.889577 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:34:33.889609 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:34:33.889620 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:34:33.895825 kernel: BTRFS info (device vda6): last unmount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:34:33.897524 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 09:34:33.900042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 12 09:34:33.959958 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 09:34:33.964489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 09:34:34.002781 systemd-networkd[791]: lo: Link UP Jul 12 09:34:34.002791 systemd-networkd[791]: lo: Gained carrier Jul 12 09:34:34.003599 systemd-networkd[791]: Enumeration completed Jul 12 09:34:34.003676 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 09:34:34.005375 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:34:34.005379 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 09:34:34.005803 systemd-networkd[791]: eth0: Link UP Jul 12 09:34:34.005824 systemd-networkd[791]: eth0: Gained carrier Jul 12 09:34:34.005831 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:34:34.006018 systemd[1]: Reached target network.target - Network. 
Jul 12 09:34:34.033855 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 09:34:34.055457 ignition[701]: Ignition 2.21.0 Jul 12 09:34:34.055472 ignition[701]: Stage: fetch-offline Jul 12 09:34:34.055507 ignition[701]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:34.055514 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:34.055835 ignition[701]: parsed url from cmdline: "" Jul 12 09:34:34.055840 ignition[701]: no config URL provided Jul 12 09:34:34.055845 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 09:34:34.055854 ignition[701]: no config at "/usr/lib/ignition/user.ign" Jul 12 09:34:34.055871 ignition[701]: op(1): [started] loading QEMU firmware config module Jul 12 09:34:34.055875 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 09:34:34.068682 ignition[701]: op(1): [finished] loading QEMU firmware config module Jul 12 09:34:34.106146 ignition[701]: parsing config with SHA512: a6109faf4899688d1a8fc13f943263e75a14146a5baec6088561270fe33db9787d92c6c86a9ee887ac59d9fb236550986e3f0af93cbb8c480a68b0a015ea9835 Jul 12 09:34:34.109982 unknown[701]: fetched base config from "system" Jul 12 09:34:34.109993 unknown[701]: fetched user config from "qemu" Jul 12 09:34:34.110365 ignition[701]: fetch-offline: fetch-offline passed Jul 12 09:34:34.110415 ignition[701]: Ignition finished successfully Jul 12 09:34:34.113716 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 09:34:34.115268 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 09:34:34.118091 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 12 09:34:34.146695 ignition[807]: Ignition 2.21.0 Jul 12 09:34:34.146711 ignition[807]: Stage: kargs Jul 12 09:34:34.146850 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:34.146858 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:34.148793 ignition[807]: kargs: kargs passed Jul 12 09:34:34.148857 ignition[807]: Ignition finished successfully Jul 12 09:34:34.152139 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 09:34:34.154935 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 09:34:34.177348 ignition[816]: Ignition 2.21.0 Jul 12 09:34:34.177365 ignition[816]: Stage: disks Jul 12 09:34:34.177481 ignition[816]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:34.177490 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:34.179753 ignition[816]: disks: disks passed Jul 12 09:34:34.182180 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 09:34:34.179821 ignition[816]: Ignition finished successfully Jul 12 09:34:34.183280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 09:34:34.184967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 09:34:34.186656 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 09:34:34.188449 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 09:34:34.190355 systemd[1]: Reached target basic.target - Basic System. Jul 12 09:34:34.192736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 09:34:34.216155 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 12 09:34:34.220888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 09:34:34.225005 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 12 09:34:34.295824 kernel: EXT4-fs (vda9): mounted filesystem 016d0f7f-22a0-4255-85cc-97a6d773acb9 r/w with ordered data mode. Quota mode: none. Jul 12 09:34:34.296626 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 09:34:34.297864 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 09:34:34.302005 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 09:34:34.303590 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 09:34:34.304628 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 09:34:34.304665 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 09:34:34.304701 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 09:34:34.321333 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 09:34:34.324419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 09:34:34.330327 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (834) Jul 12 09:34:34.330348 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:34:34.330363 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:34:34.330372 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:34:34.331695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 12 09:34:34.368269 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 09:34:34.371197 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory Jul 12 09:34:34.375095 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 09:34:34.378913 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 09:34:34.443470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 09:34:34.445439 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 09:34:34.446912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 09:34:34.472832 kernel: BTRFS info (device vda6): last unmount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:34:34.486452 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 09:34:34.493896 ignition[948]: INFO : Ignition 2.21.0 Jul 12 09:34:34.493896 ignition[948]: INFO : Stage: mount Jul 12 09:34:34.496497 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:34.496497 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:34.496497 ignition[948]: INFO : mount: mount passed Jul 12 09:34:34.496497 ignition[948]: INFO : Ignition finished successfully Jul 12 09:34:34.497180 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 09:34:34.499135 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 09:34:34.844225 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 09:34:34.845800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 09:34:34.879266 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (960) Jul 12 09:34:34.879303 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:34:34.879313 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:34:34.880250 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:34:34.883537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 09:34:34.912737 ignition[977]: INFO : Ignition 2.21.0 Jul 12 09:34:34.912737 ignition[977]: INFO : Stage: files Jul 12 09:34:34.914750 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:34.914750 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:34.916914 ignition[977]: DEBUG : files: compiled without relabeling support, skipping Jul 12 09:34:34.916914 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 09:34:34.916914 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 09:34:34.920938 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 09:34:34.920938 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 09:34:34.920938 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 09:34:34.920938 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 09:34:34.920938 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 12 09:34:34.918618 unknown[977]: wrote ssh authorized keys file for user: core Jul 12 09:34:34.965985 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Jul 12 09:34:35.143194 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 09:34:35.145243 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 09:34:35.158873 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 09:34:35.158873 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 09:34:35.158873 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 09:34:35.158873 ignition[977]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 09:34:35.158873 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 09:34:35.158873 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 12 09:34:35.710579 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 12 09:34:35.836960 systemd-networkd[791]: eth0: Gained IPv6LL Jul 12 09:34:36.329785 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 09:34:36.329785 ignition[977]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jul 12 09:34:36.333556 ignition[977]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 09:34:36.349667 ignition[977]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 09:34:36.352938 ignition[977]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 09:34:36.355527 ignition[977]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 09:34:36.355527 ignition[977]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 12 09:34:36.355527 ignition[977]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 09:34:36.355527 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 09:34:36.355527 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 09:34:36.355527 ignition[977]: INFO : files: files passed Jul 12 09:34:36.355527 ignition[977]: INFO : Ignition finished successfully Jul 12 09:34:36.356427 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 09:34:36.358920 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 09:34:36.360922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 09:34:36.370523 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 09:34:36.373754 initrd-setup-root-after-ignition[1005]: grep: /sysroot/oem/oem-release: No such file or directory Jul 12 09:34:36.370599 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 12 09:34:36.376011 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:34:36.376011 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:34:36.375852 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 09:34:36.382943 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:34:36.377270 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 09:34:36.380163 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 09:34:36.422832 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 09:34:36.422957 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 09:34:36.425117 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 09:34:36.426823 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 09:34:36.428647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 09:34:36.429423 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 09:34:36.443124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 09:34:36.445479 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 09:34:36.461572 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 09:34:36.463796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 09:34:36.465008 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 09:34:36.466842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jul 12 09:34:36.466974 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 09:34:36.469436 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 09:34:36.471390 systemd[1]: Stopped target basic.target - Basic System. Jul 12 09:34:36.472999 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 09:34:36.474726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 09:34:36.476707 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 09:34:36.478664 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 12 09:34:36.480582 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 09:34:36.482445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 09:34:36.484379 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 09:34:36.486300 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 09:34:36.487996 systemd[1]: Stopped target swap.target - Swaps. Jul 12 09:34:36.489505 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 09:34:36.489635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 09:34:36.491913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 09:34:36.493829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 09:34:36.495755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 09:34:36.496896 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 09:34:36.498801 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 09:34:36.498945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 09:34:36.501611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jul 12 09:34:36.501741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 09:34:36.503730 systemd[1]: Stopped target paths.target - Path Units. Jul 12 09:34:36.505279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 09:34:36.508869 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 09:34:36.510138 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 09:34:36.512176 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 09:34:36.513698 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 09:34:36.513786 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 09:34:36.515317 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 09:34:36.515398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 09:34:36.516892 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 09:34:36.517017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 09:34:36.518735 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 09:34:36.518853 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 09:34:36.521150 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 09:34:36.523530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 09:34:36.524691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 09:34:36.524838 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 09:34:36.526646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 09:34:36.526748 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 09:34:36.531918 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 12 09:34:36.539981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 09:34:36.548148 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 09:34:36.550850 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 09:34:36.550947 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 09:34:36.553676 ignition[1032]: INFO : Ignition 2.21.0 Jul 12 09:34:36.553676 ignition[1032]: INFO : Stage: umount Jul 12 09:34:36.553676 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:34:36.553676 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:34:36.559114 ignition[1032]: INFO : umount: umount passed Jul 12 09:34:36.559114 ignition[1032]: INFO : Ignition finished successfully Jul 12 09:34:36.555923 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 09:34:36.556019 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 09:34:36.559004 systemd[1]: Stopped target network.target - Network. Jul 12 09:34:36.559917 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 09:34:36.559990 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 09:34:36.561477 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 09:34:36.561527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 09:34:36.563929 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 09:34:36.563987 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 09:34:36.565046 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 09:34:36.565104 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 09:34:36.566992 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 09:34:36.567057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 12 09:34:36.568888 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 09:34:36.570860 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 09:34:36.578527 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 09:34:36.578640 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 09:34:36.582529 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 12 09:34:36.582708 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 09:34:36.582926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 09:34:36.587643 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 12 09:34:36.587990 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 12 09:34:36.589756 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 09:34:36.589789 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 09:34:36.592571 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 09:34:36.593611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 09:34:36.593675 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 09:34:36.595831 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 09:34:36.595876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:34:36.598533 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 09:34:36.598573 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 09:34:36.599651 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 09:34:36.599697 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 12 09:34:36.602769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 09:34:36.607897 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 09:34:36.607955 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 09:34:36.620540 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 09:34:36.620643 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 09:34:36.622644 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 09:34:36.622767 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 09:34:36.624712 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 09:34:36.624770 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 09:34:36.625969 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 09:34:36.625998 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 09:34:36.630604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 09:34:36.630651 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 09:34:36.633228 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 09:34:36.633273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 09:34:36.635744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 09:34:36.635796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 09:34:36.639262 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 09:34:36.640702 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 09:34:36.640753 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 09:34:36.643767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 09:34:36.643824 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 09:34:36.646962 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 09:34:36.647002 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 09:34:36.650121 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 09:34:36.650160 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 09:34:36.652544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 09:34:36.652586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 09:34:36.656413 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 12 09:34:36.656459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 12 09:34:36.656486 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 12 09:34:36.656514 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 09:34:36.660964 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 09:34:36.662850 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 09:34:36.664971 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 09:34:36.667441 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 09:34:36.691971 systemd[1]: Switching root.
Jul 12 09:34:36.725700 systemd-journald[246]: Journal stopped
Jul 12 09:34:37.497445 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Jul 12 09:34:37.497496 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 09:34:37.497515 kernel: SELinux: policy capability open_perms=1
Jul 12 09:34:37.497526 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 09:34:37.497539 kernel: SELinux: policy capability always_check_network=0
Jul 12 09:34:37.497550 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 09:34:37.497560 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 09:34:37.497569 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 09:34:37.497581 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 09:34:37.497594 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 09:34:37.497603 kernel: audit: type=1403 audit(1752312876.910:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 09:34:37.497618 systemd[1]: Successfully loaded SELinux policy in 69.806ms.
Jul 12 09:34:37.497642 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.341ms.
Jul 12 09:34:37.497655 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 09:34:37.497665 systemd[1]: Detected virtualization kvm.
Jul 12 09:34:37.497677 systemd[1]: Detected architecture arm64.
Jul 12 09:34:37.497689 systemd[1]: Detected first boot.
Jul 12 09:34:37.497698 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 09:34:37.497709 zram_generator::config[1077]: No configuration found.
Jul 12 09:34:37.497719 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 09:34:37.497730 systemd[1]: Populated /etc with preset unit settings.
Jul 12 09:34:37.497741 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 09:34:37.497750 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 09:34:37.497761 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 09:34:37.497770 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 09:34:37.497784 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 09:34:37.497794 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 09:34:37.497804 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 09:34:37.497823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 09:34:37.497834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 09:34:37.497844 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 09:34:37.497854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 09:34:37.497867 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 09:34:37.497877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 09:34:37.497890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 09:34:37.497900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 09:34:37.497910 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 09:34:37.497921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 09:34:37.497931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 09:34:37.497942 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 09:34:37.497951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 09:34:37.497961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 09:34:37.497972 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 09:34:37.497982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 09:34:37.497993 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 09:34:37.498003 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 09:34:37.498012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 09:34:37.498022 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 09:34:37.498032 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 09:34:37.498042 systemd[1]: Reached target swap.target - Swaps.
Jul 12 09:34:37.498052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 09:34:37.498062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 09:34:37.498078 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 09:34:37.498088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 09:34:37.498100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 09:34:37.498110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 09:34:37.498120 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 09:34:37.498130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 09:34:37.498142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 09:34:37.498152 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 09:34:37.498162 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 09:34:37.498172 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 09:34:37.498182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 09:34:37.498194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 09:34:37.498203 systemd[1]: Reached target machines.target - Containers.
Jul 12 09:34:37.498215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 09:34:37.498225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 09:34:37.498237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 09:34:37.498248 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 09:34:37.498258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 09:34:37.498268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 09:34:37.498279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 09:34:37.498289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 09:34:37.498299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 09:34:37.498310 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 09:34:37.498320 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 09:34:37.498330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 09:34:37.498340 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 09:34:37.498350 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 09:34:37.498364 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 09:34:37.498374 kernel: fuse: init (API version 7.41)
Jul 12 09:34:37.498383 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 09:34:37.498393 kernel: loop: module loaded
Jul 12 09:34:37.498402 kernel: ACPI: bus type drm_connector registered
Jul 12 09:34:37.498412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 09:34:37.498422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 09:34:37.498432 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 09:34:37.498442 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 09:34:37.498454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 09:34:37.498464 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 09:34:37.498474 systemd[1]: Stopped verity-setup.service.
Jul 12 09:34:37.498504 systemd-journald[1149]: Collecting audit messages is disabled.
Jul 12 09:34:37.498527 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 09:34:37.498538 systemd-journald[1149]: Journal started
Jul 12 09:34:37.498558 systemd-journald[1149]: Runtime Journal (/run/log/journal/77be8f09f1454b6a8b9225d9af5c1f5c) is 6M, max 48.5M, 42.4M free.
Jul 12 09:34:37.275628 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 09:34:37.298760 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 09:34:37.299114 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 09:34:37.500964 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 09:34:37.501548 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 09:34:37.502760 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 09:34:37.503868 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 09:34:37.504977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 09:34:37.506135 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 09:34:37.507312 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 09:34:37.508678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 09:34:37.510120 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 09:34:37.510278 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 09:34:37.511617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 09:34:37.511765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 09:34:37.513165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 09:34:37.513308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 09:34:37.514523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 09:34:37.514670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 09:34:37.516118 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 09:34:37.516286 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 09:34:37.517533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 09:34:37.517686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 09:34:37.518996 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 09:34:37.520883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 09:34:37.522399 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 09:34:37.523990 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 09:34:37.535442 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 09:34:37.537782 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 09:34:37.539757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 09:34:37.541052 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 09:34:37.541091 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 09:34:37.542878 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 09:34:37.552562 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 09:34:37.553974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 09:34:37.555165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 09:34:37.557053 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 09:34:37.558290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 09:34:37.561496 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 09:34:37.562607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 09:34:37.564106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 09:34:37.566524 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 09:34:37.567099 systemd-journald[1149]: Time spent on flushing to /var/log/journal/77be8f09f1454b6a8b9225d9af5c1f5c is 20.561ms for 887 entries.
Jul 12 09:34:37.567099 systemd-journald[1149]: System Journal (/var/log/journal/77be8f09f1454b6a8b9225d9af5c1f5c) is 8M, max 195.6M, 187.6M free.
Jul 12 09:34:37.593767 systemd-journald[1149]: Received client request to flush runtime journal.
Jul 12 09:34:37.577934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 09:34:37.598860 kernel: loop0: detected capacity change from 0 to 207008
Jul 12 09:34:37.582493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 09:34:37.583941 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 09:34:37.585180 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 09:34:37.596979 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 09:34:37.600651 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 09:34:37.604148 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 09:34:37.606996 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 12 09:34:37.607261 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jul 12 09:34:37.610955 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 09:34:37.613205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 09:34:37.613857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 09:34:37.623027 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 09:34:37.626757 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 09:34:37.650297 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 09:34:37.652832 kernel: loop1: detected capacity change from 0 to 134232
Jul 12 09:34:37.669667 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 09:34:37.672133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 09:34:37.678281 kernel: loop2: detected capacity change from 0 to 105936
Jul 12 09:34:37.689671 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 12 09:34:37.689687 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 12 09:34:37.693309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 09:34:37.701839 kernel: loop3: detected capacity change from 0 to 207008
Jul 12 09:34:37.709831 kernel: loop4: detected capacity change from 0 to 134232
Jul 12 09:34:37.717832 kernel: loop5: detected capacity change from 0 to 105936
Jul 12 09:34:37.722098 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 09:34:37.722478 (sd-merge)[1219]: Merged extensions into '/usr'.
Jul 12 09:34:37.725907 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 09:34:37.725922 systemd[1]: Reloading...
Jul 12 09:34:37.786850 zram_generator::config[1249]: No configuration found.
Jul 12 09:34:37.851870 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 09:34:37.863087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 09:34:37.925889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 09:34:37.926024 systemd[1]: Reloading finished in 199 ms.
Jul 12 09:34:37.962576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 09:34:37.964056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 09:34:37.976945 systemd[1]: Starting ensure-sysext.service...
Jul 12 09:34:37.978691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 09:34:37.991362 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 12 09:34:37.991393 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 12 09:34:37.991636 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 09:34:37.991859 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 09:34:37.992534 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 09:34:37.992625 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
Jul 12 09:34:37.992641 systemd[1]: Reloading...
Jul 12 09:34:37.992757 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jul 12 09:34:37.992882 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jul 12 09:34:38.000033 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 09:34:38.000046 systemd-tmpfiles[1282]: Skipping /boot
Jul 12 09:34:38.006153 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 09:34:38.006167 systemd-tmpfiles[1282]: Skipping /boot
Jul 12 09:34:38.034901 zram_generator::config[1309]: No configuration found.
Jul 12 09:34:38.099131 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 09:34:38.160591 systemd[1]: Reloading finished in 167 ms.
Jul 12 09:34:38.181455 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 09:34:38.187429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 09:34:38.193632 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 09:34:38.196274 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 09:34:38.198464 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 09:34:38.201105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 09:34:38.205307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 09:34:38.208905 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 09:34:38.223051 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 09:34:38.227424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 09:34:38.228539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 09:34:38.230943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 09:34:38.233169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 09:34:38.234383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 09:34:38.234575 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 09:34:38.238229 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 09:34:38.240274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 09:34:38.240482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 09:34:38.243132 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 09:34:38.243348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 09:34:38.247754 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Jul 12 09:34:38.248437 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 09:34:38.250569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 09:34:38.250799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 09:34:38.257286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 09:34:38.258868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 09:34:38.262033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 09:34:38.265023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 09:34:38.266190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 09:34:38.266337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 09:34:38.270618 augenrules[1382]: No rules
Jul 12 09:34:38.272872 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 09:34:38.275429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 09:34:38.277310 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 09:34:38.277588 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 09:34:38.280116 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 09:34:38.281797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 09:34:38.281983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 09:34:38.283612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 09:34:38.283752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 09:34:38.285438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 09:34:38.285590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 09:34:38.291216 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 09:34:38.296786 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 09:34:38.306170 systemd[1]: Finished ensure-sysext.service.
Jul 12 09:34:38.318835 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 09:34:38.319880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 09:34:38.320885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 09:34:38.328519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 09:34:38.331858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 09:34:38.333959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 09:34:38.335127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 09:34:38.335172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 09:34:38.336883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 09:34:38.344110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 09:34:38.345750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 09:34:38.353995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 09:34:38.354172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 09:34:38.356136 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 09:34:38.356313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 09:34:38.358947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 09:34:38.359200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 09:34:38.359348 augenrules[1426]: /sbin/augenrules: No change
Jul 12 09:34:38.360946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 09:34:38.361200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 09:34:38.368329 augenrules[1458]: No rules
Jul 12 09:34:38.370738 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 09:34:38.375175 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 09:34:38.377771 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 12 09:34:38.385321 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 09:34:38.385379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 09:34:38.399449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 09:34:38.402409 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 09:34:38.436119 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 09:34:38.473351 systemd-networkd[1433]: lo: Link UP
Jul 12 09:34:38.473359 systemd-networkd[1433]: lo: Gained carrier
Jul 12 09:34:38.474296 systemd-networkd[1433]: Enumeration completed
Jul 12 09:34:38.476364 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 09:34:38.478050 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 09:34:38.480837 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 09:34:38.480845 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 09:34:38.481028 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 09:34:38.481397 systemd-networkd[1433]: eth0: Link UP Jul 12 09:34:38.481504 systemd-networkd[1433]: eth0: Gained carrier Jul 12 09:34:38.481521 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:34:38.485008 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 12 09:34:38.487215 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 09:34:38.490903 systemd-resolved[1349]: Positive Trust Anchors: Jul 12 09:34:38.490920 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 09:34:38.490952 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 09:34:38.497146 systemd-resolved[1349]: Defaulting to hostname 'linux'. Jul 12 09:34:38.500678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 09:34:38.502254 systemd[1]: Reached target network.target - Network. Jul 12 09:34:38.503226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 09:34:38.504457 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 09:34:38.506049 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 12 09:34:38.507845 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 09:34:38.511167 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 09:34:38.512354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 09:34:38.513603 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 09:34:38.514865 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 09:34:38.515041 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 09:34:38.515075 systemd[1]: Reached target paths.target - Path Units. Jul 12 09:34:38.515348 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Jul 12 09:34:38.516180 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 09:34:38.516223 systemd-timesyncd[1439]: Initial clock synchronization to Sat 2025-07-12 09:34:38.531657 UTC. Jul 12 09:34:38.516881 systemd[1]: Reached target timers.target - Timer Units. Jul 12 09:34:38.518613 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 09:34:38.521954 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 09:34:38.525612 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 09:34:38.527137 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 12 09:34:38.528444 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 09:34:38.532888 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 09:34:38.534168 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jul 12 09:34:38.536175 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 09:34:38.537548 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 09:34:38.545802 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 09:34:38.546762 systemd[1]: Reached target basic.target - Basic System. Jul 12 09:34:38.547754 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 09:34:38.547788 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 09:34:38.548705 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 09:34:38.550644 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 09:34:38.552511 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 09:34:38.557658 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 09:34:38.559620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 09:34:38.560689 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 09:34:38.561669 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 09:34:38.565989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 09:34:38.568429 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 09:34:38.569710 jq[1495]: false Jul 12 09:34:38.571392 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 09:34:38.578075 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 09:34:38.581695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 12 09:34:38.583583 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 09:34:38.583983 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 09:34:38.585941 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 09:34:38.587676 extend-filesystems[1496]: Found /dev/vda6 Jul 12 09:34:38.588947 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 09:34:38.594547 extend-filesystems[1496]: Found /dev/vda9 Jul 12 09:34:38.597249 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 09:34:38.598987 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 09:34:38.599189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 09:34:38.599424 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 09:34:38.599583 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 09:34:38.603899 jq[1515]: true Jul 12 09:34:38.601533 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 09:34:38.604075 extend-filesystems[1496]: Checking size of /dev/vda9 Jul 12 09:34:38.601696 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 12 09:34:38.620177 jq[1522]: true Jul 12 09:34:38.621821 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 09:34:38.623124 extend-filesystems[1496]: Resized partition /dev/vda9 Jul 12 09:34:38.633787 extend-filesystems[1539]: resize2fs 1.47.2 (1-Jan-2025) Jul 12 09:34:38.639940 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 09:34:38.646225 tar[1520]: linux-arm64/LICENSE Jul 12 09:34:38.646533 tar[1520]: linux-arm64/helm Jul 12 09:34:38.671828 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 09:34:38.675986 update_engine[1513]: I20250712 09:34:38.675769 1513 main.cc:92] Flatcar Update Engine starting Jul 12 09:34:38.677007 dbus-daemon[1493]: [system] SELinux support is enabled Jul 12 09:34:38.677190 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 09:34:38.688274 update_engine[1513]: I20250712 09:34:38.685150 1513 update_check_scheduler.cc:74] Next update check in 11m57s Jul 12 09:34:38.682790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 09:34:38.682832 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 09:34:38.686790 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 09:34:38.686823 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 12 09:34:38.687784 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 09:34:38.691853 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 09:34:38.691853 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 09:34:38.691853 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 09:34:38.695194 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Jul 12 09:34:38.689827 systemd-logind[1506]: New seat seat0. Jul 12 09:34:38.690474 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 09:34:38.690695 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 09:34:38.696413 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 09:34:38.704092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:34:38.709024 systemd[1]: Started update-engine.service - Update Engine. Jul 12 09:34:38.712574 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 09:34:38.718461 bash[1556]: Updated "/home/core/.ssh/authorized_keys" Jul 12 09:34:38.719436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 09:34:38.721960 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 12 09:34:38.775754 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 09:34:38.861842 containerd[1532]: time="2025-07-12T09:34:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 09:34:38.862467 containerd[1532]: time="2025-07-12T09:34:38.862428360Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 12 09:34:38.872565 containerd[1532]: time="2025-07-12T09:34:38.872476280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.12µs" Jul 12 09:34:38.872565 containerd[1532]: time="2025-07-12T09:34:38.872554480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 09:34:38.872565 containerd[1532]: time="2025-07-12T09:34:38.872572080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 09:34:38.872796 containerd[1532]: time="2025-07-12T09:34:38.872766040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 09:34:38.872796 containerd[1532]: time="2025-07-12T09:34:38.872790160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 09:34:38.872868 containerd[1532]: time="2025-07-12T09:34:38.872828400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 09:34:38.872969 containerd[1532]: time="2025-07-12T09:34:38.872937280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 09:34:38.872996 containerd[1532]: time="2025-07-12T09:34:38.872972640Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873332 containerd[1532]: time="2025-07-12T09:34:38.873301880Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873332 containerd[1532]: time="2025-07-12T09:34:38.873324640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873371 containerd[1532]: time="2025-07-12T09:34:38.873344320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873371 containerd[1532]: time="2025-07-12T09:34:38.873353720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873505 containerd[1532]: time="2025-07-12T09:34:38.873486680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873958 containerd[1532]: time="2025-07-12T09:34:38.873937080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873989 containerd[1532]: time="2025-07-12T09:34:38.873975200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 09:34:38.873989 containerd[1532]: time="2025-07-12T09:34:38.873986280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 09:34:38.874086 containerd[1532]: time="2025-07-12T09:34:38.874069320Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 09:34:38.874409 containerd[1532]: time="2025-07-12T09:34:38.874385480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 09:34:38.874527 containerd[1532]: time="2025-07-12T09:34:38.874509040Z" level=info msg="metadata content store policy set" policy=shared Jul 12 09:34:38.877514 containerd[1532]: time="2025-07-12T09:34:38.877485960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 09:34:38.877592 containerd[1532]: time="2025-07-12T09:34:38.877536120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 09:34:38.877592 containerd[1532]: time="2025-07-12T09:34:38.877548560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 09:34:38.877592 containerd[1532]: time="2025-07-12T09:34:38.877559840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 09:34:38.877592 containerd[1532]: time="2025-07-12T09:34:38.877577280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 09:34:38.877592 containerd[1532]: time="2025-07-12T09:34:38.877590280Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877601800Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877613720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877625240Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877635440Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877645280Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 09:34:38.877677 containerd[1532]: time="2025-07-12T09:34:38.877660080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 09:34:38.877831 containerd[1532]: time="2025-07-12T09:34:38.877770360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 09:34:38.877831 containerd[1532]: time="2025-07-12T09:34:38.877795840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 09:34:38.877831 containerd[1532]: time="2025-07-12T09:34:38.877828920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 09:34:38.877887 containerd[1532]: time="2025-07-12T09:34:38.877841320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 09:34:38.877887 containerd[1532]: time="2025-07-12T09:34:38.877851920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 09:34:38.877887 containerd[1532]: time="2025-07-12T09:34:38.877863320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 09:34:38.877887 containerd[1532]: time="2025-07-12T09:34:38.877874400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 09:34:38.877887 containerd[1532]: time="2025-07-12T09:34:38.877884560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 
09:34:38.877976 containerd[1532]: time="2025-07-12T09:34:38.877899280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 09:34:38.877976 containerd[1532]: time="2025-07-12T09:34:38.877911080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 09:34:38.877976 containerd[1532]: time="2025-07-12T09:34:38.877922440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 09:34:38.878136 containerd[1532]: time="2025-07-12T09:34:38.878116760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 09:34:38.878175 containerd[1532]: time="2025-07-12T09:34:38.878137480Z" level=info msg="Start snapshots syncer" Jul 12 09:34:38.878175 containerd[1532]: time="2025-07-12T09:34:38.878161720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 09:34:38.878406 containerd[1532]: time="2025-07-12T09:34:38.878374640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 09:34:38.878534 containerd[1532]: time="2025-07-12T09:34:38.878424480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 09:34:38.880131 containerd[1532]: time="2025-07-12T09:34:38.879924960Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 09:34:38.880131 containerd[1532]: time="2025-07-12T09:34:38.880095800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 09:34:38.880208 containerd[1532]: time="2025-07-12T09:34:38.880143960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 09:34:38.880208 containerd[1532]: time="2025-07-12T09:34:38.880164600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 09:34:38.880208 containerd[1532]: time="2025-07-12T09:34:38.880179000Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 09:34:38.880208 containerd[1532]: time="2025-07-12T09:34:38.880197320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 09:34:38.880301 containerd[1532]: time="2025-07-12T09:34:38.880212840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 09:34:38.880301 containerd[1532]: time="2025-07-12T09:34:38.880227640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 09:34:38.880301 containerd[1532]: time="2025-07-12T09:34:38.880264840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 09:34:38.880301 containerd[1532]: time="2025-07-12T09:34:38.880280880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 09:34:38.880301 containerd[1532]: time="2025-07-12T09:34:38.880297000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 09:34:38.880383 containerd[1532]: time="2025-07-12T09:34:38.880343720Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 09:34:38.880383 containerd[1532]: time="2025-07-12T09:34:38.880363200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 09:34:38.880383 containerd[1532]: time="2025-07-12T09:34:38.880377080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 09:34:38.880451 containerd[1532]: time="2025-07-12T09:34:38.880390880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 09:34:38.880451 containerd[1532]: time="2025-07-12T09:34:38.880400000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 09:34:38.880451 containerd[1532]: time="2025-07-12T09:34:38.880415480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 09:34:38.880451 containerd[1532]: time="2025-07-12T09:34:38.880439840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 09:34:38.880755 containerd[1532]: time="2025-07-12T09:34:38.880732400Z" level=info msg="runtime interface created" Jul 12 09:34:38.880957 containerd[1532]: time="2025-07-12T09:34:38.880842640Z" level=info msg="created NRI interface" Jul 12 09:34:38.880957 containerd[1532]: time="2025-07-12T09:34:38.880864280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 09:34:38.880957 containerd[1532]: time="2025-07-12T09:34:38.880883840Z" level=info msg="Connect containerd service" Jul 12 09:34:38.880957 containerd[1532]: time="2025-07-12T09:34:38.880920000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 09:34:38.881854 
containerd[1532]: time="2025-07-12T09:34:38.881829640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 09:34:38.992483 containerd[1532]: time="2025-07-12T09:34:38.992371600Z" level=info msg="Start subscribing containerd event" Jul 12 09:34:38.992819 containerd[1532]: time="2025-07-12T09:34:38.992663640Z" level=info msg="Start recovering state" Jul 12 09:34:38.992819 containerd[1532]: time="2025-07-12T09:34:38.992768360Z" level=info msg="Start event monitor" Jul 12 09:34:38.992819 containerd[1532]: time="2025-07-12T09:34:38.992783160Z" level=info msg="Start cni network conf syncer for default" Jul 12 09:34:38.992819 containerd[1532]: time="2025-07-12T09:34:38.992795760Z" level=info msg="Start streaming server" Jul 12 09:34:38.992937 containerd[1532]: time="2025-07-12T09:34:38.992923720Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 09:34:38.992981 containerd[1532]: time="2025-07-12T09:34:38.992971680Z" level=info msg="runtime interface starting up..." Jul 12 09:34:38.993372 containerd[1532]: time="2025-07-12T09:34:38.993011280Z" level=info msg="starting plugins..." Jul 12 09:34:38.993372 containerd[1532]: time="2025-07-12T09:34:38.993033200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 09:34:38.993614 containerd[1532]: time="2025-07-12T09:34:38.993588040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 09:34:38.993791 containerd[1532]: time="2025-07-12T09:34:38.993739040Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 09:34:38.993921 containerd[1532]: time="2025-07-12T09:34:38.993909840Z" level=info msg="containerd successfully booted in 0.132536s" Jul 12 09:34:38.994955 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 12 09:34:38.999817 tar[1520]: linux-arm64/README.md Jul 12 09:34:39.020865 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 09:34:39.868932 systemd-networkd[1433]: eth0: Gained IPv6LL Jul 12 09:34:39.871407 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 09:34:39.873241 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 09:34:39.876729 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 09:34:39.879700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:34:39.890056 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 09:34:39.907571 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 09:34:39.907836 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 09:34:39.909621 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 09:34:39.919716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 09:34:40.090344 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 09:34:40.110358 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 09:34:40.113255 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 09:34:40.134460 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 09:34:40.134684 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 09:34:40.138866 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 09:34:40.158621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 09:34:40.161748 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 09:34:40.164578 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jul 12 09:34:40.166114 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 09:34:40.431183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:34:40.432793 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 09:34:40.434100 systemd[1]: Startup finished in 2.106s (kernel) + 5.260s (initrd) + 3.596s (userspace) = 10.964s. Jul 12 09:34:40.434692 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 09:34:40.812949 kubelet[1633]: E0712 09:34:40.812832 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 09:34:40.815191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 09:34:40.815318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 09:34:40.815764 systemd[1]: kubelet.service: Consumed 778ms CPU time, 256.3M memory peak. Jul 12 09:34:45.012116 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 09:34:45.013058 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:33880.service - OpenSSH per-connection server daemon (10.0.0.1:33880). Jul 12 09:34:45.077979 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 33880 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:34:45.079616 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:34:45.085348 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 09:34:45.086133 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 12 09:34:45.092079 systemd-logind[1506]: New session 1 of user core. Jul 12 09:34:45.114899 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 09:34:45.117354 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 09:34:45.134609 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 09:34:45.136416 systemd-logind[1506]: New session c1 of user core. Jul 12 09:34:45.241832 systemd[1651]: Queued start job for default target default.target. Jul 12 09:34:45.260737 systemd[1651]: Created slice app.slice - User Application Slice. Jul 12 09:34:45.260763 systemd[1651]: Reached target paths.target - Paths. Jul 12 09:34:45.260801 systemd[1651]: Reached target timers.target - Timers. Jul 12 09:34:45.261900 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 09:34:45.270992 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 09:34:45.271054 systemd[1651]: Reached target sockets.target - Sockets. Jul 12 09:34:45.271099 systemd[1651]: Reached target basic.target - Basic System. Jul 12 09:34:45.271128 systemd[1651]: Reached target default.target - Main User Target. Jul 12 09:34:45.271158 systemd[1651]: Startup finished in 129ms. Jul 12 09:34:45.271266 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 09:34:45.273598 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 09:34:45.340238 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886). Jul 12 09:34:45.405132 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:34:45.406266 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:34:45.410279 systemd-logind[1506]: New session 2 of user core. 
Jul 12 09:34:45.423055 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 09:34:45.475008 sshd[1665]: Connection closed by 10.0.0.1 port 33886
Jul 12 09:34:45.475424 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Jul 12 09:34:45.489920 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:33886.service: Deactivated successfully.
Jul 12 09:34:45.493297 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 09:34:45.494007 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
Jul 12 09:34:45.495984 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:33888.service - OpenSSH per-connection server daemon (10.0.0.1:33888).
Jul 12 09:34:45.498415 systemd-logind[1506]: Removed session 2.
Jul 12 09:34:45.543018 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 33888 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:34:45.544469 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:34:45.548167 systemd-logind[1506]: New session 3 of user core.
Jul 12 09:34:45.561963 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 09:34:45.609016 sshd[1674]: Connection closed by 10.0.0.1 port 33888
Jul 12 09:34:45.609301 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Jul 12 09:34:45.618661 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:33888.service: Deactivated successfully.
Jul 12 09:34:45.622002 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 09:34:45.622613 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
Jul 12 09:34:45.626042 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:33898.service - OpenSSH per-connection server daemon (10.0.0.1:33898).
Jul 12 09:34:45.627070 systemd-logind[1506]: Removed session 3.
Jul 12 09:34:45.675692 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 33898 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:34:45.677039 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:34:45.681359 systemd-logind[1506]: New session 4 of user core.
Jul 12 09:34:45.690990 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 09:34:45.742559 sshd[1683]: Connection closed by 10.0.0.1 port 33898
Jul 12 09:34:45.742860 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Jul 12 09:34:45.751617 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:33898.service: Deactivated successfully.
Jul 12 09:34:45.754036 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 09:34:45.754737 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
Jul 12 09:34:45.757008 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:33900.service - OpenSSH per-connection server daemon (10.0.0.1:33900).
Jul 12 09:34:45.757573 systemd-logind[1506]: Removed session 4.
Jul 12 09:34:45.808849 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 33900 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:34:45.810298 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:34:45.813894 systemd-logind[1506]: New session 5 of user core.
Jul 12 09:34:45.821020 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 09:34:45.883115 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 09:34:45.883383 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 09:34:45.899646 sudo[1693]: pam_unix(sudo:session): session closed for user root
Jul 12 09:34:45.901386 sshd[1692]: Connection closed by 10.0.0.1 port 33900
Jul 12 09:34:45.901292 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Jul 12 09:34:45.911617 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:33900.service: Deactivated successfully.
Jul 12 09:34:45.914060 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 09:34:45.914706 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit.
Jul 12 09:34:45.917850 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914).
Jul 12 09:34:45.918478 systemd-logind[1506]: Removed session 5.
Jul 12 09:34:45.972519 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:34:45.973565 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:34:45.977833 systemd-logind[1506]: New session 6 of user core.
Jul 12 09:34:45.993019 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 09:34:46.045078 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 09:34:46.045338 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 09:34:46.117861 sudo[1704]: pam_unix(sudo:session): session closed for user root
Jul 12 09:34:46.123084 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 12 09:34:46.123633 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 09:34:46.132077 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 09:34:46.164655 augenrules[1726]: No rules
Jul 12 09:34:46.166043 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 09:34:46.166246 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 09:34:46.167552 sudo[1703]: pam_unix(sudo:session): session closed for user root
Jul 12 09:34:46.168766 sshd[1702]: Connection closed by 10.0.0.1 port 33914
Jul 12 09:34:46.169131 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jul 12 09:34:46.181788 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:33914.service: Deactivated successfully.
Jul 12 09:34:46.185052 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 09:34:46.186131 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit.
Jul 12 09:34:46.189038 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:33930.service - OpenSSH per-connection server daemon (10.0.0.1:33930).
Jul 12 09:34:46.190266 systemd-logind[1506]: Removed session 6.
Jul 12 09:34:46.236962 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 33930 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:34:46.238097 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:34:46.242236 systemd-logind[1506]: New session 7 of user core.
Jul 12 09:34:46.253964 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 09:34:46.305271 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 09:34:46.305530 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 09:34:46.637998 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 09:34:46.660137 (dockerd)[1759]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 09:34:46.907273 dockerd[1759]: time="2025-07-12T09:34:46.907139121Z" level=info msg="Starting up"
Jul 12 09:34:46.908201 dockerd[1759]: time="2025-07-12T09:34:46.908178843Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 12 09:34:46.918083 dockerd[1759]: time="2025-07-12T09:34:46.918052461Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 12 09:34:46.946548 dockerd[1759]: time="2025-07-12T09:34:46.946513156Z" level=info msg="Loading containers: start."
Jul 12 09:34:46.954832 kernel: Initializing XFRM netlink socket
Jul 12 09:34:47.181395 systemd-networkd[1433]: docker0: Link UP
Jul 12 09:34:47.184336 dockerd[1759]: time="2025-07-12T09:34:47.184246531Z" level=info msg="Loading containers: done."
Jul 12 09:34:47.199782 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1335387908-merged.mount: Deactivated successfully.
Jul 12 09:34:47.200500 dockerd[1759]: time="2025-07-12T09:34:47.200451033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 09:34:47.200569 dockerd[1759]: time="2025-07-12T09:34:47.200528238Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 12 09:34:47.200626 dockerd[1759]: time="2025-07-12T09:34:47.200603961Z" level=info msg="Initializing buildkit"
Jul 12 09:34:47.224683 dockerd[1759]: time="2025-07-12T09:34:47.224636435Z" level=info msg="Completed buildkit initialization"
Jul 12 09:34:47.229422 dockerd[1759]: time="2025-07-12T09:34:47.229381703Z" level=info msg="Daemon has completed initialization"
Jul 12 09:34:47.229567 dockerd[1759]: time="2025-07-12T09:34:47.229444139Z" level=info msg="API listen on /run/docker.sock"
Jul 12 09:34:47.229618 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 09:34:47.869747 containerd[1532]: time="2025-07-12T09:34:47.869707150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 12 09:34:48.475596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734209144.mount: Deactivated successfully.
Jul 12 09:34:49.669344 containerd[1532]: time="2025-07-12T09:34:49.669286112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:49.669803 containerd[1532]: time="2025-07-12T09:34:49.669772839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 12 09:34:49.670478 containerd[1532]: time="2025-07-12T09:34:49.670439819Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:49.672653 containerd[1532]: time="2025-07-12T09:34:49.672620529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:49.674434 containerd[1532]: time="2025-07-12T09:34:49.674402675Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.804635171s"
Jul 12 09:34:49.674589 containerd[1532]: time="2025-07-12T09:34:49.674520175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 12 09:34:49.675199 containerd[1532]: time="2025-07-12T09:34:49.675170266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 12 09:34:50.790122 containerd[1532]: time="2025-07-12T09:34:50.789933466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:50.791182 containerd[1532]: time="2025-07-12T09:34:50.791147805Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 12 09:34:50.792027 containerd[1532]: time="2025-07-12T09:34:50.791984044Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:50.794640 containerd[1532]: time="2025-07-12T09:34:50.794597811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:50.795605 containerd[1532]: time="2025-07-12T09:34:50.795577559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.120278347s"
Jul 12 09:34:50.795605 containerd[1532]: time="2025-07-12T09:34:50.795606573Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 12 09:34:50.796280 containerd[1532]: time="2025-07-12T09:34:50.796065632Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 12 09:34:50.877912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 09:34:50.879430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 09:34:51.002586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 09:34:51.018069 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 09:34:51.054474 kubelet[2040]: E0712 09:34:51.054359 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 09:34:51.057667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 09:34:51.057797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 09:34:51.058266 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.9M memory peak.
Jul 12 09:34:51.973140 containerd[1532]: time="2025-07-12T09:34:51.972941667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:51.974261 containerd[1532]: time="2025-07-12T09:34:51.974192746Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 12 09:34:51.974785 containerd[1532]: time="2025-07-12T09:34:51.974744233Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:51.978086 containerd[1532]: time="2025-07-12T09:34:51.977992046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:51.978711 containerd[1532]: time="2025-07-12T09:34:51.978685796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.182590671s"
Jul 12 09:34:51.978747 containerd[1532]: time="2025-07-12T09:34:51.978715129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 12 09:34:51.979295 containerd[1532]: time="2025-07-12T09:34:51.979265455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 12 09:34:52.905494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233664080.mount: Deactivated successfully.
Jul 12 09:34:53.133921 containerd[1532]: time="2025-07-12T09:34:53.133866288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:53.134820 containerd[1532]: time="2025-07-12T09:34:53.134766242Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 12 09:34:53.135595 containerd[1532]: time="2025-07-12T09:34:53.135562275Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:53.137125 containerd[1532]: time="2025-07-12T09:34:53.137092676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:53.137596 containerd[1532]: time="2025-07-12T09:34:53.137572465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.158251705s"
Jul 12 09:34:53.137652 containerd[1532]: time="2025-07-12T09:34:53.137598955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 12 09:34:53.138280 containerd[1532]: time="2025-07-12T09:34:53.138145770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 09:34:53.653133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769895778.mount: Deactivated successfully.
Jul 12 09:34:54.534101 containerd[1532]: time="2025-07-12T09:34:54.534025443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:54.534567 containerd[1532]: time="2025-07-12T09:34:54.534531670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 12 09:34:54.535445 containerd[1532]: time="2025-07-12T09:34:54.535387545Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:54.538038 containerd[1532]: time="2025-07-12T09:34:54.537986583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:54.538860 containerd[1532]: time="2025-07-12T09:34:54.538828733Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.400653552s"
Jul 12 09:34:54.538983 containerd[1532]: time="2025-07-12T09:34:54.538946497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 09:34:54.539570 containerd[1532]: time="2025-07-12T09:34:54.539469650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 09:34:54.954630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631702791.mount: Deactivated successfully.
Jul 12 09:34:54.957884 containerd[1532]: time="2025-07-12T09:34:54.957822610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 09:34:54.958335 containerd[1532]: time="2025-07-12T09:34:54.958299946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 12 09:34:54.959146 containerd[1532]: time="2025-07-12T09:34:54.959111085Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 09:34:54.960951 containerd[1532]: time="2025-07-12T09:34:54.960918752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 09:34:54.961708 containerd[1532]: time="2025-07-12T09:34:54.961673270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 422.131234ms"
Jul 12 09:34:54.961744 containerd[1532]: time="2025-07-12T09:34:54.961709243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 09:34:54.962157 containerd[1532]: time="2025-07-12T09:34:54.962113472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 12 09:34:55.518501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297885959.mount: Deactivated successfully.
Jul 12 09:34:57.260686 containerd[1532]: time="2025-07-12T09:34:57.260637754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:57.261479 containerd[1532]: time="2025-07-12T09:34:57.261450681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 12 09:34:57.262568 containerd[1532]: time="2025-07-12T09:34:57.262514244Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:57.265577 containerd[1532]: time="2025-07-12T09:34:57.265527479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 09:34:57.266997 containerd[1532]: time="2025-07-12T09:34:57.266934867Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.304787183s"
Jul 12 09:34:57.266997 containerd[1532]: time="2025-07-12T09:34:57.266972918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 12 09:35:01.127909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 12 09:35:01.129273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 09:35:01.288959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 09:35:01.300073 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 09:35:01.335395 kubelet[2202]: E0712 09:35:01.335334 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 09:35:01.337800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 09:35:01.337959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 09:35:01.338258 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.6M memory peak.
Jul 12 09:35:01.931568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 09:35:01.931706 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.6M memory peak.
Jul 12 09:35:01.933507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 09:35:01.955298 systemd[1]: Reload requested from client PID 2218 ('systemctl') (unit session-7.scope)...
Jul 12 09:35:01.955315 systemd[1]: Reloading...
Jul 12 09:35:02.026848 zram_generator::config[2261]: No configuration found.
Jul 12 09:35:02.211341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 09:35:02.297679 systemd[1]: Reloading finished in 342 ms.
Jul 12 09:35:02.357295 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 12 09:35:02.357376 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 12 09:35:02.357605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 09:35:02.357654 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak.
Jul 12 09:35:02.359250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 09:35:02.482093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 09:35:02.485780 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 09:35:02.521651 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 09:35:02.521651 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 09:35:02.521651 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 09:35:02.522003 kubelet[2306]: I0712 09:35:02.521687 2306 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 09:35:03.013774 kubelet[2306]: I0712 09:35:03.013728 2306 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 09:35:03.013774 kubelet[2306]: I0712 09:35:03.013763 2306 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 09:35:03.014095 kubelet[2306]: I0712 09:35:03.014068 2306 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 09:35:03.058993 kubelet[2306]: E0712 09:35:03.058956 2306 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Jul 12 09:35:03.060569 kubelet[2306]: I0712 09:35:03.060459 2306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 09:35:03.066495 kubelet[2306]: I0712 09:35:03.066471 2306 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 12 09:35:03.071770 kubelet[2306]: I0712 09:35:03.071509 2306 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 09:35:03.072595 kubelet[2306]: I0712 09:35:03.072375 2306 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 09:35:03.072777 kubelet[2306]: I0712 09:35:03.072596 2306 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 09:35:03.072928 kubelet[2306]: I0712 09:35:03.072914 2306 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 09:35:03.072928 kubelet[2306]: I0712 09:35:03.072928 2306 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 09:35:03.073197 kubelet[2306]: I0712 09:35:03.073173 2306 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 09:35:03.076132 kubelet[2306]: I0712 09:35:03.076083 2306 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 09:35:03.076132 kubelet[2306]: I0712 09:35:03.076109 2306 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 09:35:03.077275 kubelet[2306]: I0712 09:35:03.077166 2306 kubelet.go:352] "Adding apiserver pod source"
Jul 12 09:35:03.077275 kubelet[2306]: I0712 09:35:03.077189 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 09:35:03.085206 kubelet[2306]: W0712 09:35:03.085148 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Jul 12 09:35:03.085306 kubelet[2306]: E0712 09:35:03.085227 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Jul 12 09:35:03.085306 kubelet[2306]: W0712 09:35:03.085212 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Jul 12 09:35:03.085306 kubelet[2306]: E0712 09:35:03.085286 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Jul 12 09:35:03.085306 kubelet[2306]: I0712 09:35:03.085167 2306 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 12 09:35:03.092063 kubelet[2306]: I0712 09:35:03.089981 2306 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 09:35:03.092063 kubelet[2306]: W0712 09:35:03.090168 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 09:35:03.092063 kubelet[2306]: I0712 09:35:03.091601 2306 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 09:35:03.092063 kubelet[2306]: I0712 09:35:03.091629 2306 server.go:1287] "Started kubelet"
Jul 12 09:35:03.092063 kubelet[2306]: I0712 09:35:03.091861 2306 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 09:35:03.092881 kubelet[2306]: I0712 09:35:03.092660 2306 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 09:35:03.094948 kubelet[2306]: I0712 09:35:03.094899 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 09:35:03.095854 kubelet[2306]: I0712 09:35:03.095800 2306 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 09:35:03.095962 kubelet[2306]: E0712 09:35:03.095704 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851774d1c1be63a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 09:35:03.091611194 +0000 UTC m=+0.602591909,LastTimestamp:2025-07-12 09:35:03.091611194 +0000 UTC m=+0.602591909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 12 09:35:03.096447 kubelet[2306]: I0712 09:35:03.096425 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 09:35:03.096556 kubelet[2306]: I0712 09:35:03.096491 2306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 09:35:03.097780 kubelet[2306]: I0712 09:35:03.097601 2306 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 09:35:03.097780 kubelet[2306]: I0712 09:35:03.097703 2306 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 09:35:03.097780 kubelet[2306]: I0712 09:35:03.097754 2306 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 09:35:03.097901 kubelet[2306]: E0712 09:35:03.097859 2306 kubelet.go:1555] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 09:35:03.097967 kubelet[2306]: I0712 09:35:03.097938 2306 factory.go:221] Registration of the systemd container factory successfully Jul 12 09:35:03.098059 kubelet[2306]: I0712 09:35:03.098035 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 09:35:03.098127 kubelet[2306]: W0712 09:35:03.098076 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jul 12 09:35:03.098162 kubelet[2306]: E0712 09:35:03.098135 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jul 12 09:35:03.098162 kubelet[2306]: E0712 09:35:03.098146 2306 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:35:03.098431 kubelet[2306]: E0712 09:35:03.098197 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms" Jul 12 09:35:03.099201 kubelet[2306]: I0712 09:35:03.099161 2306 factory.go:221] Registration of the containerd container factory successfully Jul 12 09:35:03.108066 kubelet[2306]: I0712 09:35:03.108041 2306 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 09:35:03.108385 kubelet[2306]: I0712 09:35:03.108165 2306 
cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 09:35:03.108385 kubelet[2306]: I0712 09:35:03.108186 2306 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:35:03.111404 kubelet[2306]: I0712 09:35:03.111343 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 09:35:03.112408 kubelet[2306]: I0712 09:35:03.112362 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 09:35:03.112408 kubelet[2306]: I0712 09:35:03.112387 2306 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 09:35:03.112408 kubelet[2306]: I0712 09:35:03.112408 2306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 09:35:03.112696 kubelet[2306]: I0712 09:35:03.112415 2306 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 09:35:03.112696 kubelet[2306]: E0712 09:35:03.112452 2306 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 09:35:03.181231 kubelet[2306]: I0712 09:35:03.181197 2306 policy_none.go:49] "None policy: Start" Jul 12 09:35:03.181652 kubelet[2306]: I0712 09:35:03.181378 2306 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 09:35:03.181652 kubelet[2306]: I0712 09:35:03.181399 2306 state_mem.go:35] "Initializing new in-memory state store" Jul 12 09:35:03.181733 kubelet[2306]: W0712 09:35:03.181674 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jul 12 09:35:03.181763 kubelet[2306]: E0712 09:35:03.181740 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jul 12 09:35:03.187269 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 09:35:03.198839 kubelet[2306]: E0712 09:35:03.198796 2306 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:35:03.202610 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 09:35:03.206075 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 09:35:03.213364 kubelet[2306]: E0712 09:35:03.213320 2306 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 09:35:03.215030 kubelet[2306]: I0712 09:35:03.214823 2306 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 09:35:03.215103 kubelet[2306]: I0712 09:35:03.215063 2306 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 09:35:03.215148 kubelet[2306]: I0712 09:35:03.215107 2306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 09:35:03.215480 kubelet[2306]: I0712 09:35:03.215457 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 09:35:03.217894 kubelet[2306]: E0712 09:35:03.217869 2306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 09:35:03.218035 kubelet[2306]: E0712 09:35:03.218014 2306 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 09:35:03.299036 kubelet[2306]: E0712 09:35:03.298921 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Jul 12 09:35:03.317176 kubelet[2306]: I0712 09:35:03.317149 2306 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:35:03.317719 kubelet[2306]: E0712 09:35:03.317694 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jul 12 09:35:03.424754 systemd[1]: Created slice kubepods-burstable-pod45a74d781ebc3e1a0681825b1280ec42.slice - libcontainer container kubepods-burstable-pod45a74d781ebc3e1a0681825b1280ec42.slice. Jul 12 09:35:03.435529 kubelet[2306]: E0712 09:35:03.435496 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:03.438780 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 12 09:35:03.440678 kubelet[2306]: E0712 09:35:03.440659 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:03.443270 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 12 09:35:03.444952 kubelet[2306]: E0712 09:35:03.444927 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:03.499367 kubelet[2306]: I0712 09:35:03.499326 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:03.499367 kubelet[2306]: I0712 09:35:03.499367 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:03.499456 kubelet[2306]: I0712 09:35:03.499389 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:03.499456 kubelet[2306]: I0712 09:35:03.499408 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:03.499456 kubelet[2306]: I0712 09:35:03.499423 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:03.499456 kubelet[2306]: I0712 09:35:03.499439 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:03.499456 kubelet[2306]: I0712 09:35:03.499455 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:03.499559 kubelet[2306]: I0712 09:35:03.499471 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:03.499559 kubelet[2306]: I0712 09:35:03.499487 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:03.519455 kubelet[2306]: I0712 09:35:03.519430 2306 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:35:03.519881 kubelet[2306]: E0712 
09:35:03.519854 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jul 12 09:35:03.700323 kubelet[2306]: E0712 09:35:03.700277 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms" Jul 12 09:35:03.736587 kubelet[2306]: E0712 09:35:03.736557 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.737393 containerd[1532]: time="2025-07-12T09:35:03.737129041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45a74d781ebc3e1a0681825b1280ec42,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:03.741361 kubelet[2306]: E0712 09:35:03.741339 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.741705 containerd[1532]: time="2025-07-12T09:35:03.741671578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:03.745977 kubelet[2306]: E0712 09:35:03.745934 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.746507 containerd[1532]: time="2025-07-12T09:35:03.746376668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:03.757585 containerd[1532]: 
time="2025-07-12T09:35:03.757544811Z" level=info msg="connecting to shim 1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76" address="unix:///run/containerd/s/388a4f08f5e6de32119c7ac114e8a08939779dad855852d07d8e690eb62086c3" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:03.768286 containerd[1532]: time="2025-07-12T09:35:03.768248899Z" level=info msg="connecting to shim 6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d" address="unix:///run/containerd/s/93630c43856cc41f5743cb24b0afa7530e63557621ea55816e56f2610ff0563d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:03.780516 containerd[1532]: time="2025-07-12T09:35:03.779981078Z" level=info msg="connecting to shim a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5" address="unix:///run/containerd/s/433d1951e12688b7c8477a348a13bdb031f73da778c1da42da299a23962feb9a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:03.789113 systemd[1]: Started cri-containerd-1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76.scope - libcontainer container 1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76. Jul 12 09:35:03.796497 systemd[1]: Started cri-containerd-6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d.scope - libcontainer container 6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d. Jul 12 09:35:03.815189 systemd[1]: Started cri-containerd-a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5.scope - libcontainer container a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5. 
Jul 12 09:35:03.833712 containerd[1532]: time="2025-07-12T09:35:03.833670071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45a74d781ebc3e1a0681825b1280ec42,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76\"" Jul 12 09:35:03.835725 kubelet[2306]: E0712 09:35:03.835683 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.838185 containerd[1532]: time="2025-07-12T09:35:03.838113467Z" level=info msg="CreateContainer within sandbox \"1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 09:35:03.848451 containerd[1532]: time="2025-07-12T09:35:03.848420513Z" level=info msg="Container 157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:03.849876 containerd[1532]: time="2025-07-12T09:35:03.849841286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d\"" Jul 12 09:35:03.852241 kubelet[2306]: E0712 09:35:03.852122 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.853961 containerd[1532]: time="2025-07-12T09:35:03.853924208Z" level=info msg="CreateContainer within sandbox \"6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 09:35:03.857438 containerd[1532]: time="2025-07-12T09:35:03.856981558Z" level=info msg="CreateContainer within sandbox 
\"1fdd58aeca34c02c838fd4a50bbe40e9d8a0fc4acf36edf5e247c5cd3c492b76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5\"" Jul 12 09:35:03.858081 containerd[1532]: time="2025-07-12T09:35:03.858058460Z" level=info msg="StartContainer for \"157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5\"" Jul 12 09:35:03.859160 containerd[1532]: time="2025-07-12T09:35:03.859116878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5\"" Jul 12 09:35:03.859512 containerd[1532]: time="2025-07-12T09:35:03.859200216Z" level=info msg="connecting to shim 157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5" address="unix:///run/containerd/s/388a4f08f5e6de32119c7ac114e8a08939779dad855852d07d8e690eb62086c3" protocol=ttrpc version=3 Jul 12 09:35:03.859790 kubelet[2306]: E0712 09:35:03.859757 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:03.861470 containerd[1532]: time="2025-07-12T09:35:03.861434276Z" level=info msg="CreateContainer within sandbox \"a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 09:35:03.861828 containerd[1532]: time="2025-07-12T09:35:03.861748341Z" level=info msg="Container 52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:03.870519 containerd[1532]: time="2025-07-12T09:35:03.870479102Z" level=info msg="CreateContainer within sandbox \"6cfad7d10e2ce1d208f60180b0d80785d6e6538d44c970a7bfb5f8cc04a7de6d\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90\"" Jul 12 09:35:03.871346 containerd[1532]: time="2025-07-12T09:35:03.871169484Z" level=info msg="StartContainer for \"52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90\"" Jul 12 09:35:03.872014 containerd[1532]: time="2025-07-12T09:35:03.871974290Z" level=info msg="Container 8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:03.872278 containerd[1532]: time="2025-07-12T09:35:03.872160769Z" level=info msg="connecting to shim 52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90" address="unix:///run/containerd/s/93630c43856cc41f5743cb24b0afa7530e63557621ea55816e56f2610ff0563d" protocol=ttrpc version=3 Jul 12 09:35:03.876981 systemd[1]: Started cri-containerd-157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5.scope - libcontainer container 157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5. 
Jul 12 09:35:03.878427 containerd[1532]: time="2025-07-12T09:35:03.878375410Z" level=info msg="CreateContainer within sandbox \"a19a52e19f271236b1b027307f6dc942cfe7f5ca6a1747fb77a5ffe289257ba5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9\"" Jul 12 09:35:03.879056 containerd[1532]: time="2025-07-12T09:35:03.879027505Z" level=info msg="StartContainer for \"8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9\"" Jul 12 09:35:03.880362 containerd[1532]: time="2025-07-12T09:35:03.880292406Z" level=info msg="connecting to shim 8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9" address="unix:///run/containerd/s/433d1951e12688b7c8477a348a13bdb031f73da778c1da42da299a23962feb9a" protocol=ttrpc version=3 Jul 12 09:35:03.891957 systemd[1]: Started cri-containerd-52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90.scope - libcontainer container 52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90. Jul 12 09:35:03.896934 systemd[1]: Started cri-containerd-8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9.scope - libcontainer container 8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9. 
Jul 12 09:35:03.921746 kubelet[2306]: I0712 09:35:03.921713 2306 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:35:03.922158 kubelet[2306]: E0712 09:35:03.922118 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jul 12 09:35:03.941497 containerd[1532]: time="2025-07-12T09:35:03.941420612Z" level=info msg="StartContainer for \"157c827ec415092b5d36e9d03399d3a16f5ca38035ccff65b1b1e54d1dd5afb5\" returns successfully" Jul 12 09:35:03.957247 containerd[1532]: time="2025-07-12T09:35:03.957128252Z" level=info msg="StartContainer for \"8a24c821e34edb19aa8a8c9988b51cf7b9317b58f815ad4311ecab59f818f3e9\" returns successfully" Jul 12 09:35:03.958452 containerd[1532]: time="2025-07-12T09:35:03.958350464Z" level=info msg="StartContainer for \"52d91fcc01d0f9b2fd9555239facaa79fbd9776a110cc9c667963c60f3160a90\" returns successfully" Jul 12 09:35:04.123294 kubelet[2306]: E0712 09:35:04.123239 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:04.123414 kubelet[2306]: E0712 09:35:04.123372 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:04.124243 kubelet[2306]: E0712 09:35:04.124220 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:04.124389 kubelet[2306]: E0712 09:35:04.124312 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:04.128005 kubelet[2306]: E0712 09:35:04.127907 2306 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:04.128005 kubelet[2306]: E0712 09:35:04.128008 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:04.724524 kubelet[2306]: I0712 09:35:04.724482 2306 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:35:05.129427 kubelet[2306]: E0712 09:35:05.129397 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:05.129538 kubelet[2306]: E0712 09:35:05.129511 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:05.129732 kubelet[2306]: E0712 09:35:05.129713 2306 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:35:05.129855 kubelet[2306]: E0712 09:35:05.129839 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:05.434866 kubelet[2306]: E0712 09:35:05.434513 2306 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 09:35:05.484723 kubelet[2306]: I0712 09:35:05.484682 2306 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 09:35:05.498402 kubelet[2306]: I0712 09:35:05.498368 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:05.511429 kubelet[2306]: E0712 09:35:05.511384 2306 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:05.511429 kubelet[2306]: I0712 09:35:05.511409 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:05.514646 kubelet[2306]: E0712 09:35:05.514600 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:05.514646 kubelet[2306]: I0712 09:35:05.514628 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:05.520926 kubelet[2306]: E0712 09:35:05.520890 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:06.086230 kubelet[2306]: I0712 09:35:06.086188 2306 apiserver.go:52] "Watching apiserver" Jul 12 09:35:06.098654 kubelet[2306]: I0712 09:35:06.098624 2306 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 09:35:06.129433 kubelet[2306]: I0712 09:35:06.129406 2306 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:06.131273 kubelet[2306]: E0712 09:35:06.131233 2306 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:06.131383 kubelet[2306]: E0712 09:35:06.131359 2306 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:07.451929 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-7.scope)... Jul 12 09:35:07.451942 systemd[1]: Reloading... Jul 12 09:35:07.516839 zram_generator::config[2627]: No configuration found. Jul 12 09:35:07.580109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 09:35:07.674008 systemd[1]: Reloading finished in 221 ms. Jul 12 09:35:07.706945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:35:07.716921 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 09:35:07.717162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:35:07.717214 systemd[1]: kubelet.service: Consumed 990ms CPU time, 128.8M memory peak. Jul 12 09:35:07.718668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:35:07.833645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:35:07.837355 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 09:35:07.874355 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 09:35:07.874355 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 09:35:07.874355 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 09:35:07.874685 kubelet[2669]: I0712 09:35:07.874379 2669 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 09:35:07.881242 kubelet[2669]: I0712 09:35:07.881201 2669 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 09:35:07.881242 kubelet[2669]: I0712 09:35:07.881232 2669 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 09:35:07.881487 kubelet[2669]: I0712 09:35:07.881462 2669 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 09:35:07.882668 kubelet[2669]: I0712 09:35:07.882646 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 09:35:07.886432 kubelet[2669]: I0712 09:35:07.886398 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 09:35:07.889635 kubelet[2669]: I0712 09:35:07.889591 2669 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 09:35:07.892078 kubelet[2669]: I0712 09:35:07.892058 2669 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 09:35:07.892282 kubelet[2669]: I0712 09:35:07.892259 2669 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 09:35:07.892439 kubelet[2669]: I0712 09:35:07.892284 2669 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 09:35:07.892515 kubelet[2669]: I0712 09:35:07.892450 2669 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 12 09:35:07.892515 kubelet[2669]: I0712 09:35:07.892458 2669 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 09:35:07.892515 kubelet[2669]: I0712 09:35:07.892503 2669 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:35:07.892632 kubelet[2669]: I0712 09:35:07.892620 2669 kubelet.go:446] "Attempting to sync node with API server" Jul 12 09:35:07.892660 kubelet[2669]: I0712 09:35:07.892640 2669 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 09:35:07.892682 kubelet[2669]: I0712 09:35:07.892661 2669 kubelet.go:352] "Adding apiserver pod source" Jul 12 09:35:07.892682 kubelet[2669]: I0712 09:35:07.892672 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 09:35:07.893162 kubelet[2669]: I0712 09:35:07.893132 2669 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 09:35:07.893761 kubelet[2669]: I0712 09:35:07.893742 2669 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 09:35:07.894201 kubelet[2669]: I0712 09:35:07.894172 2669 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 09:35:07.894201 kubelet[2669]: I0712 09:35:07.894204 2669 server.go:1287] "Started kubelet" Jul 12 09:35:07.894739 kubelet[2669]: I0712 09:35:07.894699 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 09:35:07.895334 kubelet[2669]: I0712 09:35:07.895302 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 09:35:07.895474 kubelet[2669]: I0712 09:35:07.895456 2669 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 09:35:07.895643 kubelet[2669]: I0712 09:35:07.895603 2669 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 09:35:07.896395 kubelet[2669]: I0712 09:35:07.896364 2669 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 09:35:07.896982 kubelet[2669]: I0712 09:35:07.896958 2669 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 09:35:07.897218 kubelet[2669]: E0712 09:35:07.897190 2669 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:35:07.897564 kubelet[2669]: I0712 09:35:07.897533 2669 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 09:35:07.897699 kubelet[2669]: I0712 09:35:07.897683 2669 reconciler.go:26] "Reconciler: start to sync state" Jul 12 09:35:07.907814 kubelet[2669]: I0712 09:35:07.906652 2669 factory.go:221] Registration of the systemd container factory successfully Jul 12 09:35:07.907814 kubelet[2669]: I0712 09:35:07.906744 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 09:35:07.910318 kubelet[2669]: I0712 09:35:07.910237 2669 factory.go:221] Registration of the containerd container factory successfully Jul 12 09:35:07.916537 kubelet[2669]: I0712 09:35:07.916460 2669 server.go:479] "Adding debug handlers to kubelet server" Jul 12 09:35:07.920422 kubelet[2669]: I0712 09:35:07.920260 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 09:35:07.921545 kubelet[2669]: I0712 09:35:07.921526 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 09:35:07.921902 kubelet[2669]: I0712 09:35:07.921614 2669 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 09:35:07.921902 kubelet[2669]: I0712 09:35:07.921634 2669 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 09:35:07.921902 kubelet[2669]: I0712 09:35:07.921640 2669 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 09:35:07.921902 kubelet[2669]: E0712 09:35:07.921828 2669 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 09:35:07.922392 kubelet[2669]: E0712 09:35:07.922220 2669 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 09:35:07.946186 kubelet[2669]: I0712 09:35:07.946165 2669 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 09:35:07.946186 kubelet[2669]: I0712 09:35:07.946182 2669 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 09:35:07.946302 kubelet[2669]: I0712 09:35:07.946199 2669 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:35:07.946351 kubelet[2669]: I0712 09:35:07.946335 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 09:35:07.946379 kubelet[2669]: I0712 09:35:07.946350 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 09:35:07.946379 kubelet[2669]: I0712 09:35:07.946368 2669 policy_none.go:49] "None policy: Start" Jul 12 09:35:07.946379 kubelet[2669]: I0712 09:35:07.946376 2669 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 09:35:07.946445 kubelet[2669]: I0712 09:35:07.946384 2669 state_mem.go:35] "Initializing new in-memory state store" Jul 12 09:35:07.946477 kubelet[2669]: I0712 09:35:07.946464 2669 state_mem.go:75] "Updated machine memory state" Jul 12 09:35:07.949851 kubelet[2669]: I0712 09:35:07.949832 2669 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 09:35:07.950096 kubelet[2669]: I0712 09:35:07.950081 2669 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 09:35:07.950195 kubelet[2669]: 
I0712 09:35:07.950167 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 09:35:07.950392 kubelet[2669]: I0712 09:35:07.950378 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 09:35:07.952389 kubelet[2669]: E0712 09:35:07.952224 2669 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 09:35:08.023691 kubelet[2669]: I0712 09:35:08.023343 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:08.023691 kubelet[2669]: I0712 09:35:08.023440 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.023691 kubelet[2669]: I0712 09:35:08.023457 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:08.055499 kubelet[2669]: I0712 09:35:08.055456 2669 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:35:08.061307 kubelet[2669]: I0712 09:35:08.061283 2669 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 09:35:08.061387 kubelet[2669]: I0712 09:35:08.061350 2669 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 09:35:08.099148 kubelet[2669]: I0712 09:35:08.099092 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.099252 kubelet[2669]: I0712 09:35:08.099167 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.099252 kubelet[2669]: I0712 09:35:08.099208 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.099252 kubelet[2669]: I0712 09:35:08.099234 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.099373 kubelet[2669]: I0712 09:35:08.099253 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:35:08.099373 kubelet[2669]: I0712 09:35:08.099274 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:08.099373 kubelet[2669]: I0712 09:35:08.099307 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:08.099373 kubelet[2669]: I0712 09:35:08.099334 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:08.099373 kubelet[2669]: I0712 09:35:08.099357 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a74d781ebc3e1a0681825b1280ec42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45a74d781ebc3e1a0681825b1280ec42\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:35:08.334578 kubelet[2669]: E0712 09:35:08.334491 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.334755 kubelet[2669]: E0712 09:35:08.334716 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.335553 kubelet[2669]: E0712 09:35:08.335520 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.893661 kubelet[2669]: I0712 09:35:08.893616 2669 apiserver.go:52] "Watching apiserver" Jul 12 09:35:08.898510 kubelet[2669]: I0712 09:35:08.898458 2669 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 
09:35:08.936938 kubelet[2669]: E0712 09:35:08.936541 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.937134 kubelet[2669]: I0712 09:35:08.937104 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:08.937373 kubelet[2669]: E0712 09:35:08.937342 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.941947 kubelet[2669]: E0712 09:35:08.941918 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 09:35:08.942651 kubelet[2669]: E0712 09:35:08.942542 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:08.967659 kubelet[2669]: I0712 09:35:08.967573 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.967557122 podStartE2EDuration="967.557122ms" podCreationTimestamp="2025-07-12 09:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:08.958139515 +0000 UTC m=+1.117389114" watchObservedRunningTime="2025-07-12 09:35:08.967557122 +0000 UTC m=+1.126806681" Jul 12 09:35:09.008844 kubelet[2669]: I0712 09:35:09.008666 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.0086499309999999 podStartE2EDuration="1.008649931s" podCreationTimestamp="2025-07-12 09:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:08.967717506 +0000 UTC m=+1.126967065" watchObservedRunningTime="2025-07-12 09:35:09.008649931 +0000 UTC m=+1.167899490" Jul 12 09:35:09.027886 kubelet[2669]: I0712 09:35:09.027625 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.027607946 podStartE2EDuration="1.027607946s" podCreationTimestamp="2025-07-12 09:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:09.008643171 +0000 UTC m=+1.167892770" watchObservedRunningTime="2025-07-12 09:35:09.027607946 +0000 UTC m=+1.186857505" Jul 12 09:35:09.937864 kubelet[2669]: E0712 09:35:09.937753 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:09.938237 kubelet[2669]: E0712 09:35:09.937890 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:10.938998 kubelet[2669]: E0712 09:35:10.938970 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:13.784983 kubelet[2669]: I0712 09:35:13.784938 2669 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 09:35:13.785725 containerd[1532]: time="2025-07-12T09:35:13.785684046Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 09:35:13.786096 kubelet[2669]: I0712 09:35:13.786074 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 09:35:14.714739 systemd[1]: Created slice kubepods-besteffort-pod786300c7_e7ba_4598_adf4_9da86485bbc5.slice - libcontainer container kubepods-besteffort-pod786300c7_e7ba_4598_adf4_9da86485bbc5.slice. Jul 12 09:35:14.744109 kubelet[2669]: I0712 09:35:14.744057 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/786300c7-e7ba-4598-adf4-9da86485bbc5-kube-proxy\") pod \"kube-proxy-tr6xs\" (UID: \"786300c7-e7ba-4598-adf4-9da86485bbc5\") " pod="kube-system/kube-proxy-tr6xs" Jul 12 09:35:14.744294 kubelet[2669]: I0712 09:35:14.744236 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/786300c7-e7ba-4598-adf4-9da86485bbc5-lib-modules\") pod \"kube-proxy-tr6xs\" (UID: \"786300c7-e7ba-4598-adf4-9da86485bbc5\") " pod="kube-system/kube-proxy-tr6xs" Jul 12 09:35:14.744294 kubelet[2669]: I0712 09:35:14.744274 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfk7\" (UniqueName: \"kubernetes.io/projected/786300c7-e7ba-4598-adf4-9da86485bbc5-kube-api-access-ndfk7\") pod \"kube-proxy-tr6xs\" (UID: \"786300c7-e7ba-4598-adf4-9da86485bbc5\") " pod="kube-system/kube-proxy-tr6xs" Jul 12 09:35:14.744463 kubelet[2669]: I0712 09:35:14.744412 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/786300c7-e7ba-4598-adf4-9da86485bbc5-xtables-lock\") pod \"kube-proxy-tr6xs\" (UID: \"786300c7-e7ba-4598-adf4-9da86485bbc5\") " pod="kube-system/kube-proxy-tr6xs" Jul 12 09:35:14.926543 systemd[1]: Created slice 
kubepods-besteffort-pod7e5995dc_473a_4a84_8bd3_a7fb3bce76a1.slice - libcontainer container kubepods-besteffort-pod7e5995dc_473a_4a84_8bd3_a7fb3bce76a1.slice. Jul 12 09:35:14.946324 kubelet[2669]: I0712 09:35:14.946236 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvslh\" (UniqueName: \"kubernetes.io/projected/7e5995dc-473a-4a84-8bd3-a7fb3bce76a1-kube-api-access-gvslh\") pod \"tigera-operator-747864d56d-srmwk\" (UID: \"7e5995dc-473a-4a84-8bd3-a7fb3bce76a1\") " pod="tigera-operator/tigera-operator-747864d56d-srmwk" Jul 12 09:35:14.946324 kubelet[2669]: I0712 09:35:14.946273 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e5995dc-473a-4a84-8bd3-a7fb3bce76a1-var-lib-calico\") pod \"tigera-operator-747864d56d-srmwk\" (UID: \"7e5995dc-473a-4a84-8bd3-a7fb3bce76a1\") " pod="tigera-operator/tigera-operator-747864d56d-srmwk" Jul 12 09:35:14.959038 kubelet[2669]: E0712 09:35:14.959014 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.034990 kubelet[2669]: E0712 09:35:15.034871 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.035680 containerd[1532]: time="2025-07-12T09:35:15.035638339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tr6xs,Uid:786300c7-e7ba-4598-adf4-9da86485bbc5,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:15.053428 containerd[1532]: time="2025-07-12T09:35:15.052867737Z" level=info msg="connecting to shim 71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf" address="unix:///run/containerd/s/c4bdd6b3c64e481e5081e0838eb25762d4fccc2913d84b82387960b097bf6658" 
namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:15.076993 systemd[1]: Started cri-containerd-71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf.scope - libcontainer container 71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf. Jul 12 09:35:15.096830 containerd[1532]: time="2025-07-12T09:35:15.096773511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tr6xs,Uid:786300c7-e7ba-4598-adf4-9da86485bbc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf\"" Jul 12 09:35:15.097522 kubelet[2669]: E0712 09:35:15.097498 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.101059 containerd[1532]: time="2025-07-12T09:35:15.100974550Z" level=info msg="CreateContainer within sandbox \"71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 09:35:15.111977 containerd[1532]: time="2025-07-12T09:35:15.111937073Z" level=info msg="Container c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:15.118265 containerd[1532]: time="2025-07-12T09:35:15.118229111Z" level=info msg="CreateContainer within sandbox \"71462c55034dd5f1a57ab430888ba3899d60452a0194eed49bf8d3dbe4ab5fbf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729\"" Jul 12 09:35:15.119042 containerd[1532]: time="2025-07-12T09:35:15.118964941Z" level=info msg="StartContainer for \"c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729\"" Jul 12 09:35:15.120338 containerd[1532]: time="2025-07-12T09:35:15.120310709Z" level=info msg="connecting to shim c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729" 
address="unix:///run/containerd/s/c4bdd6b3c64e481e5081e0838eb25762d4fccc2913d84b82387960b097bf6658" protocol=ttrpc version=3 Jul 12 09:35:15.146520 systemd[1]: Started cri-containerd-c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729.scope - libcontainer container c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729. Jul 12 09:35:15.177433 containerd[1532]: time="2025-07-12T09:35:15.177397576Z" level=info msg="StartContainer for \"c7bfd80601f7072778c157dc885c067ad1703dd4f8da40464caf6b0efce62729\" returns successfully" Jul 12 09:35:15.230439 containerd[1532]: time="2025-07-12T09:35:15.230403376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-srmwk,Uid:7e5995dc-473a-4a84-8bd3-a7fb3bce76a1,Namespace:tigera-operator,Attempt:0,}" Jul 12 09:35:15.246784 containerd[1532]: time="2025-07-12T09:35:15.246688564Z" level=info msg="connecting to shim c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77" address="unix:///run/containerd/s/1899737c261d54f633a5262ae8b6657ee773c3f0e2e00d3c720d82f92f856bdc" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:15.275969 systemd[1]: Started cri-containerd-c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77.scope - libcontainer container c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77. 
Jul 12 09:35:15.309751 containerd[1532]: time="2025-07-12T09:35:15.309525578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-srmwk,Uid:7e5995dc-473a-4a84-8bd3-a7fb3bce76a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77\"" Jul 12 09:35:15.311302 containerd[1532]: time="2025-07-12T09:35:15.311273224Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 09:35:15.421558 kubelet[2669]: E0712 09:35:15.421527 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.949792 kubelet[2669]: E0712 09:35:15.949757 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.950122 kubelet[2669]: E0712 09:35:15.949872 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.950122 kubelet[2669]: E0712 09:35:15.949895 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:15.966225 kubelet[2669]: I0712 09:35:15.966156 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tr6xs" podStartSLOduration=1.966140765 podStartE2EDuration="1.966140765s" podCreationTimestamp="2025-07-12 09:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:15.965626756 +0000 UTC m=+8.124876315" watchObservedRunningTime="2025-07-12 09:35:15.966140765 +0000 UTC m=+8.125390324" Jul 12 
09:35:16.462472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384377569.mount: Deactivated successfully. Jul 12 09:35:16.743914 containerd[1532]: time="2025-07-12T09:35:16.743668913Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:16.744652 containerd[1532]: time="2025-07-12T09:35:16.744421300Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 09:35:16.745256 containerd[1532]: time="2025-07-12T09:35:16.745210411Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:16.747723 containerd[1532]: time="2025-07-12T09:35:16.747686191Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:16.748312 containerd[1532]: time="2025-07-12T09:35:16.748287165Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.436980217s" Jul 12 09:35:16.748394 containerd[1532]: time="2025-07-12T09:35:16.748380453Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 09:35:16.755134 containerd[1532]: time="2025-07-12T09:35:16.754982762Z" level=info msg="CreateContainer within sandbox \"c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 
09:35:16.761826 containerd[1532]: time="2025-07-12T09:35:16.761167753Z" level=info msg="Container 34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:16.766695 containerd[1532]: time="2025-07-12T09:35:16.766657762Z" level=info msg="CreateContainer within sandbox \"c65093500c3834b113f5c96f6a9f0cfb903709ee21a149c1eb7f6f8410155c77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3\"" Jul 12 09:35:16.767962 containerd[1532]: time="2025-07-12T09:35:16.767816305Z" level=info msg="StartContainer for \"34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3\"" Jul 12 09:35:16.768689 containerd[1532]: time="2025-07-12T09:35:16.768653220Z" level=info msg="connecting to shim 34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3" address="unix:///run/containerd/s/1899737c261d54f633a5262ae8b6657ee773c3f0e2e00d3c720d82f92f856bdc" protocol=ttrpc version=3 Jul 12 09:35:16.785963 systemd[1]: Started cri-containerd-34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3.scope - libcontainer container 34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3. 
Jul 12 09:35:16.811486 containerd[1532]: time="2025-07-12T09:35:16.811450075Z" level=info msg="StartContainer for \"34535f5abcd33e5854d571182dceac4f9c4026c39803a4c4e0ce78721bdc36e3\" returns successfully" Jul 12 09:35:16.953290 kubelet[2669]: E0712 09:35:16.953196 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:19.682172 kubelet[2669]: E0712 09:35:19.682131 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:19.684521 kubelet[2669]: I0712 09:35:19.684457 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-srmwk" podStartSLOduration=4.241828179 podStartE2EDuration="5.684444302s" podCreationTimestamp="2025-07-12 09:35:14 +0000 UTC" firstStartedPulling="2025-07-12 09:35:15.310803699 +0000 UTC m=+7.470053258" lastFinishedPulling="2025-07-12 09:35:16.753419822 +0000 UTC m=+8.912669381" observedRunningTime="2025-07-12 09:35:16.96050228 +0000 UTC m=+9.119751839" watchObservedRunningTime="2025-07-12 09:35:19.684444302 +0000 UTC m=+11.843693861" Jul 12 09:35:19.958671 kubelet[2669]: E0712 09:35:19.958513 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:22.022702 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 12 09:35:22.026269 sshd[1738]: Connection closed by 10.0.0.1 port 33930 Jul 12 09:35:22.026725 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:22.031146 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:33930.service: Deactivated successfully. Jul 12 09:35:22.032784 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 12 09:35:22.032961 systemd[1]: session-7.scope: Consumed 6.601s CPU time, 223.5M memory peak.
Jul 12 09:35:22.034576 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit.
Jul 12 09:35:22.035701 systemd-logind[1506]: Removed session 7.
Jul 12 09:35:23.471352 update_engine[1513]: I20250712 09:35:23.470849 1513 update_attempter.cc:509] Updating boot flags...
Jul 12 09:35:25.297556 systemd[1]: Created slice kubepods-besteffort-pod586314aa_5d29_44b4_913f_a01930dcebb0.slice - libcontainer container kubepods-besteffort-pod586314aa_5d29_44b4_913f_a01930dcebb0.slice.
Jul 12 09:35:25.315312 kubelet[2669]: I0712 09:35:25.315263 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/586314aa-5d29-44b4-913f-a01930dcebb0-typha-certs\") pod \"calico-typha-6dd9c99cf4-knpsw\" (UID: \"586314aa-5d29-44b4-913f-a01930dcebb0\") " pod="calico-system/calico-typha-6dd9c99cf4-knpsw"
Jul 12 09:35:25.315887 kubelet[2669]: I0712 09:35:25.315854 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmm5l\" (UniqueName: \"kubernetes.io/projected/586314aa-5d29-44b4-913f-a01930dcebb0-kube-api-access-qmm5l\") pod \"calico-typha-6dd9c99cf4-knpsw\" (UID: \"586314aa-5d29-44b4-913f-a01930dcebb0\") " pod="calico-system/calico-typha-6dd9c99cf4-knpsw"
Jul 12 09:35:25.315929 kubelet[2669]: I0712 09:35:25.315912 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/586314aa-5d29-44b4-913f-a01930dcebb0-tigera-ca-bundle\") pod \"calico-typha-6dd9c99cf4-knpsw\" (UID: \"586314aa-5d29-44b4-913f-a01930dcebb0\") " pod="calico-system/calico-typha-6dd9c99cf4-knpsw"
Jul 12 09:35:25.549146 systemd[1]: Created slice kubepods-besteffort-podf44d2ae5_d63b_4df2_954c_e29a9a170d01.slice - libcontainer container kubepods-besteffort-podf44d2ae5_d63b_4df2_954c_e29a9a170d01.slice.
Jul 12 09:35:25.602473 kubelet[2669]: E0712 09:35:25.602433 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 09:35:25.603055 containerd[1532]: time="2025-07-12T09:35:25.602948254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd9c99cf4-knpsw,Uid:586314aa-5d29-44b4-913f-a01930dcebb0,Namespace:calico-system,Attempt:0,}"
Jul 12 09:35:25.618166 kubelet[2669]: I0712 09:35:25.618083 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-cni-bin-dir\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618166 kubelet[2669]: I0712 09:35:25.618123 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-cni-log-dir\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618166 kubelet[2669]: I0712 09:35:25.618141 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-flexvol-driver-host\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618166 kubelet[2669]: I0712 09:35:25.618170 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f44d2ae5-d63b-4df2-954c-e29a9a170d01-node-certs\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618433 kubelet[2669]: I0712 09:35:25.618188 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f44d2ae5-d63b-4df2-954c-e29a9a170d01-tigera-ca-bundle\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618433 kubelet[2669]: I0712 09:35:25.618205 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-var-run-calico\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618433 kubelet[2669]: I0712 09:35:25.618222 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-var-lib-calico\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618433 kubelet[2669]: I0712 09:35:25.618247 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-lib-modules\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618433 kubelet[2669]: I0712 09:35:25.618262 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-policysync\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618897 kubelet[2669]: I0712 09:35:25.618863 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjckd\" (UniqueName: \"kubernetes.io/projected/f44d2ae5-d63b-4df2-954c-e29a9a170d01-kube-api-access-sjckd\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.618970 kubelet[2669]: I0712 09:35:25.618903 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-cni-net-dir\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.619048 kubelet[2669]: I0712 09:35:25.619033 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44d2ae5-d63b-4df2-954c-e29a9a170d01-xtables-lock\") pod \"calico-node-jjk6q\" (UID: \"f44d2ae5-d63b-4df2-954c-e29a9a170d01\") " pod="calico-system/calico-node-jjk6q"
Jul 12 09:35:25.648360 containerd[1532]: time="2025-07-12T09:35:25.648293756Z" level=info msg="connecting to shim 550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d" address="unix:///run/containerd/s/c9e71e980aa2cf29ce12c76528009f8e52d880f67cfdd2832ffdeb7a7f327e77" namespace=k8s.io protocol=ttrpc version=3
Jul 12 09:35:25.722145 kubelet[2669]: E0712 09:35:25.722105 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.722145 kubelet[2669]: W0712 09:35:25.722130 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.723241 systemd[1]: Started cri-containerd-550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d.scope - libcontainer container 550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d.
Jul 12 09:35:25.723744 kubelet[2669]: E0712 09:35:25.723711 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.731375 kubelet[2669]: E0712 09:35:25.731351 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.731375 kubelet[2669]: W0712 09:35:25.731371 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.731498 kubelet[2669]: E0712 09:35:25.731389 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.735467 kubelet[2669]: E0712 09:35:25.735443 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.735467 kubelet[2669]: W0712 09:35:25.735461 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.735643 kubelet[2669]: E0712 09:35:25.735477 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.771527 containerd[1532]: time="2025-07-12T09:35:25.771474618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd9c99cf4-knpsw,Uid:586314aa-5d29-44b4-913f-a01930dcebb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d\""
Jul 12 09:35:25.772333 kubelet[2669]: E0712 09:35:25.772307 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 09:35:25.776960 containerd[1532]: time="2025-07-12T09:35:25.776917609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 09:35:25.837729 kubelet[2669]: E0712 09:35:25.836746 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tsbtk" podUID="cffc9272-54c6-4b71-afb4-040238028993"
Jul 12 09:35:25.852131 containerd[1532]: time="2025-07-12T09:35:25.852094438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jjk6q,Uid:f44d2ae5-d63b-4df2-954c-e29a9a170d01,Namespace:calico-system,Attempt:0,}"
Jul 12 09:35:25.891084 containerd[1532]: time="2025-07-12T09:35:25.891037740Z" level=info msg="connecting to shim 65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5" address="unix:///run/containerd/s/08f59b616a20de4be604305071f637d16e8ddd7cc4825a291259d7495b7f7aeb" namespace=k8s.io protocol=ttrpc version=3
Jul 12 09:35:25.910972 systemd[1]: Started cri-containerd-65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5.scope - libcontainer container 65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5.
Jul 12 09:35:25.913065 kubelet[2669]: E0712 09:35:25.913040 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.913065 kubelet[2669]: W0712 09:35:25.913061 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.913271 kubelet[2669]: E0712 09:35:25.913081 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.913271 kubelet[2669]: E0712 09:35:25.913284 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.913271 kubelet[2669]: W0712 09:35:25.913293 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.913271 kubelet[2669]: E0712 09:35:25.913335 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.913622 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.914557 kubelet[2669]: W0712 09:35:25.913633 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.913643 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.913875 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.914557 kubelet[2669]: W0712 09:35:25.913885 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.913901 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.914055 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.914557 kubelet[2669]: W0712 09:35:25.914063 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.914072 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.914557 kubelet[2669]: E0712 09:35:25.914517 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.914857 kubelet[2669]: W0712 09:35:25.914545 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.914857 kubelet[2669]: E0712 09:35:25.914562 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.914857 kubelet[2669]: E0712 09:35:25.914781 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.914857 kubelet[2669]: W0712 09:35:25.914790 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.914857 kubelet[2669]: E0712 09:35:25.914831 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.915622 kubelet[2669]: E0712 09:35:25.915592 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.915622 kubelet[2669]: W0712 09:35:25.915617 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.915725 kubelet[2669]: E0712 09:35:25.915631 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.915968 kubelet[2669]: E0712 09:35:25.915948 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.915968 kubelet[2669]: W0712 09:35:25.915963 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.916061 kubelet[2669]: E0712 09:35:25.915974 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.916213 kubelet[2669]: E0712 09:35:25.916199 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.916213 kubelet[2669]: W0712 09:35:25.916211 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.916270 kubelet[2669]: E0712 09:35:25.916221 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.916372 kubelet[2669]: E0712 09:35:25.916359 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.916410 kubelet[2669]: W0712 09:35:25.916383 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.916410 kubelet[2669]: E0712 09:35:25.916393 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.916601 kubelet[2669]: E0712 09:35:25.916586 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.916601 kubelet[2669]: W0712 09:35:25.916597 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.916661 kubelet[2669]: E0712 09:35:25.916624 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.916827 kubelet[2669]: E0712 09:35:25.916797 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.916894 kubelet[2669]: W0712 09:35:25.916860 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.916894 kubelet[2669]: E0712 09:35:25.916893 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.917063 kubelet[2669]: E0712 09:35:25.917047 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.917063 kubelet[2669]: W0712 09:35:25.917058 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.917117 kubelet[2669]: E0712 09:35:25.917067 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.917224 kubelet[2669]: E0712 09:35:25.917210 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.917224 kubelet[2669]: W0712 09:35:25.917221 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.917298 kubelet[2669]: E0712 09:35:25.917231 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.917374 kubelet[2669]: E0712 09:35:25.917362 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.917374 kubelet[2669]: W0712 09:35:25.917373 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.917426 kubelet[2669]: E0712 09:35:25.917395 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.917597 kubelet[2669]: E0712 09:35:25.917583 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.917597 kubelet[2669]: W0712 09:35:25.917594 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.917671 kubelet[2669]: E0712 09:35:25.917605 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.917778 kubelet[2669]: E0712 09:35:25.917764 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.917778 kubelet[2669]: W0712 09:35:25.917776 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.917839 kubelet[2669]: E0712 09:35:25.917791 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.918021 kubelet[2669]: E0712 09:35:25.917993 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.918021 kubelet[2669]: W0712 09:35:25.918008 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.918021 kubelet[2669]: E0712 09:35:25.918018 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.918187 kubelet[2669]: E0712 09:35:25.918174 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.918187 kubelet[2669]: W0712 09:35:25.918184 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.918241 kubelet[2669]: E0712 09:35:25.918193 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.921602 kubelet[2669]: E0712 09:35:25.921549 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.921602 kubelet[2669]: W0712 09:35:25.921568 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.921602 kubelet[2669]: E0712 09:35:25.921581 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.921868 kubelet[2669]: I0712 09:35:25.921608 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cffc9272-54c6-4b71-afb4-040238028993-socket-dir\") pod \"csi-node-driver-tsbtk\" (UID: \"cffc9272-54c6-4b71-afb4-040238028993\") " pod="calico-system/csi-node-driver-tsbtk"
Jul 12 09:35:25.921868 kubelet[2669]: E0712 09:35:25.921834 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.921868 kubelet[2669]: W0712 09:35:25.921846 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.921868 kubelet[2669]: E0712 09:35:25.921864 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.922240 kubelet[2669]: I0712 09:35:25.921896 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cffc9272-54c6-4b71-afb4-040238028993-varrun\") pod \"csi-node-driver-tsbtk\" (UID: \"cffc9272-54c6-4b71-afb4-040238028993\") " pod="calico-system/csi-node-driver-tsbtk"
Jul 12 09:35:25.922240 kubelet[2669]: E0712 09:35:25.922060 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.922240 kubelet[2669]: W0712 09:35:25.922075 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.922240 kubelet[2669]: E0712 09:35:25.922093 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.922240 kubelet[2669]: E0712 09:35:25.922222 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.922240 kubelet[2669]: W0712 09:35:25.922230 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.922240 kubelet[2669]: E0712 09:35:25.922244 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.922636 kubelet[2669]: E0712 09:35:25.922378 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.922636 kubelet[2669]: W0712 09:35:25.922386 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.922636 kubelet[2669]: E0712 09:35:25.922398 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.922636 kubelet[2669]: I0712 09:35:25.922417 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cffc9272-54c6-4b71-afb4-040238028993-kubelet-dir\") pod \"csi-node-driver-tsbtk\" (UID: \"cffc9272-54c6-4b71-afb4-040238028993\") " pod="calico-system/csi-node-driver-tsbtk"
Jul 12 09:35:25.922636 kubelet[2669]: E0712 09:35:25.922557 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.922636 kubelet[2669]: W0712 09:35:25.922565 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.922636 kubelet[2669]: E0712 09:35:25.922578 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.922636 kubelet[2669]: I0712 09:35:25.922593 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9jp\" (UniqueName: \"kubernetes.io/projected/cffc9272-54c6-4b71-afb4-040238028993-kube-api-access-dg9jp\") pod \"csi-node-driver-tsbtk\" (UID: \"cffc9272-54c6-4b71-afb4-040238028993\") " pod="calico-system/csi-node-driver-tsbtk"
Jul 12 09:35:25.923256 kubelet[2669]: E0712 09:35:25.922791 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.923256 kubelet[2669]: W0712 09:35:25.922819 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.923256 kubelet[2669]: E0712 09:35:25.922844 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.923256 kubelet[2669]: E0712 09:35:25.923036 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.923256 kubelet[2669]: W0712 09:35:25.923045 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.923256 kubelet[2669]: E0712 09:35:25.923061 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.923919 kubelet[2669]: E0712 09:35:25.923411 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.923919 kubelet[2669]: W0712 09:35:25.923422 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.923919 kubelet[2669]: E0712 09:35:25.923439 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.923919 kubelet[2669]: E0712 09:35:25.923672 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.923919 kubelet[2669]: W0712 09:35:25.923682 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.923919 kubelet[2669]: E0712 09:35:25.923702 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.924042 kubelet[2669]: E0712 09:35:25.923927 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.924042 kubelet[2669]: W0712 09:35:25.923944 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.924042 kubelet[2669]: E0712 09:35:25.923990 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 09:35:25.924042 kubelet[2669]: I0712 09:35:25.924010 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cffc9272-54c6-4b71-afb4-040238028993-registration-dir\") pod \"csi-node-driver-tsbtk\" (UID: \"cffc9272-54c6-4b71-afb4-040238028993\") " pod="calico-system/csi-node-driver-tsbtk"
Jul 12 09:35:25.924277 kubelet[2669]: E0712 09:35:25.924254 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 09:35:25.924277 kubelet[2669]: W0712 09:35:25.924269 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 09:35:25.924336 kubelet[2669]: E0712 09:35:25.924299 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:25.925043 kubelet[2669]: E0712 09:35:25.925022 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:25.925043 kubelet[2669]: W0712 09:35:25.925044 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:25.925130 kubelet[2669]: E0712 09:35:25.925061 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:25.925246 kubelet[2669]: E0712 09:35:25.925232 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:25.925246 kubelet[2669]: W0712 09:35:25.925245 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:25.925303 kubelet[2669]: E0712 09:35:25.925254 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:25.925764 kubelet[2669]: E0712 09:35:25.925748 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:25.925764 kubelet[2669]: W0712 09:35:25.925762 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:25.925861 kubelet[2669]: E0712 09:35:25.925772 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:25.940949 containerd[1532]: time="2025-07-12T09:35:25.940900626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jjk6q,Uid:f44d2ae5-d63b-4df2-954c-e29a9a170d01,Namespace:calico-system,Attempt:0,} returns sandbox id \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\"" Jul 12 09:35:26.025220 kubelet[2669]: E0712 09:35:26.025194 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.025220 kubelet[2669]: W0712 09:35:26.025215 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.025394 kubelet[2669]: E0712 09:35:26.025236 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.025483 kubelet[2669]: E0712 09:35:26.025471 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.025511 kubelet[2669]: W0712 09:35:26.025483 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.025511 kubelet[2669]: E0712 09:35:26.025502 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.025709 kubelet[2669]: E0712 09:35:26.025688 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.025709 kubelet[2669]: W0712 09:35:26.025707 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.025773 kubelet[2669]: E0712 09:35:26.025723 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.025899 kubelet[2669]: E0712 09:35:26.025880 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.025929 kubelet[2669]: W0712 09:35:26.025919 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.025949 kubelet[2669]: E0712 09:35:26.025930 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.026122 kubelet[2669]: E0712 09:35:26.026111 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.026122 kubelet[2669]: W0712 09:35:26.026122 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.026177 kubelet[2669]: E0712 09:35:26.026134 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.026316 kubelet[2669]: E0712 09:35:26.026305 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.026316 kubelet[2669]: W0712 09:35:26.026316 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.026367 kubelet[2669]: E0712 09:35:26.026335 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.026514 kubelet[2669]: E0712 09:35:26.026503 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.026514 kubelet[2669]: W0712 09:35:26.026514 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.026575 kubelet[2669]: E0712 09:35:26.026534 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.026688 kubelet[2669]: E0712 09:35:26.026678 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.026720 kubelet[2669]: W0712 09:35:26.026688 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.026790 kubelet[2669]: E0712 09:35:26.026768 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.026895 kubelet[2669]: E0712 09:35:26.026880 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.026895 kubelet[2669]: W0712 09:35:26.026892 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.026955 kubelet[2669]: E0712 09:35:26.026934 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.027079 kubelet[2669]: E0712 09:35:26.027065 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.027079 kubelet[2669]: W0712 09:35:26.027075 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.027167 kubelet[2669]: E0712 09:35:26.027104 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.027230 kubelet[2669]: E0712 09:35:26.027215 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.027230 kubelet[2669]: W0712 09:35:26.027227 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.027280 kubelet[2669]: E0712 09:35:26.027264 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.027386 kubelet[2669]: E0712 09:35:26.027374 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.027386 kubelet[2669]: W0712 09:35:26.027384 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.027471 kubelet[2669]: E0712 09:35:26.027410 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.027563 kubelet[2669]: E0712 09:35:26.027549 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.027563 kubelet[2669]: W0712 09:35:26.027560 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.027614 kubelet[2669]: E0712 09:35:26.027573 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.028464 kubelet[2669]: E0712 09:35:26.028425 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.028464 kubelet[2669]: W0712 09:35:26.028448 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.028464 kubelet[2669]: E0712 09:35:26.028470 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.028663 kubelet[2669]: E0712 09:35:26.028628 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.028663 kubelet[2669]: W0712 09:35:26.028654 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.028735 kubelet[2669]: E0712 09:35:26.028686 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.028846 kubelet[2669]: E0712 09:35:26.028828 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.028846 kubelet[2669]: W0712 09:35:26.028840 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.028897 kubelet[2669]: E0712 09:35:26.028862 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.029017 kubelet[2669]: E0712 09:35:26.028995 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.029017 kubelet[2669]: W0712 09:35:26.029009 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.029127 kubelet[2669]: E0712 09:35:26.029030 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.036040 kubelet[2669]: E0712 09:35:26.036005 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.036040 kubelet[2669]: W0712 09:35:26.036031 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.036201 kubelet[2669]: E0712 09:35:26.036172 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.036336 kubelet[2669]: E0712 09:35:26.036301 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.036336 kubelet[2669]: W0712 09:35:26.036317 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.036398 kubelet[2669]: E0712 09:35:26.036369 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.036643 kubelet[2669]: E0712 09:35:26.036527 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.036643 kubelet[2669]: W0712 09:35:26.036550 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.036643 kubelet[2669]: E0712 09:35:26.036579 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.036950 kubelet[2669]: E0712 09:35:26.036762 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.036950 kubelet[2669]: W0712 09:35:26.036771 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.036950 kubelet[2669]: E0712 09:35:26.036863 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.037040 kubelet[2669]: E0712 09:35:26.036956 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.037040 kubelet[2669]: W0712 09:35:26.036966 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.037040 kubelet[2669]: E0712 09:35:26.036982 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.037496 kubelet[2669]: E0712 09:35:26.037479 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.037496 kubelet[2669]: W0712 09:35:26.037495 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.037566 kubelet[2669]: E0712 09:35:26.037509 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.038099 kubelet[2669]: E0712 09:35:26.038021 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.038099 kubelet[2669]: W0712 09:35:26.038036 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.038099 kubelet[2669]: E0712 09:35:26.038053 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.038291 kubelet[2669]: E0712 09:35:26.038273 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.038291 kubelet[2669]: W0712 09:35:26.038286 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.038348 kubelet[2669]: E0712 09:35:26.038296 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:26.041640 kubelet[2669]: E0712 09:35:26.041610 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:26.041640 kubelet[2669]: W0712 09:35:26.041629 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:26.041640 kubelet[2669]: E0712 09:35:26.041643 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:26.752190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157376912.mount: Deactivated successfully. Jul 12 09:35:27.246305 containerd[1532]: time="2025-07-12T09:35:27.246258042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:27.246889 containerd[1532]: time="2025-07-12T09:35:27.246862028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 09:35:27.247446 containerd[1532]: time="2025-07-12T09:35:27.247415732Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:27.249335 containerd[1532]: time="2025-07-12T09:35:27.249301495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:27.250160 containerd[1532]: time="2025-07-12T09:35:27.250137612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.47318124s" Jul 12 09:35:27.250216 containerd[1532]: time="2025-07-12T09:35:27.250165653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 09:35:27.251415 containerd[1532]: time="2025-07-12T09:35:27.251381306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 09:35:27.262853 containerd[1532]: time="2025-07-12T09:35:27.262513274Z" level=info msg="CreateContainer within sandbox \"550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 09:35:27.273041 containerd[1532]: time="2025-07-12T09:35:27.272993813Z" level=info msg="Container fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:27.278803 containerd[1532]: time="2025-07-12T09:35:27.278741105Z" level=info msg="CreateContainer within sandbox \"550745ec9de9df38014018f052e59f270dbb625a4d4ade2b4f0c178c8f51340d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333\"" Jul 12 09:35:27.279796 containerd[1532]: time="2025-07-12T09:35:27.279759430Z" level=info msg="StartContainer for \"fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333\"" Jul 12 09:35:27.280763 containerd[1532]: time="2025-07-12T09:35:27.280726672Z" level=info msg="connecting to shim fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333" address="unix:///run/containerd/s/c9e71e980aa2cf29ce12c76528009f8e52d880f67cfdd2832ffdeb7a7f327e77" protocol=ttrpc version=3 Jul 12 
09:35:27.304974 systemd[1]: Started cri-containerd-fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333.scope - libcontainer container fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333. Jul 12 09:35:27.356459 containerd[1532]: time="2025-07-12T09:35:27.356000811Z" level=info msg="StartContainer for \"fb181beb9b095f077e135cc0a9bdd5497ddd98881164fb9b6c7dd1f325a2a333\" returns successfully" Jul 12 09:35:27.923216 kubelet[2669]: E0712 09:35:27.922845 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tsbtk" podUID="cffc9272-54c6-4b71-afb4-040238028993" Jul 12 09:35:27.976915 kubelet[2669]: E0712 09:35:27.976888 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:28.031918 kubelet[2669]: E0712 09:35:28.031881 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.031918 kubelet[2669]: W0712 09:35:28.031903 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.031918 kubelet[2669]: E0712 09:35:28.031924 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.032088 kubelet[2669]: E0712 09:35:28.032073 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032128 kubelet[2669]: W0712 09:35:28.032081 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032128 kubelet[2669]: E0712 09:35:28.032125 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.032292 kubelet[2669]: E0712 09:35:28.032271 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032292 kubelet[2669]: W0712 09:35:28.032281 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032292 kubelet[2669]: E0712 09:35:28.032289 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.032421 kubelet[2669]: E0712 09:35:28.032402 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032421 kubelet[2669]: W0712 09:35:28.032412 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032421 kubelet[2669]: E0712 09:35:28.032420 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.032567 kubelet[2669]: E0712 09:35:28.032556 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032567 kubelet[2669]: W0712 09:35:28.032565 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032616 kubelet[2669]: E0712 09:35:28.032573 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.032692 kubelet[2669]: E0712 09:35:28.032682 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032692 kubelet[2669]: W0712 09:35:28.032691 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032736 kubelet[2669]: E0712 09:35:28.032701 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.032855 kubelet[2669]: E0712 09:35:28.032840 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032855 kubelet[2669]: W0712 09:35:28.032850 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.032911 kubelet[2669]: E0712 09:35:28.032858 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.032987 kubelet[2669]: E0712 09:35:28.032975 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.032987 kubelet[2669]: W0712 09:35:28.032986 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033034 kubelet[2669]: E0712 09:35:28.032994 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.033124 kubelet[2669]: E0712 09:35:28.033113 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033124 kubelet[2669]: W0712 09:35:28.033123 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033172 kubelet[2669]: E0712 09:35:28.033132 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.033246 kubelet[2669]: E0712 09:35:28.033236 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033269 kubelet[2669]: W0712 09:35:28.033245 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033269 kubelet[2669]: E0712 09:35:28.033253 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.033378 kubelet[2669]: E0712 09:35:28.033368 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033378 kubelet[2669]: W0712 09:35:28.033377 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033420 kubelet[2669]: E0712 09:35:28.033384 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.033505 kubelet[2669]: E0712 09:35:28.033496 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033528 kubelet[2669]: W0712 09:35:28.033505 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033528 kubelet[2669]: E0712 09:35:28.033512 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.033637 kubelet[2669]: E0712 09:35:28.033628 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033637 kubelet[2669]: W0712 09:35:28.033637 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033679 kubelet[2669]: E0712 09:35:28.033644 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.033769 kubelet[2669]: E0712 09:35:28.033758 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033798 kubelet[2669]: W0712 09:35:28.033768 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033798 kubelet[2669]: E0712 09:35:28.033776 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.033932 kubelet[2669]: E0712 09:35:28.033919 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.033932 kubelet[2669]: W0712 09:35:28.033930 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.033975 kubelet[2669]: E0712 09:35:28.033938 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.041272 kubelet[2669]: E0712 09:35:28.041230 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.041272 kubelet[2669]: W0712 09:35:28.041248 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.041272 kubelet[2669]: E0712 09:35:28.041262 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.041445 kubelet[2669]: E0712 09:35:28.041424 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.041445 kubelet[2669]: W0712 09:35:28.041435 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.041492 kubelet[2669]: E0712 09:35:28.041449 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.041618 kubelet[2669]: E0712 09:35:28.041597 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.041618 kubelet[2669]: W0712 09:35:28.041609 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.041659 kubelet[2669]: E0712 09:35:28.041622 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.041807 kubelet[2669]: E0712 09:35:28.041778 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.041807 kubelet[2669]: W0712 09:35:28.041798 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.041857 kubelet[2669]: E0712 09:35:28.041827 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.041995 kubelet[2669]: E0712 09:35:28.041981 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.041995 kubelet[2669]: W0712 09:35:28.041991 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042053 kubelet[2669]: E0712 09:35:28.042007 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.042158 kubelet[2669]: E0712 09:35:28.042137 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.042158 kubelet[2669]: W0712 09:35:28.042149 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042204 kubelet[2669]: E0712 09:35:28.042161 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.042314 kubelet[2669]: E0712 09:35:28.042302 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.042338 kubelet[2669]: W0712 09:35:28.042312 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042338 kubelet[2669]: E0712 09:35:28.042327 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.042536 kubelet[2669]: E0712 09:35:28.042513 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.042536 kubelet[2669]: W0712 09:35:28.042530 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042578 kubelet[2669]: E0712 09:35:28.042551 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.042739 kubelet[2669]: E0712 09:35:28.042727 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.042762 kubelet[2669]: W0712 09:35:28.042738 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042789 kubelet[2669]: E0712 09:35:28.042764 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.042897 kubelet[2669]: E0712 09:35:28.042885 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.042897 kubelet[2669]: W0712 09:35:28.042896 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.042945 kubelet[2669]: E0712 09:35:28.042917 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.043032 kubelet[2669]: E0712 09:35:28.043021 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.043055 kubelet[2669]: W0712 09:35:28.043031 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.043055 kubelet[2669]: E0712 09:35:28.043044 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.043196 kubelet[2669]: E0712 09:35:28.043184 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.043219 kubelet[2669]: W0712 09:35:28.043195 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.043219 kubelet[2669]: E0712 09:35:28.043207 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.043364 kubelet[2669]: E0712 09:35:28.043353 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.043384 kubelet[2669]: W0712 09:35:28.043363 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.043384 kubelet[2669]: E0712 09:35:28.043379 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.043667 kubelet[2669]: E0712 09:35:28.043646 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.043667 kubelet[2669]: W0712 09:35:28.043661 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.043713 kubelet[2669]: E0712 09:35:28.043671 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.043966 kubelet[2669]: E0712 09:35:28.043952 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.043989 kubelet[2669]: W0712 09:35:28.043965 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.043989 kubelet[2669]: E0712 09:35:28.043979 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.044174 kubelet[2669]: E0712 09:35:28.044163 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.044196 kubelet[2669]: W0712 09:35:28.044173 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.044196 kubelet[2669]: E0712 09:35:28.044189 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.044386 kubelet[2669]: E0712 09:35:28.044375 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.044409 kubelet[2669]: W0712 09:35:28.044388 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.044409 kubelet[2669]: E0712 09:35:28.044397 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 09:35:28.044637 kubelet[2669]: E0712 09:35:28.044625 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 09:35:28.044660 kubelet[2669]: W0712 09:35:28.044636 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 09:35:28.044660 kubelet[2669]: E0712 09:35:28.044645 2669 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 09:35:28.380687 containerd[1532]: time="2025-07-12T09:35:28.380640760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:28.381516 containerd[1532]: time="2025-07-12T09:35:28.381476114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 09:35:28.382510 containerd[1532]: time="2025-07-12T09:35:28.382476476Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:28.384413 containerd[1532]: time="2025-07-12T09:35:28.384363953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:28.385011 containerd[1532]: time="2025-07-12T09:35:28.384921656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.133507589s" Jul 12 09:35:28.385011 containerd[1532]: time="2025-07-12T09:35:28.384951937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 09:35:28.387031 containerd[1532]: time="2025-07-12T09:35:28.386962660Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 09:35:28.393557 containerd[1532]: time="2025-07-12T09:35:28.392502808Z" level=info msg="Container bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:28.405514 containerd[1532]: time="2025-07-12T09:35:28.405458420Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\"" Jul 12 09:35:28.406348 containerd[1532]: time="2025-07-12T09:35:28.406311855Z" level=info msg="StartContainer for \"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\"" Jul 12 09:35:28.407891 containerd[1532]: time="2025-07-12T09:35:28.407856118Z" level=info msg="connecting to shim bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990" address="unix:///run/containerd/s/08f59b616a20de4be604305071f637d16e8ddd7cc4825a291259d7495b7f7aeb" protocol=ttrpc version=3 Jul 12 09:35:28.428971 systemd[1]: Started cri-containerd-bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990.scope - libcontainer container bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990. Jul 12 09:35:28.479237 containerd[1532]: time="2025-07-12T09:35:28.479176329Z" level=info msg="StartContainer for \"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\" returns successfully" Jul 12 09:35:28.492983 systemd[1]: cri-containerd-bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990.scope: Deactivated successfully. 
Jul 12 09:35:28.505246 containerd[1532]: time="2025-07-12T09:35:28.505198438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\" id:\"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\" pid:3369 exited_at:{seconds:1752312928 nanos:504761460}" Jul 12 09:35:28.513506 containerd[1532]: time="2025-07-12T09:35:28.513448537Z" level=info msg="received exit event container_id:\"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\" id:\"bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990\" pid:3369 exited_at:{seconds:1752312928 nanos:504761460}" Jul 12 09:35:28.549581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd52d183e59955b9e1442af99e69a9ed09f1d891524498ed3118df3673810990-rootfs.mount: Deactivated successfully. Jul 12 09:35:28.980298 kubelet[2669]: I0712 09:35:28.980192 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 09:35:28.980695 kubelet[2669]: E0712 09:35:28.980524 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:28.981675 containerd[1532]: time="2025-07-12T09:35:28.980932065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 09:35:28.995871 kubelet[2669]: I0712 09:35:28.995347 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dd9c99cf4-knpsw" podStartSLOduration=2.517421584 podStartE2EDuration="3.995330016s" podCreationTimestamp="2025-07-12 09:35:25 +0000 UTC" firstStartedPulling="2025-07-12 09:35:25.772854767 +0000 UTC m=+17.932104326" lastFinishedPulling="2025-07-12 09:35:27.250763199 +0000 UTC m=+19.410012758" observedRunningTime="2025-07-12 09:35:27.987258877 +0000 UTC m=+20.146508436" watchObservedRunningTime="2025-07-12 09:35:28.995330016 +0000 UTC 
m=+21.154579575" Jul 12 09:35:29.923097 kubelet[2669]: E0712 09:35:29.923048 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tsbtk" podUID="cffc9272-54c6-4b71-afb4-040238028993" Jul 12 09:35:31.923501 kubelet[2669]: E0712 09:35:31.922929 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tsbtk" podUID="cffc9272-54c6-4b71-afb4-040238028993" Jul 12 09:35:32.234661 containerd[1532]: time="2025-07-12T09:35:32.234557395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:32.235280 containerd[1532]: time="2025-07-12T09:35:32.235250057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 09:35:32.236075 containerd[1532]: time="2025-07-12T09:35:32.236026561Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:32.241543 containerd[1532]: time="2025-07-12T09:35:32.241496455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:32.242259 containerd[1532]: time="2025-07-12T09:35:32.242043872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.261071686s" Jul 12 09:35:32.242259 containerd[1532]: time="2025-07-12T09:35:32.242076193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 09:35:32.244430 containerd[1532]: time="2025-07-12T09:35:32.244403387Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 09:35:32.252010 containerd[1532]: time="2025-07-12T09:35:32.251965187Z" level=info msg="Container 154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:32.268251 containerd[1532]: time="2025-07-12T09:35:32.268204503Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\"" Jul 12 09:35:32.269069 containerd[1532]: time="2025-07-12T09:35:32.269047209Z" level=info msg="StartContainer for \"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\"" Jul 12 09:35:32.270393 containerd[1532]: time="2025-07-12T09:35:32.270359291Z" level=info msg="connecting to shim 154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f" address="unix:///run/containerd/s/08f59b616a20de4be604305071f637d16e8ddd7cc4825a291259d7495b7f7aeb" protocol=ttrpc version=3 Jul 12 09:35:32.289958 systemd[1]: Started cri-containerd-154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f.scope - libcontainer container 154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f. 
Jul 12 09:35:32.323051 containerd[1532]: time="2025-07-12T09:35:32.323016002Z" level=info msg="StartContainer for \"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\" returns successfully" Jul 12 09:35:32.861474 systemd[1]: cri-containerd-154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f.scope: Deactivated successfully. Jul 12 09:35:32.861882 systemd[1]: cri-containerd-154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f.scope: Consumed 433ms CPU time, 176.3M memory peak, 3M read from disk, 165.8M written to disk. Jul 12 09:35:32.874604 containerd[1532]: time="2025-07-12T09:35:32.874565068Z" level=info msg="received exit event container_id:\"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\" id:\"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\" pid:3429 exited_at:{seconds:1752312932 nanos:874374902}" Jul 12 09:35:32.874706 containerd[1532]: time="2025-07-12T09:35:32.874642710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\" id:\"154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f\" pid:3429 exited_at:{seconds:1752312932 nanos:874374902}" Jul 12 09:35:32.891015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154f250b10f1277f85aefffde87c0dda9364d141aa47c415b57c30a33226bd1f-rootfs.mount: Deactivated successfully. Jul 12 09:35:32.929768 kubelet[2669]: I0712 09:35:32.929737 2669 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 09:35:33.029352 systemd[1]: Created slice kubepods-besteffort-pod1e8d79fd_65fe_4e22_927d_2c54d9d8d62f.slice - libcontainer container kubepods-besteffort-pod1e8d79fd_65fe_4e22_927d_2c54d9d8d62f.slice. Jul 12 09:35:33.039512 systemd[1]: Created slice kubepods-burstable-pod71284256_c3ce_4e07_92e5_9f23490080e5.slice - libcontainer container kubepods-burstable-pod71284256_c3ce_4e07_92e5_9f23490080e5.slice. 
Jul 12 09:35:33.054312 systemd[1]: Created slice kubepods-besteffort-podc867975f_8565_4822_aed1_0b085f920c44.slice - libcontainer container kubepods-besteffort-podc867975f_8565_4822_aed1_0b085f920c44.slice. Jul 12 09:35:33.058631 systemd[1]: Created slice kubepods-burstable-poda3b2d3ba_558c_4f5b_8637_e61eebaebd46.slice - libcontainer container kubepods-burstable-poda3b2d3ba_558c_4f5b_8637_e61eebaebd46.slice. Jul 12 09:35:33.061193 systemd[1]: Created slice kubepods-besteffort-pod3fa3e400_f96b_4c76_a280_ab4e8cd5210e.slice - libcontainer container kubepods-besteffort-pod3fa3e400_f96b_4c76_a280_ab4e8cd5210e.slice. Jul 12 09:35:33.068698 systemd[1]: Created slice kubepods-besteffort-pod73ac4ac9_a325_4d79_be9f_08af13edaac2.slice - libcontainer container kubepods-besteffort-pod73ac4ac9_a325_4d79_be9f_08af13edaac2.slice. Jul 12 09:35:33.074600 systemd[1]: Created slice kubepods-besteffort-pod7de32244_939d_4ce0_9b95_f1b82893dcc6.slice - libcontainer container kubepods-besteffort-pod7de32244_939d_4ce0_9b95_f1b82893dcc6.slice. 
Jul 12 09:35:33.079096 kubelet[2669]: I0712 09:35:33.079064 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zcvm\" (UniqueName: \"kubernetes.io/projected/c867975f-8565-4822-aed1-0b085f920c44-kube-api-access-6zcvm\") pod \"whisker-5649bd8fdd-75swd\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " pod="calico-system/whisker-5649bd8fdd-75swd" Jul 12 09:35:33.079266 kubelet[2669]: I0712 09:35:33.079251 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ztq\" (UniqueName: \"kubernetes.io/projected/3fa3e400-f96b-4c76-a280-ab4e8cd5210e-kube-api-access-r6ztq\") pod \"calico-apiserver-7cc55dbbd8-pxzqd\" (UID: \"3fa3e400-f96b-4c76-a280-ab4e8cd5210e\") " pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" Jul 12 09:35:33.079351 kubelet[2669]: I0712 09:35:33.079338 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ac4ac9-a325-4d79-be9f-08af13edaac2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-jfc5p\" (UID: \"73ac4ac9-a325-4d79-be9f-08af13edaac2\") " pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.079538 kubelet[2669]: I0712 09:35:33.079503 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e8d79fd-65fe-4e22-927d-2c54d9d8d62f-tigera-ca-bundle\") pod \"calico-kube-controllers-f6b96db46-p7s4x\" (UID: \"1e8d79fd-65fe-4e22-927d-2c54d9d8d62f\") " pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" Jul 12 09:35:33.079632 kubelet[2669]: I0712 09:35:33.079620 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/73ac4ac9-a325-4d79-be9f-08af13edaac2-goldmane-key-pair\") pod 
\"goldmane-768f4c5c69-jfc5p\" (UID: \"73ac4ac9-a325-4d79-be9f-08af13edaac2\") " pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.079718 kubelet[2669]: I0712 09:35:33.079707 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c867975f-8565-4822-aed1-0b085f920c44-whisker-backend-key-pair\") pod \"whisker-5649bd8fdd-75swd\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " pod="calico-system/whisker-5649bd8fdd-75swd" Jul 12 09:35:33.079798 kubelet[2669]: I0712 09:35:33.079787 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c867975f-8565-4822-aed1-0b085f920c44-whisker-ca-bundle\") pod \"whisker-5649bd8fdd-75swd\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " pod="calico-system/whisker-5649bd8fdd-75swd" Jul 12 09:35:33.080044 kubelet[2669]: I0712 09:35:33.080006 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhjg\" (UniqueName: \"kubernetes.io/projected/7de32244-939d-4ce0-9b95-f1b82893dcc6-kube-api-access-nnhjg\") pod \"calico-apiserver-7cc55dbbd8-vqkln\" (UID: \"7de32244-939d-4ce0-9b95-f1b82893dcc6\") " pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" Jul 12 09:35:33.080169 kubelet[2669]: I0712 09:35:33.080150 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71284256-c3ce-4e07-92e5-9f23490080e5-config-volume\") pod \"coredns-668d6bf9bc-4zb4v\" (UID: \"71284256-c3ce-4e07-92e5-9f23490080e5\") " pod="kube-system/coredns-668d6bf9bc-4zb4v" Jul 12 09:35:33.080222 kubelet[2669]: I0712 09:35:33.080188 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7de32244-939d-4ce0-9b95-f1b82893dcc6-calico-apiserver-certs\") pod \"calico-apiserver-7cc55dbbd8-vqkln\" (UID: \"7de32244-939d-4ce0-9b95-f1b82893dcc6\") " pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" Jul 12 09:35:33.080305 kubelet[2669]: I0712 09:35:33.080291 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ac4ac9-a325-4d79-be9f-08af13edaac2-config\") pod \"goldmane-768f4c5c69-jfc5p\" (UID: \"73ac4ac9-a325-4d79-be9f-08af13edaac2\") " pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.080339 kubelet[2669]: I0712 09:35:33.080315 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrf2\" (UniqueName: \"kubernetes.io/projected/73ac4ac9-a325-4d79-be9f-08af13edaac2-kube-api-access-zrrf2\") pod \"goldmane-768f4c5c69-jfc5p\" (UID: \"73ac4ac9-a325-4d79-be9f-08af13edaac2\") " pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.080339 kubelet[2669]: I0712 09:35:33.080335 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3fa3e400-f96b-4c76-a280-ab4e8cd5210e-calico-apiserver-certs\") pod \"calico-apiserver-7cc55dbbd8-pxzqd\" (UID: \"3fa3e400-f96b-4c76-a280-ab4e8cd5210e\") " pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" Jul 12 09:35:33.080395 kubelet[2669]: I0712 09:35:33.080353 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thqp\" (UniqueName: \"kubernetes.io/projected/1e8d79fd-65fe-4e22-927d-2c54d9d8d62f-kube-api-access-4thqp\") pod \"calico-kube-controllers-f6b96db46-p7s4x\" (UID: \"1e8d79fd-65fe-4e22-927d-2c54d9d8d62f\") " pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" Jul 12 09:35:33.080395 kubelet[2669]: I0712 09:35:33.080371 2669 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbkq\" (UniqueName: \"kubernetes.io/projected/71284256-c3ce-4e07-92e5-9f23490080e5-kube-api-access-zmbkq\") pod \"coredns-668d6bf9bc-4zb4v\" (UID: \"71284256-c3ce-4e07-92e5-9f23490080e5\") " pod="kube-system/coredns-668d6bf9bc-4zb4v" Jul 12 09:35:33.080395 kubelet[2669]: I0712 09:35:33.080393 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3b2d3ba-558c-4f5b-8637-e61eebaebd46-config-volume\") pod \"coredns-668d6bf9bc-m9n2j\" (UID: \"a3b2d3ba-558c-4f5b-8637-e61eebaebd46\") " pod="kube-system/coredns-668d6bf9bc-m9n2j" Jul 12 09:35:33.080453 kubelet[2669]: I0712 09:35:33.080409 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86nq\" (UniqueName: \"kubernetes.io/projected/a3b2d3ba-558c-4f5b-8637-e61eebaebd46-kube-api-access-f86nq\") pod \"coredns-668d6bf9bc-m9n2j\" (UID: \"a3b2d3ba-558c-4f5b-8637-e61eebaebd46\") " pod="kube-system/coredns-668d6bf9bc-m9n2j" Jul 12 09:35:33.336266 containerd[1532]: time="2025-07-12T09:35:33.336145334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6b96db46-p7s4x,Uid:1e8d79fd-65fe-4e22-927d-2c54d9d8d62f,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:33.346485 kubelet[2669]: E0712 09:35:33.346446 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:33.347000 containerd[1532]: time="2025-07-12T09:35:33.346937975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zb4v,Uid:71284256-c3ce-4e07-92e5-9f23490080e5,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:33.365101 kubelet[2669]: E0712 09:35:33.365072 2669 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:33.376037 containerd[1532]: time="2025-07-12T09:35:33.375757913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jfc5p,Uid:73ac4ac9-a325-4d79-be9f-08af13edaac2,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:33.376359 containerd[1532]: time="2025-07-12T09:35:33.376323530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-pxzqd,Uid:3fa3e400-f96b-4c76-a280-ab4e8cd5210e,Namespace:calico-apiserver,Attempt:0,}" Jul 12 09:35:33.376471 containerd[1532]: time="2025-07-12T09:35:33.376450973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9n2j,Uid:a3b2d3ba-558c-4f5b-8637-e61eebaebd46,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:33.376554 containerd[1532]: time="2025-07-12T09:35:33.376539016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5649bd8fdd-75swd,Uid:c867975f-8565-4822-aed1-0b085f920c44,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:33.390834 containerd[1532]: time="2025-07-12T09:35:33.390698077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-vqkln,Uid:7de32244-939d-4ce0-9b95-f1b82893dcc6,Namespace:calico-apiserver,Attempt:0,}" Jul 12 09:35:33.803058 containerd[1532]: time="2025-07-12T09:35:33.803007706Z" level=error msg="Failed to destroy network for sandbox \"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.803431 containerd[1532]: time="2025-07-12T09:35:33.803187751Z" level=error msg="Failed to destroy network for sandbox \"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.807091 containerd[1532]: time="2025-07-12T09:35:33.806909342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9n2j,Uid:a3b2d3ba-558c-4f5b-8637-e61eebaebd46,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.807662 containerd[1532]: time="2025-07-12T09:35:33.807621963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-vqkln,Uid:7de32244-939d-4ce0-9b95-f1b82893dcc6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.808591 kubelet[2669]: E0712 09:35:33.808415 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.808591 kubelet[2669]: E0712 09:35:33.808415 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.810446 containerd[1532]: time="2025-07-12T09:35:33.810155599Z" level=error msg="Failed to destroy network for sandbox \"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.812308 containerd[1532]: time="2025-07-12T09:35:33.811543680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jfc5p,Uid:73ac4ac9-a325-4d79-be9f-08af13edaac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.812454 kubelet[2669]: E0712 09:35:33.811688 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" Jul 12 09:35:33.812454 kubelet[2669]: E0712 09:35:33.811744 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" Jul 12 09:35:33.812454 kubelet[2669]: E0712 09:35:33.811815 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc55dbbd8-vqkln_calico-apiserver(7de32244-939d-4ce0-9b95-f1b82893dcc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc55dbbd8-vqkln_calico-apiserver(7de32244-939d-4ce0-9b95-f1b82893dcc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cd637b89eb43ec43a478267de3457ea055a1a8cc1f643912190afbfcfdf1189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" podUID="7de32244-939d-4ce0-9b95-f1b82893dcc6" Jul 12 09:35:33.812616 kubelet[2669]: E0712 09:35:33.811957 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m9n2j" Jul 12 09:35:33.812616 kubelet[2669]: E0712 09:35:33.812013 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m9n2j" Jul 12 09:35:33.812616 kubelet[2669]: E0712 09:35:33.812047 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m9n2j_kube-system(a3b2d3ba-558c-4f5b-8637-e61eebaebd46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m9n2j_kube-system(a3b2d3ba-558c-4f5b-8637-e61eebaebd46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56384b6509ef0f2ff77bbb37a2addd3a9c58a8f4746b4d5267e0dfe0424f4098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m9n2j" podUID="a3b2d3ba-558c-4f5b-8637-e61eebaebd46" Jul 12 09:35:33.812775 containerd[1532]: time="2025-07-12T09:35:33.812578591Z" level=error msg="Failed to destroy network for sandbox \"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.812832 kubelet[2669]: E0712 09:35:33.812136 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.812832 kubelet[2669]: E0712 09:35:33.812194 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.812832 kubelet[2669]: E0712 09:35:33.812208 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jfc5p" Jul 12 09:35:33.812939 kubelet[2669]: E0712 09:35:33.812264 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-jfc5p_calico-system(73ac4ac9-a325-4d79-be9f-08af13edaac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-jfc5p_calico-system(73ac4ac9-a325-4d79-be9f-08af13edaac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a231629f0173fcaa1e5654020e1a21f5bd70d6a8622329b6ec65fd248099b13d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-jfc5p" podUID="73ac4ac9-a325-4d79-be9f-08af13edaac2" Jul 12 09:35:33.814018 containerd[1532]: time="2025-07-12T09:35:33.813952072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5649bd8fdd-75swd,Uid:c867975f-8565-4822-aed1-0b085f920c44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.814222 kubelet[2669]: E0712 09:35:33.814187 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.814282 kubelet[2669]: E0712 09:35:33.814260 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5649bd8fdd-75swd" Jul 12 09:35:33.814311 kubelet[2669]: E0712 09:35:33.814286 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5649bd8fdd-75swd" Jul 12 09:35:33.814355 kubelet[2669]: E0712 09:35:33.814326 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5649bd8fdd-75swd_calico-system(c867975f-8565-4822-aed1-0b085f920c44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5649bd8fdd-75swd_calico-system(c867975f-8565-4822-aed1-0b085f920c44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"363be8cbd914eb09c41936ca8f578ca9697b3c3232f3ef0ffe7b28c060e9686e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5649bd8fdd-75swd" podUID="c867975f-8565-4822-aed1-0b085f920c44" Jul 12 09:35:33.815479 containerd[1532]: time="2025-07-12T09:35:33.815442716Z" level=error msg="Failed to destroy network for sandbox \"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.818076 containerd[1532]: time="2025-07-12T09:35:33.817920590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6b96db46-p7s4x,Uid:1e8d79fd-65fe-4e22-927d-2c54d9d8d62f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.818169 kubelet[2669]: E0712 09:35:33.818116 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.818169 kubelet[2669]: E0712 09:35:33.818154 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" Jul 12 09:35:33.818229 kubelet[2669]: E0712 09:35:33.818170 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" Jul 12 09:35:33.818264 kubelet[2669]: E0712 09:35:33.818220 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f6b96db46-p7s4x_calico-system(1e8d79fd-65fe-4e22-927d-2c54d9d8d62f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f6b96db46-p7s4x_calico-system(1e8d79fd-65fe-4e22-927d-2c54d9d8d62f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b03ccb7e2ea83e337418ca2f2a090b3acac27b4212a4faa03a68c3ad0da745\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" podUID="1e8d79fd-65fe-4e22-927d-2c54d9d8d62f" Jul 12 09:35:33.819859 containerd[1532]: time="2025-07-12T09:35:33.819823806Z" level=error msg="Failed to destroy network for sandbox \"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.821486 containerd[1532]: time="2025-07-12T09:35:33.821420094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zb4v,Uid:71284256-c3ce-4e07-92e5-9f23490080e5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.821706 kubelet[2669]: E0712 09:35:33.821669 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.821751 kubelet[2669]: E0712 09:35:33.821720 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zb4v" Jul 12 09:35:33.821751 kubelet[2669]: E0712 09:35:33.821739 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zb4v" Jul 12 09:35:33.821816 kubelet[2669]: E0712 09:35:33.821777 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4zb4v_kube-system(71284256-c3ce-4e07-92e5-9f23490080e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4zb4v_kube-system(71284256-c3ce-4e07-92e5-9f23490080e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bf0482ad4da66ef17cd72039ebaf504582aa75151f1027302837befe44ecd4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4zb4v" podUID="71284256-c3ce-4e07-92e5-9f23490080e5" Jul 12 09:35:33.823997 containerd[1532]: time="2025-07-12T09:35:33.823834966Z" level=error msg="Failed to destroy network for sandbox \"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.825194 containerd[1532]: time="2025-07-12T09:35:33.825159125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-pxzqd,Uid:3fa3e400-f96b-4c76-a280-ab4e8cd5210e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.825502 kubelet[2669]: E0712 09:35:33.825469 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.825558 kubelet[2669]: E0712 09:35:33.825514 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" Jul 12 09:35:33.825558 kubelet[2669]: E0712 09:35:33.825531 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" Jul 12 09:35:33.825601 kubelet[2669]: E0712 09:35:33.825578 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc55dbbd8-pxzqd_calico-apiserver(3fa3e400-f96b-4c76-a280-ab4e8cd5210e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc55dbbd8-pxzqd_calico-apiserver(3fa3e400-f96b-4c76-a280-ab4e8cd5210e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77cd19fdda1b6bbbf029ddbeed26b47477b53df0944524e3db9570bd6780e579\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" podUID="3fa3e400-f96b-4c76-a280-ab4e8cd5210e" Jul 12 09:35:33.929115 systemd[1]: Created slice kubepods-besteffort-podcffc9272_54c6_4b71_afb4_040238028993.slice - libcontainer container kubepods-besteffort-podcffc9272_54c6_4b71_afb4_040238028993.slice. Jul 12 09:35:33.931347 containerd[1532]: time="2025-07-12T09:35:33.931285963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tsbtk,Uid:cffc9272-54c6-4b71-afb4-040238028993,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:33.972032 containerd[1532]: time="2025-07-12T09:35:33.971979654Z" level=error msg="Failed to destroy network for sandbox \"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.973086 containerd[1532]: time="2025-07-12T09:35:33.973053686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tsbtk,Uid:cffc9272-54c6-4b71-afb4-040238028993,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 09:35:33.973347 kubelet[2669]: E0712 09:35:33.973307 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 
09:35:33.973672 kubelet[2669]: E0712 09:35:33.973371 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tsbtk" Jul 12 09:35:33.973672 kubelet[2669]: E0712 09:35:33.973390 2669 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tsbtk" Jul 12 09:35:33.973672 kubelet[2669]: E0712 09:35:33.973434 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tsbtk_calico-system(cffc9272-54c6-4b71-afb4-040238028993)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tsbtk_calico-system(cffc9272-54c6-4b71-afb4-040238028993)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e196acd17e7ad54c1a168ef5e390d0db8dba4557e88cf8a119a72a45b61e21d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tsbtk" podUID="cffc9272-54c6-4b71-afb4-040238028993" Jul 12 09:35:33.997153 containerd[1532]: time="2025-07-12T09:35:33.996962077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 09:35:34.252638 systemd[1]: 
run-netns-cni\x2dea5d5bd7\x2d9949\x2d28e5\x2d0a4a\x2dc473067d94c0.mount: Deactivated successfully. Jul 12 09:35:34.252741 systemd[1]: run-netns-cni\x2dc317d3c6\x2d1aaf\x2d4d80\x2da5ba\x2d936d30594dc5.mount: Deactivated successfully. Jul 12 09:35:34.252787 systemd[1]: run-netns-cni\x2dabff49a8\x2dbaad\x2da76d\x2d9dfa\x2d1cd08ffe8292.mount: Deactivated successfully. Jul 12 09:35:34.252843 systemd[1]: run-netns-cni\x2d9ec14763\x2d84f0\x2df5db\x2d0460\x2de6a94563fff2.mount: Deactivated successfully. Jul 12 09:35:37.747284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241250616.mount: Deactivated successfully. Jul 12 09:35:38.063628 containerd[1532]: time="2025-07-12T09:35:38.063515855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 09:35:38.071573 containerd[1532]: time="2025-07-12T09:35:38.071526228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.074403826s" Jul 12 09:35:38.071573 containerd[1532]: time="2025-07-12T09:35:38.071564749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 09:35:38.072971 containerd[1532]: time="2025-07-12T09:35:38.072940139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:38.075254 containerd[1532]: time="2025-07-12T09:35:38.075225588Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 
09:35:38.083018 containerd[1532]: time="2025-07-12T09:35:38.082979075Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 09:35:38.084467 containerd[1532]: time="2025-07-12T09:35:38.084306903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:38.091625 containerd[1532]: time="2025-07-12T09:35:38.091597461Z" level=info msg="Container 6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:38.100706 containerd[1532]: time="2025-07-12T09:35:38.100652576Z" level=info msg="CreateContainer within sandbox \"65c5d5d16c4426c58a8b45e01197ee8ee3017d846cef2f427d71903378de67c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\"" Jul 12 09:35:38.101210 containerd[1532]: time="2025-07-12T09:35:38.101090185Z" level=info msg="StartContainer for \"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\"" Jul 12 09:35:38.103899 containerd[1532]: time="2025-07-12T09:35:38.103848445Z" level=info msg="connecting to shim 6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1" address="unix:///run/containerd/s/08f59b616a20de4be604305071f637d16e8ddd7cc4825a291259d7495b7f7aeb" protocol=ttrpc version=3 Jul 12 09:35:38.132964 systemd[1]: Started cri-containerd-6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1.scope - libcontainer container 6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1. 
Jul 12 09:35:38.202894 containerd[1532]: time="2025-07-12T09:35:38.202801657Z" level=info msg="StartContainer for \"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\" returns successfully" Jul 12 09:35:38.380904 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 09:35:38.381022 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 09:35:38.568567 kubelet[2669]: I0712 09:35:38.568499 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zcvm\" (UniqueName: \"kubernetes.io/projected/c867975f-8565-4822-aed1-0b085f920c44-kube-api-access-6zcvm\") pod \"c867975f-8565-4822-aed1-0b085f920c44\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " Jul 12 09:35:38.568947 kubelet[2669]: I0712 09:35:38.568593 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c867975f-8565-4822-aed1-0b085f920c44-whisker-ca-bundle\") pod \"c867975f-8565-4822-aed1-0b085f920c44\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " Jul 12 09:35:38.568947 kubelet[2669]: I0712 09:35:38.568628 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c867975f-8565-4822-aed1-0b085f920c44-whisker-backend-key-pair\") pod \"c867975f-8565-4822-aed1-0b085f920c44\" (UID: \"c867975f-8565-4822-aed1-0b085f920c44\") " Jul 12 09:35:38.568998 kubelet[2669]: I0712 09:35:38.568969 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c867975f-8565-4822-aed1-0b085f920c44-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c867975f-8565-4822-aed1-0b085f920c44" (UID: "c867975f-8565-4822-aed1-0b085f920c44"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 09:35:38.569270 kubelet[2669]: I0712 09:35:38.569243 2669 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c867975f-8565-4822-aed1-0b085f920c44-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 09:35:38.574246 kubelet[2669]: I0712 09:35:38.574196 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c867975f-8565-4822-aed1-0b085f920c44-kube-api-access-6zcvm" (OuterVolumeSpecName: "kube-api-access-6zcvm") pod "c867975f-8565-4822-aed1-0b085f920c44" (UID: "c867975f-8565-4822-aed1-0b085f920c44"). InnerVolumeSpecName "kube-api-access-6zcvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 09:35:38.581046 kubelet[2669]: I0712 09:35:38.581016 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c867975f-8565-4822-aed1-0b085f920c44-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c867975f-8565-4822-aed1-0b085f920c44" (UID: "c867975f-8565-4822-aed1-0b085f920c44"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 09:35:38.670360 kubelet[2669]: I0712 09:35:38.670221 2669 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c867975f-8565-4822-aed1-0b085f920c44-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 09:35:38.670360 kubelet[2669]: I0712 09:35:38.670264 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zcvm\" (UniqueName: \"kubernetes.io/projected/c867975f-8565-4822-aed1-0b085f920c44-kube-api-access-6zcvm\") on node \"localhost\" DevicePath \"\"" Jul 12 09:35:38.747792 systemd[1]: var-lib-kubelet-pods-c867975f\x2d8565\x2d4822\x2daed1\x2d0b085f920c44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6zcvm.mount: Deactivated successfully. Jul 12 09:35:38.747895 systemd[1]: var-lib-kubelet-pods-c867975f\x2d8565\x2d4822\x2daed1\x2d0b085f920c44-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 09:35:39.014430 systemd[1]: Removed slice kubepods-besteffort-podc867975f_8565_4822_aed1_0b085f920c44.slice - libcontainer container kubepods-besteffort-podc867975f_8565_4822_aed1_0b085f920c44.slice. 
Jul 12 09:35:39.027269 kubelet[2669]: I0712 09:35:39.027080 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jjk6q" podStartSLOduration=1.8975066900000002 podStartE2EDuration="14.027064384s" podCreationTimestamp="2025-07-12 09:35:25 +0000 UTC" firstStartedPulling="2025-07-12 09:35:25.942500026 +0000 UTC m=+18.101749585" lastFinishedPulling="2025-07-12 09:35:38.07205772 +0000 UTC m=+30.231307279" observedRunningTime="2025-07-12 09:35:39.026276808 +0000 UTC m=+31.185526367" watchObservedRunningTime="2025-07-12 09:35:39.027064384 +0000 UTC m=+31.186313943" Jul 12 09:35:39.029590 kubelet[2669]: I0712 09:35:39.029477 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 09:35:39.031575 kubelet[2669]: E0712 09:35:39.031541 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:39.085592 systemd[1]: Created slice kubepods-besteffort-pod6171a8bc_c8d2_4056_8d51_aeb5d49748cc.slice - libcontainer container kubepods-besteffort-pod6171a8bc_c8d2_4056_8d51_aeb5d49748cc.slice. 
Jul 12 09:35:39.173849 kubelet[2669]: I0712 09:35:39.173487 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6171a8bc-c8d2-4056-8d51-aeb5d49748cc-whisker-backend-key-pair\") pod \"whisker-5c94dbd876-w5s6d\" (UID: \"6171a8bc-c8d2-4056-8d51-aeb5d49748cc\") " pod="calico-system/whisker-5c94dbd876-w5s6d" Jul 12 09:35:39.173849 kubelet[2669]: I0712 09:35:39.173538 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98cp\" (UniqueName: \"kubernetes.io/projected/6171a8bc-c8d2-4056-8d51-aeb5d49748cc-kube-api-access-t98cp\") pod \"whisker-5c94dbd876-w5s6d\" (UID: \"6171a8bc-c8d2-4056-8d51-aeb5d49748cc\") " pod="calico-system/whisker-5c94dbd876-w5s6d" Jul 12 09:35:39.173849 kubelet[2669]: I0712 09:35:39.173557 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6171a8bc-c8d2-4056-8d51-aeb5d49748cc-whisker-ca-bundle\") pod \"whisker-5c94dbd876-w5s6d\" (UID: \"6171a8bc-c8d2-4056-8d51-aeb5d49748cc\") " pod="calico-system/whisker-5c94dbd876-w5s6d" Jul 12 09:35:39.389195 containerd[1532]: time="2025-07-12T09:35:39.389076137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c94dbd876-w5s6d,Uid:6171a8bc-c8d2-4056-8d51-aeb5d49748cc,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:39.591688 systemd-networkd[1433]: cali82b4c9a1fd2: Link UP Jul 12 09:35:39.591965 systemd-networkd[1433]: cali82b4c9a1fd2: Gained carrier Jul 12 09:35:39.603520 containerd[1532]: 2025-07-12 09:35:39.412 [INFO][3803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 09:35:39.603520 containerd[1532]: 2025-07-12 09:35:39.458 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--5c94dbd876--w5s6d-eth0 whisker-5c94dbd876- calico-system 6171a8bc-c8d2-4056-8d51-aeb5d49748cc 877 0 2025-07-12 09:35:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c94dbd876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c94dbd876-w5s6d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali82b4c9a1fd2 [] [] }} ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-" Jul 12 09:35:39.603520 containerd[1532]: 2025-07-12 09:35:39.459 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.603520 containerd[1532]: 2025-07-12 09:35:39.546 [INFO][3818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" HandleID="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Workload="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.546 [INFO][3818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" HandleID="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Workload="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039c990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c94dbd876-w5s6d", "timestamp":"2025-07-12 09:35:39.546556119 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.546 [INFO][3818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.546 [INFO][3818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.547 [INFO][3818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.560 [INFO][3818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" host="localhost" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.565 [INFO][3818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.569 [INFO][3818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.571 [INFO][3818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.573 [INFO][3818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:39.603777 containerd[1532]: 2025-07-12 09:35:39.573 [INFO][3818] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" host="localhost" Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.574 [INFO][3818] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800 Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.577 [INFO][3818] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" host="localhost" Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.582 [INFO][3818] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" host="localhost" Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.582 [INFO][3818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" host="localhost" Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.582 [INFO][3818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 09:35:39.604052 containerd[1532]: 2025-07-12 09:35:39.582 [INFO][3818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" HandleID="k8s-pod-network.82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Workload="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.604273 containerd[1532]: 2025-07-12 09:35:39.585 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c94dbd876--w5s6d-eth0", GenerateName:"whisker-5c94dbd876-", Namespace:"calico-system", SelfLink:"", UID:"6171a8bc-c8d2-4056-8d51-aeb5d49748cc", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c94dbd876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c94dbd876-w5s6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82b4c9a1fd2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:39.604273 containerd[1532]: 2025-07-12 09:35:39.585 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.604363 containerd[1532]: 2025-07-12 09:35:39.585 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82b4c9a1fd2 ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.604363 containerd[1532]: 2025-07-12 09:35:39.591 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.604424 containerd[1532]: 2025-07-12 09:35:39.592 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c94dbd876--w5s6d-eth0", GenerateName:"whisker-5c94dbd876-", Namespace:"calico-system", SelfLink:"", UID:"6171a8bc-c8d2-4056-8d51-aeb5d49748cc", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 39, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c94dbd876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800", Pod:"whisker-5c94dbd876-w5s6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82b4c9a1fd2", MAC:"ce:e1:04:db:18:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:39.604487 containerd[1532]: 2025-07-12 09:35:39.600 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" Namespace="calico-system" Pod="whisker-5c94dbd876-w5s6d" WorkloadEndpoint="localhost-k8s-whisker--5c94dbd876--w5s6d-eth0" Jul 12 09:35:39.634377 containerd[1532]: time="2025-07-12T09:35:39.634338612Z" level=info msg="connecting to shim 82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800" address="unix:///run/containerd/s/89b2a52af8675d0eaae5ad9ebfd62eb5a60a4e914f1019f9ba9399b0b2945940" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:39.653103 systemd[1]: Started cri-containerd-82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800.scope - libcontainer container 82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800. 
Jul 12 09:35:39.687450 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:39.740965 containerd[1532]: time="2025-07-12T09:35:39.740920365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c94dbd876-w5s6d,Uid:6171a8bc-c8d2-4056-8d51-aeb5d49748cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800\"" Jul 12 09:35:39.743393 containerd[1532]: time="2025-07-12T09:35:39.743359855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 09:35:39.941570 kubelet[2669]: I0712 09:35:39.941466 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c867975f-8565-4822-aed1-0b085f920c44" path="/var/lib/kubelet/pods/c867975f-8565-4822-aed1-0b085f920c44/volumes" Jul 12 09:35:40.019542 kubelet[2669]: E0712 09:35:40.017751 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:40.019542 kubelet[2669]: I0712 09:35:40.018291 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 09:35:40.063623 systemd-networkd[1433]: vxlan.calico: Link UP Jul 12 09:35:40.063629 systemd-networkd[1433]: vxlan.calico: Gained carrier Jul 12 09:35:40.742243 containerd[1532]: time="2025-07-12T09:35:40.741932092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:40.742691 containerd[1532]: time="2025-07-12T09:35:40.742641786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 09:35:40.743077 containerd[1532]: time="2025-07-12T09:35:40.743053554Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:40.745150 containerd[1532]: time="2025-07-12T09:35:40.745080792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:40.745975 containerd[1532]: time="2025-07-12T09:35:40.745936648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.002540313s" Jul 12 09:35:40.745975 containerd[1532]: time="2025-07-12T09:35:40.745971249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 09:35:40.748998 containerd[1532]: time="2025-07-12T09:35:40.748966546Z" level=info msg="CreateContainer within sandbox \"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 09:35:40.755378 containerd[1532]: time="2025-07-12T09:35:40.755337826Z" level=info msg="Container 2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:40.762364 containerd[1532]: time="2025-07-12T09:35:40.762314318Z" level=info msg="CreateContainer within sandbox \"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf\"" Jul 12 09:35:40.763145 containerd[1532]: time="2025-07-12T09:35:40.763027972Z" level=info msg="StartContainer for 
\"2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf\"" Jul 12 09:35:40.764088 containerd[1532]: time="2025-07-12T09:35:40.764054991Z" level=info msg="connecting to shim 2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf" address="unix:///run/containerd/s/89b2a52af8675d0eaae5ad9ebfd62eb5a60a4e914f1019f9ba9399b0b2945940" protocol=ttrpc version=3 Jul 12 09:35:40.785976 systemd[1]: Started cri-containerd-2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf.scope - libcontainer container 2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf. Jul 12 09:35:40.831547 containerd[1532]: time="2025-07-12T09:35:40.831498509Z" level=info msg="StartContainer for \"2f08b18af440b494d7c7634ef335f30756e19cadadf2e9486f7e969bd2608bbf\" returns successfully" Jul 12 09:35:40.833659 containerd[1532]: time="2025-07-12T09:35:40.833584908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 09:35:41.309137 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jul 12 09:35:41.500945 systemd-networkd[1433]: cali82b4c9a1fd2: Gained IPv6LL Jul 12 09:35:42.189386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933388766.mount: Deactivated successfully. 
Jul 12 09:35:42.202062 containerd[1532]: time="2025-07-12T09:35:42.202014141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:42.203226 containerd[1532]: time="2025-07-12T09:35:42.202732993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 09:35:42.203798 containerd[1532]: time="2025-07-12T09:35:42.203758610Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:42.206164 containerd[1532]: time="2025-07-12T09:35:42.205689162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:42.206655 containerd[1532]: time="2025-07-12T09:35:42.206598937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.372980188s" Jul 12 09:35:42.206655 containerd[1532]: time="2025-07-12T09:35:42.206632618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 09:35:42.210054 containerd[1532]: time="2025-07-12T09:35:42.210020994Z" level=info msg="CreateContainer within sandbox \"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 09:35:42.215800 
containerd[1532]: time="2025-07-12T09:35:42.215711249Z" level=info msg="Container 42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:42.224823 containerd[1532]: time="2025-07-12T09:35:42.224779200Z" level=info msg="CreateContainer within sandbox \"82c14abc9d438d9879f2f9c5f772617b5afd37dbe70eeeb6bfc7b87cfa91b800\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e\"" Jul 12 09:35:42.225315 containerd[1532]: time="2025-07-12T09:35:42.225284128Z" level=info msg="StartContainer for \"42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e\"" Jul 12 09:35:42.226670 containerd[1532]: time="2025-07-12T09:35:42.226445108Z" level=info msg="connecting to shim 42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e" address="unix:///run/containerd/s/89b2a52af8675d0eaae5ad9ebfd62eb5a60a4e914f1019f9ba9399b0b2945940" protocol=ttrpc version=3 Jul 12 09:35:42.255980 systemd[1]: Started cri-containerd-42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e.scope - libcontainer container 42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e. 
Jul 12 09:35:42.295081 containerd[1532]: time="2025-07-12T09:35:42.295044129Z" level=info msg="StartContainer for \"42d012120c76753956319389b8d2921afcea220901dd22adfb252a3577083c1e\" returns successfully" Jul 12 09:35:43.038560 kubelet[2669]: I0712 09:35:43.038390 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c94dbd876-w5s6d" podStartSLOduration=1.573429864 podStartE2EDuration="4.038371416s" podCreationTimestamp="2025-07-12 09:35:39 +0000 UTC" firstStartedPulling="2025-07-12 09:35:39.742921086 +0000 UTC m=+31.902170645" lastFinishedPulling="2025-07-12 09:35:42.207862638 +0000 UTC m=+34.367112197" observedRunningTime="2025-07-12 09:35:43.037569526 +0000 UTC m=+35.196819165" watchObservedRunningTime="2025-07-12 09:35:43.038371416 +0000 UTC m=+35.197620975" Jul 12 09:35:44.922630 kubelet[2669]: E0712 09:35:44.922516 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:44.923554 containerd[1532]: time="2025-07-12T09:35:44.922943428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tsbtk,Uid:cffc9272-54c6-4b71-afb4-040238028993,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:44.923554 containerd[1532]: time="2025-07-12T09:35:44.922944069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-vqkln,Uid:7de32244-939d-4ce0-9b95-f1b82893dcc6,Namespace:calico-apiserver,Attempt:0,}" Jul 12 09:35:44.924092 containerd[1532]: time="2025-07-12T09:35:44.923880719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zb4v,Uid:71284256-c3ce-4e07-92e5-9f23490080e5,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:45.073103 systemd-networkd[1433]: cali6c4df686d2d: Link UP Jul 12 09:35:45.074219 systemd-networkd[1433]: cali6c4df686d2d: Gained carrier Jul 12 09:35:45.090200 containerd[1532]: 2025-07-12 
09:35:44.990 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0 coredns-668d6bf9bc- kube-system 71284256-c3ce-4e07-92e5-9f23490080e5 804 0 2025-07-12 09:35:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4zb4v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c4df686d2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-" Jul 12 09:35:45.090200 containerd[1532]: 2025-07-12 09:35:44.990 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090200 containerd[1532]: 2025-07-12 09:35:45.027 [INFO][4214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" HandleID="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Workload="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.028 [INFO][4214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" HandleID="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Workload="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003436c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4zb4v", "timestamp":"2025-07-12 09:35:45.027966078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.028 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.028 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.028 [INFO][4214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.040 [INFO][4214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" host="localhost" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.044 [INFO][4214] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.050 [INFO][4214] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.052 [INFO][4214] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.054 [INFO][4214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.090421 containerd[1532]: 2025-07-12 09:35:45.054 [INFO][4214] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" host="localhost" Jul 12 09:35:45.090616 
containerd[1532]: 2025-07-12 09:35:45.056 [INFO][4214] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830 Jul 12 09:35:45.090616 containerd[1532]: 2025-07-12 09:35:45.060 [INFO][4214] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" host="localhost" Jul 12 09:35:45.090616 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4214] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" host="localhost" Jul 12 09:35:45.090616 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" host="localhost" Jul 12 09:35:45.090616 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 09:35:45.090616 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" HandleID="k8s-pod-network.761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Workload="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090725 containerd[1532]: 2025-07-12 09:35:45.069 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71284256-c3ce-4e07-92e5-9f23490080e5", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4zb4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c4df686d2d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.090792 containerd[1532]: 2025-07-12 09:35:45.069 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090792 containerd[1532]: 2025-07-12 09:35:45.069 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c4df686d2d ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090792 containerd[1532]: 2025-07-12 09:35:45.074 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.090948 containerd[1532]: 2025-07-12 09:35:45.074 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71284256-c3ce-4e07-92e5-9f23490080e5", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830", Pod:"coredns-668d6bf9bc-4zb4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c4df686d2d", MAC:"66:27:ea:fb:56:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.090948 containerd[1532]: 2025-07-12 09:35:45.086 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zb4v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zb4v-eth0" Jul 12 09:35:45.158182 containerd[1532]: time="2025-07-12T09:35:45.158088385Z" level=info msg="connecting to shim 761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830" address="unix:///run/containerd/s/c3064824649cfbb4dd1c71d965f25d625d6348619d17efe4eb45524c633931ca" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:45.177551 systemd-networkd[1433]: cali2f778fefd02: Link UP Jul 12 09:35:45.179722 systemd-networkd[1433]: cali2f778fefd02: Gained carrier Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:44.995 [INFO][4169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0 calico-apiserver-7cc55dbbd8- calico-apiserver 7de32244-939d-4ce0-9b95-f1b82893dcc6 807 0 2025-07-12 09:35:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc55dbbd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cc55dbbd8-vqkln eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f778fefd02 [] [] }} ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:44.995 [INFO][4169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.042 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" HandleID="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.042 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" HandleID="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cc55dbbd8-vqkln", "timestamp":"2025-07-12 09:35:45.042074037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.042 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.066 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.139 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.145 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.151 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.154 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.157 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.157 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.159 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775 Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.163 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" host="localhost" Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 09:35:45.197738 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" HandleID="k8s-pod-network.8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.173 [INFO][4169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0", GenerateName:"calico-apiserver-7cc55dbbd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7de32244-939d-4ce0-9b95-f1b82893dcc6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc55dbbd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cc55dbbd8-vqkln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f778fefd02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.173 [INFO][4169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.173 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f778fefd02 ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.179 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.184 [INFO][4169] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0", GenerateName:"calico-apiserver-7cc55dbbd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7de32244-939d-4ce0-9b95-f1b82893dcc6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc55dbbd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775", Pod:"calico-apiserver-7cc55dbbd8-vqkln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f778fefd02", MAC:"9a:fa:a3:dd:b4:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.198334 containerd[1532]: 2025-07-12 09:35:45.195 [INFO][4169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-vqkln" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--vqkln-eth0" Jul 12 09:35:45.201977 systemd[1]: Started cri-containerd-761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830.scope - libcontainer container 761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830. Jul 12 09:35:45.218328 containerd[1532]: time="2025-07-12T09:35:45.218286063Z" level=info msg="connecting to shim 8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775" address="unix:///run/containerd/s/6092f825c71ef095a13e567046050e5f5c07c06d3c00180bf94830ba13275f6f" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:45.219233 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:45.239981 systemd[1]: Started cri-containerd-8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775.scope - libcontainer container 8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775. 
Jul 12 09:35:45.243350 containerd[1532]: time="2025-07-12T09:35:45.243212145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zb4v,Uid:71284256-c3ce-4e07-92e5-9f23490080e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830\"" Jul 12 09:35:45.244280 kubelet[2669]: E0712 09:35:45.244247 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:45.247475 containerd[1532]: time="2025-07-12T09:35:45.247351271Z" level=info msg="CreateContainer within sandbox \"761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 09:35:45.256973 containerd[1532]: time="2025-07-12T09:35:45.256908939Z" level=info msg="Container bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:45.260948 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:45.262921 containerd[1532]: time="2025-07-12T09:35:45.262888406Z" level=info msg="CreateContainer within sandbox \"761dc48aab544f5a91581f1b48f253c42e4196f8c2f4a990847cb249b133b830\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff\"" Jul 12 09:35:45.263621 containerd[1532]: time="2025-07-12T09:35:45.263593014Z" level=info msg="StartContainer for \"bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff\"" Jul 12 09:35:45.266287 containerd[1532]: time="2025-07-12T09:35:45.266238604Z" level=info msg="connecting to shim bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff" address="unix:///run/containerd/s/c3064824649cfbb4dd1c71d965f25d625d6348619d17efe4eb45524c633931ca" protocol=ttrpc version=3 
Jul 12 09:35:45.283433 systemd-networkd[1433]: cali61f1711558e: Link UP Jul 12 09:35:45.285870 systemd-networkd[1433]: cali61f1711558e: Gained carrier Jul 12 09:35:45.299784 systemd[1]: Started cri-containerd-bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff.scope - libcontainer container bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff. Jul 12 09:35:45.306201 containerd[1532]: time="2025-07-12T09:35:45.305818930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-vqkln,Uid:7de32244-939d-4ce0-9b95-f1b82893dcc6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775\"" Jul 12 09:35:45.307854 containerd[1532]: time="2025-07-12T09:35:45.307677831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.007 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tsbtk-eth0 csi-node-driver- calico-system cffc9272-54c6-4b71-afb4-040238028993 701 0 2025-07-12 09:35:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tsbtk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali61f1711558e [] [] }} ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.007 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.050 [INFO][4228] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" HandleID="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Workload="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.050 [INFO][4228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" HandleID="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Workload="localhost-k8s-csi--node--driver--tsbtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tsbtk", "timestamp":"2025-07-12 09:35:45.050254809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.050 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.170 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.240 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.247 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.253 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.255 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.259 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.259 [INFO][4228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.261 [INFO][4228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.265 [INFO][4228] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.274 [INFO][4228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.274 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" host="localhost" Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.274 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 09:35:45.314562 containerd[1532]: 2025-07-12 09:35:45.274 [INFO][4228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" HandleID="k8s-pod-network.ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Workload="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.278 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tsbtk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cffc9272-54c6-4b71-afb4-040238028993", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tsbtk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61f1711558e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.278 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.278 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61f1711558e ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.286 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.288 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" 
Namespace="calico-system" Pod="csi-node-driver-tsbtk" WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tsbtk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cffc9272-54c6-4b71-afb4-040238028993", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b", Pod:"csi-node-driver-tsbtk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61f1711558e", MAC:"de:eb:62:39:6c:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:45.315662 containerd[1532]: 2025-07-12 09:35:45.306 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" Namespace="calico-system" Pod="csi-node-driver-tsbtk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tsbtk-eth0" Jul 12 09:35:45.342614 containerd[1532]: time="2025-07-12T09:35:45.342571545Z" level=info msg="connecting to shim ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b" address="unix:///run/containerd/s/52961d22e2be3e12414a5250dbf05a4e48bc087032ae69505573e247d8fb6d9d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:45.349828 containerd[1532]: time="2025-07-12T09:35:45.349543383Z" level=info msg="StartContainer for \"bfaba13b24191a4277a26d5d96913a83d91dbb1761f90dc423175191d96319ff\" returns successfully" Jul 12 09:35:45.372115 systemd[1]: Started cri-containerd-ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b.scope - libcontainer container ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b. Jul 12 09:35:45.388596 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:45.405732 containerd[1532]: time="2025-07-12T09:35:45.405696457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tsbtk,Uid:cffc9272-54c6-4b71-afb4-040238028993,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b\"" Jul 12 09:35:46.042093 kubelet[2669]: E0712 09:35:46.042027 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:46.060214 kubelet[2669]: I0712 09:35:46.054043 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4zb4v" podStartSLOduration=32.05402731 podStartE2EDuration="32.05402731s" podCreationTimestamp="2025-07-12 09:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:46.052990058 +0000 UTC m=+38.212239617" 
watchObservedRunningTime="2025-07-12 09:35:46.05402731 +0000 UTC m=+38.213276869" Jul 12 09:35:46.557134 systemd-networkd[1433]: cali2f778fefd02: Gained IPv6LL Jul 12 09:35:47.005055 systemd-networkd[1433]: cali6c4df686d2d: Gained IPv6LL Jul 12 09:35:47.043516 kubelet[2669]: E0712 09:35:47.043487 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:47.198331 systemd-networkd[1433]: cali61f1711558e: Gained IPv6LL Jul 12 09:35:47.209353 containerd[1532]: time="2025-07-12T09:35:47.209301114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:47.210405 containerd[1532]: time="2025-07-12T09:35:47.210377245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 09:35:47.211035 containerd[1532]: time="2025-07-12T09:35:47.211003452Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:47.212926 containerd[1532]: time="2025-07-12T09:35:47.212859872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:47.213508 containerd[1532]: time="2025-07-12T09:35:47.213479318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.905751166s" Jul 12 09:35:47.213665 
containerd[1532]: time="2025-07-12T09:35:47.213590559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 09:35:47.215069 containerd[1532]: time="2025-07-12T09:35:47.214849453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 09:35:47.215858 containerd[1532]: time="2025-07-12T09:35:47.215667581Z" level=info msg="CreateContainer within sandbox \"8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 09:35:47.226448 containerd[1532]: time="2025-07-12T09:35:47.226209454Z" level=info msg="Container 7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:47.233253 containerd[1532]: time="2025-07-12T09:35:47.233203768Z" level=info msg="CreateContainer within sandbox \"8e53027ce37f76927e085bbc830f6f8dd98d44bc4c360640034ff7f60682a775\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200\"" Jul 12 09:35:47.233921 containerd[1532]: time="2025-07-12T09:35:47.233846775Z" level=info msg="StartContainer for \"7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200\"" Jul 12 09:35:47.235879 containerd[1532]: time="2025-07-12T09:35:47.235823076Z" level=info msg="connecting to shim 7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200" address="unix:///run/containerd/s/6092f825c71ef095a13e567046050e5f5c07c06d3c00180bf94830ba13275f6f" protocol=ttrpc version=3 Jul 12 09:35:47.258986 systemd[1]: Started cri-containerd-7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200.scope - libcontainer container 7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200. 
Jul 12 09:35:47.293250 containerd[1532]: time="2025-07-12T09:35:47.293212728Z" level=info msg="StartContainer for \"7c03c8623184992a3e373a8d4fdfb24638bf72360dc87c3e02705693ce866200\" returns successfully" Jul 12 09:35:47.923603 kubelet[2669]: E0712 09:35:47.923258 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:47.923932 containerd[1532]: time="2025-07-12T09:35:47.923874293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-pxzqd,Uid:3fa3e400-f96b-4c76-a280-ab4e8cd5210e,Namespace:calico-apiserver,Attempt:0,}" Jul 12 09:35:47.924346 containerd[1532]: time="2025-07-12T09:35:47.923879853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jfc5p,Uid:73ac4ac9-a325-4d79-be9f-08af13edaac2,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:47.925070 containerd[1532]: time="2025-07-12T09:35:47.925005705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9n2j,Uid:a3b2d3ba-558c-4f5b-8637-e61eebaebd46,Namespace:kube-system,Attempt:0,}" Jul 12 09:35:48.049743 kubelet[2669]: E0712 09:35:48.048911 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:48.073975 systemd-networkd[1433]: cali955885b6712: Link UP Jul 12 09:35:48.074889 systemd-networkd[1433]: cali955885b6712: Gained carrier Jul 12 09:35:48.084954 kubelet[2669]: I0712 09:35:48.084447 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-vqkln" podStartSLOduration=23.177154918 podStartE2EDuration="25.084428741s" podCreationTimestamp="2025-07-12 09:35:23 +0000 UTC" firstStartedPulling="2025-07-12 09:35:45.307201826 +0000 UTC m=+37.466451385" lastFinishedPulling="2025-07-12 
09:35:47.214475689 +0000 UTC m=+39.373725208" observedRunningTime="2025-07-12 09:35:48.068553656 +0000 UTC m=+40.227803295" watchObservedRunningTime="2025-07-12 09:35:48.084428741 +0000 UTC m=+40.243678300" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:47.985 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0 goldmane-768f4c5c69- calico-system 73ac4ac9-a325-4d79-be9f-08af13edaac2 806 0 2025-07-12 09:35:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-jfc5p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali955885b6712 [] [] }} ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:47.985 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.024 [INFO][4549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" HandleID="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Workload="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.024 [INFO][4549] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" HandleID="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Workload="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-jfc5p", "timestamp":"2025-07-12 09:35:48.024408918 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.024 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.024 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.024 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.034 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.039 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.044 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.047 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.052 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.052 [INFO][4549] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.054 [INFO][4549] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.058 [INFO][4549] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.064 [INFO][4549] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.064 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" host="localhost" Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.064 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 09:35:48.093920 containerd[1532]: 2025-07-12 09:35:48.066 [INFO][4549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" HandleID="k8s-pod-network.a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Workload="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.071 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"73ac4ac9-a325-4d79-be9f-08af13edaac2", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-jfc5p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali955885b6712", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.071 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.071 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali955885b6712 ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.075 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.075 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"73ac4ac9-a325-4d79-be9f-08af13edaac2", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a", Pod:"goldmane-768f4c5c69-jfc5p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali955885b6712", MAC:"ba:23:26:f2:82:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.094746 containerd[1532]: 2025-07-12 09:35:48.089 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" Namespace="calico-system" Pod="goldmane-768f4c5c69-jfc5p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jfc5p-eth0" Jul 12 09:35:48.155423 containerd[1532]: time="2025-07-12T09:35:48.155377196Z" level=info msg="connecting to shim a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a" address="unix:///run/containerd/s/481941b93f21ce0a5b9cebba87ddac72d9b9ed74152b58445c09ffaf707f9c25" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:48.188963 systemd[1]: Started cri-containerd-a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a.scope - libcontainer container a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a. 
Jul 12 09:35:48.191908 systemd-networkd[1433]: calidebe9b58e96: Link UP Jul 12 09:35:48.193138 systemd-networkd[1433]: calidebe9b58e96: Gained carrier Jul 12 09:35:48.217799 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:47.994 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0 calico-apiserver-7cc55dbbd8- calico-apiserver 3fa3e400-f96b-4c76-a280-ab4e8cd5210e 798 0 2025-07-12 09:35:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc55dbbd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cc55dbbd8-pxzqd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidebe9b58e96 [] [] }} ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:47.994 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.027 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" HandleID="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" 
Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.027 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" HandleID="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cc55dbbd8-pxzqd", "timestamp":"2025-07-12 09:35:48.027335228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.027 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.065 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.065 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.137 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.148 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.153 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.156 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.159 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.159 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.161 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7 Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.166 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.176 [INFO][4555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.176 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" host="localhost" Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.176 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 09:35:48.228758 containerd[1532]: 2025-07-12 09:35:48.176 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" HandleID="k8s-pod-network.200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Workload="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.185 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0", GenerateName:"calico-apiserver-7cc55dbbd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3fa3e400-f96b-4c76-a280-ab4e8cd5210e", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc55dbbd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cc55dbbd8-pxzqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidebe9b58e96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.186 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.186 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidebe9b58e96 ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.195 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.195 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0", GenerateName:"calico-apiserver-7cc55dbbd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3fa3e400-f96b-4c76-a280-ab4e8cd5210e", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc55dbbd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7", Pod:"calico-apiserver-7cc55dbbd8-pxzqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidebe9b58e96", MAC:"be:2d:c4:e7:97:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.230166 containerd[1532]: 2025-07-12 09:35:48.216 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" Namespace="calico-apiserver" Pod="calico-apiserver-7cc55dbbd8-pxzqd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cc55dbbd8--pxzqd-eth0" Jul 12 09:35:48.234079 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:45236.service - OpenSSH per-connection server daemon (10.0.0.1:45236). Jul 12 09:35:48.267090 containerd[1532]: time="2025-07-12T09:35:48.267049515Z" level=info msg="connecting to shim 200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7" address="unix:///run/containerd/s/ce31bb591a329f697da93439ec49190dbd8b8479d7652133f6b06a4a117410a8" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:48.304331 containerd[1532]: time="2025-07-12T09:35:48.304269821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jfc5p,Uid:73ac4ac9-a325-4d79-be9f-08af13edaac2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a\"" Jul 12 09:35:48.314302 systemd-networkd[1433]: calic612ee80811: Link UP Jul 12 09:35:48.318112 systemd-networkd[1433]: calic612ee80811: Gained carrier Jul 12 09:35:48.328224 systemd[1]: Started cri-containerd-200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7.scope - libcontainer container 200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7. 
Jul 12 09:35:48.332820 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 45236 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:35:48.337863 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:47.997 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0 coredns-668d6bf9bc- kube-system a3b2d3ba-558c-4f5b-8637-e61eebaebd46 803 0 2025-07-12 09:35:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m9n2j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic612ee80811 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:47.997 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.028 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" HandleID="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Workload="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.028 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" HandleID="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Workload="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m9n2j", "timestamp":"2025-07-12 09:35:48.028568001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.028 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.180 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.185 [INFO][4562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.241 [INFO][4562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.248 [INFO][4562] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.263 [INFO][4562] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.269 [INFO][4562] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.274 [INFO][4562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.274 [INFO][4562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.281 [INFO][4562] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.288 [INFO][4562] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.295 [INFO][4562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.297 [INFO][4562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" host="localhost" Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.297 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 09:35:48.340103 containerd[1532]: 2025-07-12 09:35:48.297 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" HandleID="k8s-pod-network.bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Workload="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.304 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a3b2d3ba-558c-4f5b-8637-e61eebaebd46", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m9n2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic612ee80811", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.304 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.307 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic612ee80811 ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.319 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.320 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a3b2d3ba-558c-4f5b-8637-e61eebaebd46", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c", Pod:"coredns-668d6bf9bc-m9n2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic612ee80811", MAC:"36:bf:ed:a1:4f:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:48.340651 containerd[1532]: 2025-07-12 09:35:48.330 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9n2j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9n2j-eth0" Jul 12 09:35:48.346843 systemd-logind[1506]: New session 8 of user core. Jul 12 09:35:48.353996 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 09:35:48.402842 containerd[1532]: time="2025-07-12T09:35:48.401672631Z" level=info msg="connecting to shim bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c" address="unix:///run/containerd/s/e8a4b774d5bd65d1bafcb5bed03bfff40e593247b259028b955672d0705b21a4" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:48.406351 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:48.451015 systemd[1]: Started cri-containerd-bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c.scope - libcontainer container bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c. 
Jul 12 09:35:48.466656 containerd[1532]: time="2025-07-12T09:35:48.466605744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc55dbbd8-pxzqd,Uid:3fa3e400-f96b-4c76-a280-ab4e8cd5210e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7\"" Jul 12 09:35:48.469164 containerd[1532]: time="2025-07-12T09:35:48.469125410Z" level=info msg="CreateContainer within sandbox \"200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 09:35:48.479424 containerd[1532]: time="2025-07-12T09:35:48.478662829Z" level=info msg="Container ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:48.482213 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:48.518309 containerd[1532]: time="2025-07-12T09:35:48.518258640Z" level=info msg="CreateContainer within sandbox \"200f33d5a4bfc38139cf761ee68c127d35c97a118d22a36b5d15978a8956b3e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb\"" Jul 12 09:35:48.519955 containerd[1532]: time="2025-07-12T09:35:48.519906617Z" level=info msg="StartContainer for \"ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb\"" Jul 12 09:35:48.526347 containerd[1532]: time="2025-07-12T09:35:48.526319243Z" level=info msg="connecting to shim ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb" address="unix:///run/containerd/s/ce31bb591a329f697da93439ec49190dbd8b8479d7652133f6b06a4a117410a8" protocol=ttrpc version=3 Jul 12 09:35:48.538007 containerd[1532]: time="2025-07-12T09:35:48.537945964Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-m9n2j,Uid:a3b2d3ba-558c-4f5b-8637-e61eebaebd46,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c\"" Jul 12 09:35:48.541163 kubelet[2669]: E0712 09:35:48.541125 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:48.543945 containerd[1532]: time="2025-07-12T09:35:48.543869425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:48.544603 containerd[1532]: time="2025-07-12T09:35:48.544558793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 09:35:48.545972 containerd[1532]: time="2025-07-12T09:35:48.545929327Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:48.550073 containerd[1532]: time="2025-07-12T09:35:48.549619325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:48.551257 containerd[1532]: time="2025-07-12T09:35:48.551228542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.336347089s" Jul 12 09:35:48.551362 containerd[1532]: time="2025-07-12T09:35:48.551346223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 09:35:48.554153 containerd[1532]: time="2025-07-12T09:35:48.554130492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 09:35:48.556743 containerd[1532]: time="2025-07-12T09:35:48.556711759Z" level=info msg="CreateContainer within sandbox \"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 09:35:48.569981 containerd[1532]: time="2025-07-12T09:35:48.569736214Z" level=info msg="CreateContainer within sandbox \"bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 09:35:48.576953 systemd[1]: Started cri-containerd-ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb.scope - libcontainer container ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb. Jul 12 09:35:48.581482 containerd[1532]: time="2025-07-12T09:35:48.581392175Z" level=info msg="Container 72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:48.591457 containerd[1532]: time="2025-07-12T09:35:48.591392598Z" level=info msg="Container 4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:48.594759 containerd[1532]: time="2025-07-12T09:35:48.594298388Z" level=info msg="CreateContainer within sandbox \"bc6bc75b9d1cbf5048de27686aef0f0392e34a6d989e457dd3b6e9ae48679b7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e\"" Jul 12 09:35:48.596270 containerd[1532]: time="2025-07-12T09:35:48.596242889Z" level=info msg="StartContainer for \"72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e\"" Jul 12 09:35:48.598149 containerd[1532]: time="2025-07-12T09:35:48.598039587Z" level=info 
msg="connecting to shim 72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e" address="unix:///run/containerd/s/e8a4b774d5bd65d1bafcb5bed03bfff40e593247b259028b955672d0705b21a4" protocol=ttrpc version=3 Jul 12 09:35:48.609159 containerd[1532]: time="2025-07-12T09:35:48.608932700Z" level=info msg="CreateContainer within sandbox \"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a\"" Jul 12 09:35:48.611183 containerd[1532]: time="2025-07-12T09:35:48.611152923Z" level=info msg="StartContainer for \"4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a\"" Jul 12 09:35:48.613632 containerd[1532]: time="2025-07-12T09:35:48.613593269Z" level=info msg="connecting to shim 4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a" address="unix:///run/containerd/s/52961d22e2be3e12414a5250dbf05a4e48bc087032ae69505573e247d8fb6d9d" protocol=ttrpc version=3 Jul 12 09:35:48.626460 systemd[1]: Started cri-containerd-72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e.scope - libcontainer container 72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e. Jul 12 09:35:48.649603 containerd[1532]: time="2025-07-12T09:35:48.649497321Z" level=info msg="StartContainer for \"ab161084124def579b7bb0f096741a1d2cd2dfb6da74b071512be146cea45cdb\" returns successfully" Jul 12 09:35:48.657140 systemd[1]: Started cri-containerd-4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a.scope - libcontainer container 4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a. 
Jul 12 09:35:48.683327 containerd[1532]: time="2025-07-12T09:35:48.683219271Z" level=info msg="StartContainer for \"72f5cb7f01415b70cb3a723e7a9e7579f2f80e12515d63aeb85123cb6d27ea2e\" returns successfully" Jul 12 09:35:48.731328 containerd[1532]: time="2025-07-12T09:35:48.730231958Z" level=info msg="StartContainer for \"4bb142d49144009e665d4e035d4ec00292a3e7d318199d9249ca550039b49d0a\" returns successfully" Jul 12 09:35:48.758416 sshd[4700]: Connection closed by 10.0.0.1 port 45236 Jul 12 09:35:48.758796 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:48.764451 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:45236.service: Deactivated successfully. Jul 12 09:35:48.767037 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 09:35:48.771344 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. Jul 12 09:35:48.772864 systemd-logind[1506]: Removed session 8. Jul 12 09:35:48.922997 containerd[1532]: time="2025-07-12T09:35:48.922879116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6b96db46-p7s4x,Uid:1e8d79fd-65fe-4e22-927d-2c54d9d8d62f,Namespace:calico-system,Attempt:0,}" Jul 12 09:35:49.043974 systemd-networkd[1433]: cali3760365fa9f: Link UP Jul 12 09:35:49.044405 systemd-networkd[1433]: cali3760365fa9f: Gained carrier Jul 12 09:35:49.059718 kubelet[2669]: E0712 09:35:49.059339 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.962 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0 calico-kube-controllers-f6b96db46- calico-system 1e8d79fd-65fe-4e22-927d-2c54d9d8d62f 805 0 2025-07-12 09:35:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:f6b96db46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f6b96db46-p7s4x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3760365fa9f [] [] }} ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.962 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.985 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" HandleID="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Workload="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.985 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" HandleID="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Workload="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003220e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f6b96db46-p7s4x", "timestamp":"2025-07-12 09:35:48.985103202 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.985 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.985 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:48.985 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.000 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.007 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.013 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.015 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.018 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.018 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.020 [INFO][4891] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27 Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.024 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.033 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.033 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" host="localhost" Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.033 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 09:35:49.075654 containerd[1532]: 2025-07-12 09:35:49.033 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" HandleID="k8s-pod-network.9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Workload="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.039 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0", GenerateName:"calico-kube-controllers-f6b96db46-", Namespace:"calico-system", SelfLink:"", UID:"1e8d79fd-65fe-4e22-927d-2c54d9d8d62f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6b96db46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f6b96db46-p7s4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3760365fa9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.039 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.039 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3760365fa9f ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.043 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.047 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0", GenerateName:"calico-kube-controllers-f6b96db46-", Namespace:"calico-system", SelfLink:"", UID:"1e8d79fd-65fe-4e22-927d-2c54d9d8d62f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 9, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6b96db46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27", Pod:"calico-kube-controllers-f6b96db46-p7s4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3760365fa9f", MAC:"76:02:5e:b4:dd:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 09:35:49.076428 containerd[1532]: 2025-07-12 09:35:49.068 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" Namespace="calico-system" Pod="calico-kube-controllers-f6b96db46-p7s4x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f6b96db46--p7s4x-eth0" Jul 12 09:35:49.095615 kubelet[2669]: I0712 09:35:49.095506 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-m9n2j" podStartSLOduration=35.09548772 podStartE2EDuration="35.09548772s" podCreationTimestamp="2025-07-12 09:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:35:49.093154336 +0000 UTC m=+41.252403895" watchObservedRunningTime="2025-07-12 09:35:49.09548772 +0000 UTC m=+41.254737279" Jul 12 09:35:49.131324 containerd[1532]: time="2025-07-12T09:35:49.131268641Z" level=info msg="connecting to shim 9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27" address="unix:///run/containerd/s/df1051ff6a3b2e76bbd37ca8e03c018e804040f690385afe26a35eb2b07ab72e" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:35:49.164769 systemd[1]: Started cri-containerd-9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27.scope - libcontainer container 9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27. Jul 12 09:35:49.197058 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:35:49.220552 containerd[1532]: time="2025-07-12T09:35:49.220483181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6b96db46-p7s4x,Uid:1e8d79fd-65fe-4e22-927d-2c54d9d8d62f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27\"" Jul 12 09:35:49.437317 systemd-networkd[1433]: cali955885b6712: Gained IPv6LL Jul 12 09:35:49.865828 kubelet[2669]: I0712 09:35:49.864965 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cc55dbbd8-pxzqd" podStartSLOduration=26.864949122 podStartE2EDuration="26.864949122s" podCreationTimestamp="2025-07-12 09:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 
09:35:49.140326372 +0000 UTC m=+41.299575931" watchObservedRunningTime="2025-07-12 09:35:49.864949122 +0000 UTC m=+42.024198681" Jul 12 09:35:50.078965 systemd-networkd[1433]: calic612ee80811: Gained IPv6LL Jul 12 09:35:50.092830 kubelet[2669]: E0712 09:35:50.092374 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:50.142342 systemd-networkd[1433]: calidebe9b58e96: Gained IPv6LL Jul 12 09:35:50.258852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376644190.mount: Deactivated successfully. Jul 12 09:35:50.638280 containerd[1532]: time="2025-07-12T09:35:50.638027427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:50.650797 containerd[1532]: time="2025-07-12T09:35:50.638990956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 09:35:50.650919 containerd[1532]: time="2025-07-12T09:35:50.639803564Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:50.650919 containerd[1532]: time="2025-07-12T09:35:50.642699473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.08843018s" Jul 12 09:35:50.650919 containerd[1532]: time="2025-07-12T09:35:50.650890873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 09:35:50.651565 containerd[1532]: time="2025-07-12T09:35:50.651524679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:50.652914 containerd[1532]: time="2025-07-12T09:35:50.652536609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 09:35:50.655322 containerd[1532]: time="2025-07-12T09:35:50.655280076Z" level=info msg="CreateContainer within sandbox \"a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 09:35:50.662352 containerd[1532]: time="2025-07-12T09:35:50.661484657Z" level=info msg="Container 1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:50.668435 containerd[1532]: time="2025-07-12T09:35:50.668384245Z" level=info msg="CreateContainer within sandbox \"a08c571bbe1dcea3d475f20fe63bda29d87299b7df6101eaea39fd8b74d8b94a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\"" Jul 12 09:35:50.669518 containerd[1532]: time="2025-07-12T09:35:50.669362654Z" level=info msg="StartContainer for \"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\"" Jul 12 09:35:50.672117 containerd[1532]: time="2025-07-12T09:35:50.672072121Z" level=info msg="connecting to shim 1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9" address="unix:///run/containerd/s/481941b93f21ce0a5b9cebba87ddac72d9b9ed74152b58445c09ffaf707f9c25" protocol=ttrpc version=3 Jul 12 09:35:50.697981 systemd[1]: Started cri-containerd-1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9.scope - libcontainer container 
1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9. Jul 12 09:35:50.740243 containerd[1532]: time="2025-07-12T09:35:50.740135309Z" level=info msg="StartContainer for \"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\" returns successfully" Jul 12 09:35:50.973911 systemd-networkd[1433]: cali3760365fa9f: Gained IPv6LL Jul 12 09:35:51.094708 kubelet[2669]: E0712 09:35:51.094608 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 09:35:51.822627 containerd[1532]: time="2025-07-12T09:35:51.822570475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:51.823728 containerd[1532]: time="2025-07-12T09:35:51.823523324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 09:35:51.824434 containerd[1532]: time="2025-07-12T09:35:51.824400612Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:51.826508 containerd[1532]: time="2025-07-12T09:35:51.826473192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:51.827362 containerd[1532]: time="2025-07-12T09:35:51.827332720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.17469279s" Jul 12 09:35:51.827494 containerd[1532]: time="2025-07-12T09:35:51.827455401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 09:35:51.829151 containerd[1532]: time="2025-07-12T09:35:51.828964736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 09:35:51.830379 containerd[1532]: time="2025-07-12T09:35:51.829967785Z" level=info msg="CreateContainer within sandbox \"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 09:35:51.839900 containerd[1532]: time="2025-07-12T09:35:51.837969742Z" level=info msg="Container d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:51.847767 containerd[1532]: time="2025-07-12T09:35:51.847720115Z" level=info msg="CreateContainer within sandbox \"ca7f8cfa52d7de9207d8990359b71c60bb71d9b28c4277004a452884ad50002b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc\"" Jul 12 09:35:51.848561 containerd[1532]: time="2025-07-12T09:35:51.848521642Z" level=info msg="StartContainer for \"d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc\"" Jul 12 09:35:51.850397 containerd[1532]: time="2025-07-12T09:35:51.850363220Z" level=info msg="connecting to shim d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc" address="unix:///run/containerd/s/52961d22e2be3e12414a5250dbf05a4e48bc087032ae69505573e247d8fb6d9d" protocol=ttrpc version=3 Jul 12 09:35:51.874055 systemd[1]: Started 
cri-containerd-d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc.scope - libcontainer container d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc. Jul 12 09:35:51.922802 containerd[1532]: time="2025-07-12T09:35:51.922706511Z" level=info msg="StartContainer for \"d1c89ec7796e549788eeddeaaf1e4e460f39adcccab10386ae0eba17d64d34dc\" returns successfully" Jul 12 09:35:51.991886 kubelet[2669]: I0712 09:35:51.991829 2669 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 09:35:51.991886 kubelet[2669]: I0712 09:35:51.991882 2669 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 09:35:52.119936 kubelet[2669]: I0712 09:35:52.119697 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tsbtk" podStartSLOduration=20.698361223 podStartE2EDuration="27.11957768s" podCreationTimestamp="2025-07-12 09:35:25 +0000 UTC" firstStartedPulling="2025-07-12 09:35:45.407123473 +0000 UTC m=+37.566373032" lastFinishedPulling="2025-07-12 09:35:51.82833993 +0000 UTC m=+43.987589489" observedRunningTime="2025-07-12 09:35:52.119393319 +0000 UTC m=+44.278642918" watchObservedRunningTime="2025-07-12 09:35:52.11957768 +0000 UTC m=+44.278827239" Jul 12 09:35:52.121610 kubelet[2669]: I0712 09:35:52.120793 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-jfc5p" podStartSLOduration=24.784435865 podStartE2EDuration="27.120779291s" podCreationTimestamp="2025-07-12 09:35:25 +0000 UTC" firstStartedPulling="2025-07-12 09:35:48.316020222 +0000 UTC m=+40.475269781" lastFinishedPulling="2025-07-12 09:35:50.652363648 +0000 UTC m=+42.811613207" observedRunningTime="2025-07-12 09:35:51.107290044 +0000 UTC m=+43.266539683" 
watchObservedRunningTime="2025-07-12 09:35:52.120779291 +0000 UTC m=+44.280028890" Jul 12 09:35:52.273503 containerd[1532]: time="2025-07-12T09:35:52.273388510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\" id:\"540d739891fb3761475cb66a720f75bdbd8690493d8393e13e91bee47ec54bfc\" pid:5067 exit_status:1 exited_at:{seconds:1752312952 nanos:262521569}" Jul 12 09:35:53.187176 containerd[1532]: time="2025-07-12T09:35:53.187080354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\" id:\"37b73eb959555a47df10844eb12b9b277337fff8000550a17d59527a08b34ba3\" pid:5092 exit_status:1 exited_at:{seconds:1752312953 nanos:186588509}" Jul 12 09:35:53.769668 containerd[1532]: time="2025-07-12T09:35:53.769623902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:53.770159 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:37558.service - OpenSSH per-connection server daemon (10.0.0.1:37558). 
Jul 12 09:35:53.770715 containerd[1532]: time="2025-07-12T09:35:53.770297068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 09:35:53.771292 containerd[1532]: time="2025-07-12T09:35:53.771102355Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:53.773612 containerd[1532]: time="2025-07-12T09:35:53.773583698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:35:53.774718 containerd[1532]: time="2025-07-12T09:35:53.774693788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.945684012s" Jul 12 09:35:53.774769 containerd[1532]: time="2025-07-12T09:35:53.774722508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 09:35:53.790056 containerd[1532]: time="2025-07-12T09:35:53.789916805Z" level=info msg="CreateContainer within sandbox \"9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 09:35:53.796828 containerd[1532]: time="2025-07-12T09:35:53.795202053Z" level=info msg="Container 726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:35:53.805599 
containerd[1532]: time="2025-07-12T09:35:53.805561947Z" level=info msg="CreateContainer within sandbox \"9dd9f52be2a0c50aa77f60b1ebe9141ce2a05c8f557caa3300a30f3771380f27\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\"" Jul 12 09:35:53.806166 containerd[1532]: time="2025-07-12T09:35:53.806115112Z" level=info msg="StartContainer for \"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\"" Jul 12 09:35:53.809165 containerd[1532]: time="2025-07-12T09:35:53.809132419Z" level=info msg="connecting to shim 726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c" address="unix:///run/containerd/s/df1051ff6a3b2e76bbd37ca8e03c018e804040f690385afe26a35eb2b07ab72e" protocol=ttrpc version=3 Jul 12 09:35:53.831952 systemd[1]: Started cri-containerd-726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c.scope - libcontainer container 726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c. Jul 12 09:35:53.865931 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 37558 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:35:53.868872 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:35:53.869767 containerd[1532]: time="2025-07-12T09:35:53.869626406Z" level=info msg="StartContainer for \"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\" returns successfully" Jul 12 09:35:53.875579 systemd-logind[1506]: New session 9 of user core. Jul 12 09:35:53.884967 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 12 09:35:53.927606 kubelet[2669]: I0712 09:35:53.927535 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 09:35:54.111783 containerd[1532]: time="2025-07-12T09:35:54.111676968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\" id:\"d74b6c8522aabb17b0610690d7031387f0e846b729084d66f9edea4d8db3e54b\" pid:5174 exited_at:{seconds:1752312954 nanos:111364646}" Jul 12 09:35:54.155962 sshd[5147]: Connection closed by 10.0.0.1 port 37558 Jul 12 09:35:54.155503 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:54.160212 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:37558.service: Deactivated successfully. Jul 12 09:35:54.165238 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 09:35:54.171492 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. Jul 12 09:35:54.173566 systemd-logind[1506]: Removed session 9. Jul 12 09:35:54.195992 containerd[1532]: time="2025-07-12T09:35:54.195904510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\" id:\"bba068eb68e4bdaff9a83388822c14e76c3a506cb8db2ff02266777e252af2f9\" pid:5226 exited_at:{seconds:1752312954 nanos:195546507}" Jul 12 09:35:54.209780 kubelet[2669]: I0712 09:35:54.209700 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f6b96db46-p7s4x" podStartSLOduration=24.656502236 podStartE2EDuration="29.209681151s" podCreationTimestamp="2025-07-12 09:35:25 +0000 UTC" firstStartedPulling="2025-07-12 09:35:49.223313289 +0000 UTC m=+41.382562848" lastFinishedPulling="2025-07-12 09:35:53.776492204 +0000 UTC m=+45.935741763" observedRunningTime="2025-07-12 09:35:54.162030812 +0000 UTC m=+46.321280451" watchObservedRunningTime="2025-07-12 09:35:54.209681151 +0000 UTC m=+46.368930750" Jul 12 09:35:54.239904 
containerd[1532]: time="2025-07-12T09:35:54.239861697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\" id:\"2eb26e559f863f14e07df7fb19b88fcb4a8643dbdbe3cdda8db49d0a447632a1\" pid:5216 exited_at:{seconds:1752312954 nanos:239507614}" Jul 12 09:35:59.168604 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:37560.service - OpenSSH per-connection server daemon (10.0.0.1:37560). Jul 12 09:35:59.234046 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 37560 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:35:59.235848 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:35:59.241619 systemd-logind[1506]: New session 10 of user core. Jul 12 09:35:59.255978 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 09:35:59.410174 sshd[5265]: Connection closed by 10.0.0.1 port 37560 Jul 12 09:35:59.410668 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:59.422540 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:37560.service: Deactivated successfully. Jul 12 09:35:59.424368 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 09:35:59.426356 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. Jul 12 09:35:59.429274 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:37576.service - OpenSSH per-connection server daemon (10.0.0.1:37576). Jul 12 09:35:59.430060 systemd-logind[1506]: Removed session 10. Jul 12 09:35:59.482357 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 37576 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:35:59.483597 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:35:59.488902 systemd-logind[1506]: New session 11 of user core. Jul 12 09:35:59.495971 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 12 09:35:59.685004 sshd[5283]: Connection closed by 10.0.0.1 port 37576 Jul 12 09:35:59.685946 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:59.698552 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:37576.service: Deactivated successfully. Jul 12 09:35:59.702188 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 09:35:59.703303 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. Jul 12 09:35:59.708311 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:37582.service - OpenSSH per-connection server daemon (10.0.0.1:37582). Jul 12 09:35:59.709786 systemd-logind[1506]: Removed session 11. Jul 12 09:35:59.758162 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 37582 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:35:59.759637 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:35:59.764080 systemd-logind[1506]: New session 12 of user core. Jul 12 09:35:59.774978 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 09:35:59.911088 sshd[5297]: Connection closed by 10.0.0.1 port 37582 Jul 12 09:35:59.911432 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Jul 12 09:35:59.915053 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. Jul 12 09:35:59.915286 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:37582.service: Deactivated successfully. Jul 12 09:35:59.917911 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 09:35:59.919837 systemd-logind[1506]: Removed session 12. 
Jul 12 09:36:03.722691 containerd[1532]: time="2025-07-12T09:36:03.722642160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\" id:\"f366f3302b235b9ee8550df8147fb6ebf09f4b4fa24917fb497000ee4a424c32\" pid:5328 exited_at:{seconds:1752312963 nanos:722448958}" Jul 12 09:36:04.924135 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:50526.service - OpenSSH per-connection server daemon (10.0.0.1:50526). Jul 12 09:36:04.989410 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 50526 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:36:04.990758 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:36:04.995143 systemd-logind[1506]: New session 13 of user core. Jul 12 09:36:05.005978 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 09:36:05.119421 sshd[5345]: Connection closed by 10.0.0.1 port 50526 Jul 12 09:36:05.120008 sshd-session[5342]: pam_unix(sshd:session): session closed for user core Jul 12 09:36:05.128718 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:50526.service: Deactivated successfully. Jul 12 09:36:05.131260 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 09:36:05.132162 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. Jul 12 09:36:05.135762 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:50528.service - OpenSSH per-connection server daemon (10.0.0.1:50528). Jul 12 09:36:05.136456 systemd-logind[1506]: Removed session 13. Jul 12 09:36:05.191626 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 50528 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:36:05.193111 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:36:05.197197 systemd-logind[1506]: New session 14 of user core. 
Jul 12 09:36:05.206988 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 09:36:05.405867 sshd[5362]: Connection closed by 10.0.0.1 port 50528 Jul 12 09:36:05.406008 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Jul 12 09:36:05.419414 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:50528.service: Deactivated successfully. Jul 12 09:36:05.422406 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 09:36:05.423623 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. Jul 12 09:36:05.428125 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:50540.service - OpenSSH per-connection server daemon (10.0.0.1:50540). Jul 12 09:36:05.428731 systemd-logind[1506]: Removed session 14. Jul 12 09:36:05.487813 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 50540 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:36:05.489388 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:36:05.494321 systemd-logind[1506]: New session 15 of user core. Jul 12 09:36:05.504994 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 09:36:06.064236 sshd[5377]: Connection closed by 10.0.0.1 port 50540 Jul 12 09:36:06.064785 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Jul 12 09:36:06.076460 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:50540.service: Deactivated successfully. Jul 12 09:36:06.079700 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 09:36:06.081565 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit. Jul 12 09:36:06.084864 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:50552.service - OpenSSH per-connection server daemon (10.0.0.1:50552). Jul 12 09:36:06.085964 systemd-logind[1506]: Removed session 15. 
Jul 12 09:36:06.139443 sshd[5395]: Accepted publickey for core from 10.0.0.1 port 50552 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:06.140679 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:06.145192 systemd-logind[1506]: New session 16 of user core.
Jul 12 09:36:06.155020 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 09:36:06.516475 sshd[5399]: Connection closed by 10.0.0.1 port 50552
Jul 12 09:36:06.516840 sshd-session[5395]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:06.527488 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:50552.service: Deactivated successfully.
Jul 12 09:36:06.529375 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 09:36:06.531403 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit.
Jul 12 09:36:06.536059 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:50554.service - OpenSSH per-connection server daemon (10.0.0.1:50554).
Jul 12 09:36:06.538401 systemd-logind[1506]: Removed session 16.
Jul 12 09:36:06.590693 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 50554 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:06.592639 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:06.597521 systemd-logind[1506]: New session 17 of user core.
Jul 12 09:36:06.609985 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 09:36:06.758983 sshd[5415]: Connection closed by 10.0.0.1 port 50554
Jul 12 09:36:06.759679 sshd-session[5412]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:06.764521 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:50554.service: Deactivated successfully.
Jul 12 09:36:06.768447 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 09:36:06.771021 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit.
Jul 12 09:36:06.772307 systemd-logind[1506]: Removed session 17.
Jul 12 09:36:11.774037 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:50558.service - OpenSSH per-connection server daemon (10.0.0.1:50558).
Jul 12 09:36:11.834082 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 50558 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:11.835278 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:11.839087 systemd-logind[1506]: New session 18 of user core.
Jul 12 09:36:11.854977 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 09:36:12.008086 sshd[5436]: Connection closed by 10.0.0.1 port 50558
Jul 12 09:36:12.008746 sshd-session[5433]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:12.013028 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:50558.service: Deactivated successfully.
Jul 12 09:36:12.016066 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 09:36:12.017022 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit.
Jul 12 09:36:12.018417 systemd-logind[1506]: Removed session 18.
Jul 12 09:36:17.022864 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:41936.service - OpenSSH per-connection server daemon (10.0.0.1:41936).
Jul 12 09:36:17.084612 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 41936 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:17.087308 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:17.094096 systemd-logind[1506]: New session 19 of user core.
Jul 12 09:36:17.108043 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 09:36:17.340133 sshd[5457]: Connection closed by 10.0.0.1 port 41936
Jul 12 09:36:17.340768 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:17.345493 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit.
Jul 12 09:36:17.345866 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:41936.service: Deactivated successfully.
Jul 12 09:36:17.347560 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 09:36:17.350417 systemd-logind[1506]: Removed session 19.
Jul 12 09:36:21.923735 kubelet[2669]: E0712 09:36:21.923629    2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 09:36:22.358011 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:41940.service - OpenSSH per-connection server daemon (10.0.0.1:41940).
Jul 12 09:36:22.422631 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 41940 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:22.424761 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:22.430416 systemd-logind[1506]: New session 20 of user core.
Jul 12 09:36:22.436971 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 09:36:22.613180 sshd[5480]: Connection closed by 10.0.0.1 port 41940
Jul 12 09:36:22.613659 sshd-session[5477]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:22.618595 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit.
Jul 12 09:36:22.618854 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:41940.service: Deactivated successfully.
Jul 12 09:36:22.620444 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 09:36:22.622184 systemd-logind[1506]: Removed session 20.
Jul 12 09:36:23.225539 containerd[1532]: time="2025-07-12T09:36:23.225357012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1eb10830a14f8c03484c3eba9999289d9f3877075d532425dcdea2188e7a0cd9\" id:\"74cf6ef51d9d87078f70752487b1634583225715f9012148a6fdfa3a58694d1c\" pid:5505 exited_at:{seconds:1752312983 nanos:225052091}"
Jul 12 09:36:24.153911 containerd[1532]: time="2025-07-12T09:36:24.153871473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"726a5107602a168efaedf5453ac6b0981a3595c19ebdfc8109f248965adec89c\" id:\"ba61cc97742ed4ab34fa2d8ff3047c3095bb99c0005bce9308d7ea828e9a3be1\" pid:5549 exited_at:{seconds:1752312984 nanos:153572752}"
Jul 12 09:36:24.196466 containerd[1532]: time="2025-07-12T09:36:24.196421332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cd61bb22145a0afcd4f8bc503fd8200f87faada600f4e4c195d5ffac781f4c1\" id:\"1f3ec5f7089a3cd0b028120c43431d8adcfdaee3f6cb32f5c65c25ec39892dff\" pid:5536 exited_at:{seconds:1752312984 nanos:196154651}"
Jul 12 09:36:27.625298 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:59390.service - OpenSSH per-connection server daemon (10.0.0.1:59390).
Jul 12 09:36:27.677338 sshd[5567]: Accepted publickey for core from 10.0.0.1 port 59390 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE
Jul 12 09:36:27.679391 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 09:36:27.683575 systemd-logind[1506]: New session 21 of user core.
Jul 12 09:36:27.688979 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 09:36:27.828425 sshd[5570]: Connection closed by 10.0.0.1 port 59390
Jul 12 09:36:27.829157 sshd-session[5567]: pam_unix(sshd:session): session closed for user core
Jul 12 09:36:27.833670 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:59390.service: Deactivated successfully.
Jul 12 09:36:27.836061 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 09:36:27.837177 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit.
Jul 12 09:36:27.838831 systemd-logind[1506]: Removed session 21.
Jul 12 09:36:27.923520 kubelet[2669]: E0712 09:36:27.923464    2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 09:36:28.922601 kubelet[2669]: E0712 09:36:28.922558    2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"