Jul 6 23:32:15.834249 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:32:15.834271 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025
Jul 6 23:32:15.834281 kernel: KASLR enabled
Jul 6 23:32:15.834286 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:32:15.834292 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 6 23:32:15.834297 kernel: random: crng init done
Jul 6 23:32:15.834304 kernel: secureboot: Secure boot disabled
Jul 6 23:32:15.834310 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:32:15.834316 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 6 23:32:15.834323 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:32:15.834329 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834334 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834340 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834346 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834353 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834361 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834367 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834373 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834380 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:32:15.834386 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:32:15.834392 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:32:15.834398 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:32:15.834404 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
Jul 6 23:32:15.834410 kernel: Zone ranges:
Jul 6 23:32:15.834416 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:32:15.834423 kernel: DMA32 empty
Jul 6 23:32:15.834433 kernel: Normal empty
Jul 6 23:32:15.834439 kernel: Device empty
Jul 6 23:32:15.834445 kernel: Movable zone start for each node
Jul 6 23:32:15.834451 kernel: Early memory node ranges
Jul 6 23:32:15.834457 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 6 23:32:15.834463 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 6 23:32:15.834470 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 6 23:32:15.834476 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 6 23:32:15.834482 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 6 23:32:15.834488 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 6 23:32:15.834494 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 6 23:32:15.834502 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 6 23:32:15.834511 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 6 23:32:15.834517 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 6 23:32:15.834526 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 6 23:32:15.834533 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 6 23:32:15.834539 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:32:15.834548 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:32:15.834554 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:32:15.834561 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:32:15.834567 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:32:15.834574 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:32:15.834580 kernel: psci: Trusted OS migration not required
Jul 6 23:32:15.834587 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:32:15.834593 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:32:15.834600 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:32:15.834607 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:32:15.834615 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:32:15.834621 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:32:15.834631 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:32:15.834638 kernel: CPU features: detected: Spectre-v4
Jul 6 23:32:15.834644 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:32:15.834655 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:32:15.834662 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:32:15.834668 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:32:15.834675 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:32:15.834682 kernel: alternatives: applying boot alternatives
Jul 6 23:32:15.834689 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:32:15.834698 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:32:15.834704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:32:15.834711 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:32:15.834718 kernel: Fallback order for Node 0: 0
Jul 6 23:32:15.834727 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:32:15.834734 kernel: Policy zone: DMA
Jul 6 23:32:15.834740 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:32:15.834747 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:32:15.834761 kernel: software IO TLB: area num 4.
Jul 6 23:32:15.834782 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:32:15.834789 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
Jul 6 23:32:15.834796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:32:15.834805 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:32:15.834812 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:32:15.834819 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:32:15.834826 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:32:15.834833 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:32:15.834839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:32:15.834846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:32:15.834853 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:32:15.834859 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:32:15.834866 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:32:15.834872 kernel: GICv3: 256 SPIs implemented
Jul 6 23:32:15.834880 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:32:15.834887 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:32:15.834893 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:32:15.834900 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:32:15.834906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:32:15.834913 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:32:15.834919 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:32:15.834926 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:32:15.834933 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:32:15.834940 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:32:15.834946 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:32:15.834953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:32:15.834961 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:32:15.834968 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:32:15.834974 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:32:15.834981 kernel: arm-pv: using stolen time PV
Jul 6 23:32:15.834988 kernel: Console: colour dummy device 80x25
Jul 6 23:32:15.834995 kernel: ACPI: Core revision 20240827
Jul 6 23:32:15.835002 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:32:15.835009 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:32:15.835016 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:32:15.835024 kernel: landlock: Up and running.
Jul 6 23:32:15.835031 kernel: SELinux: Initializing.
Jul 6 23:32:15.835037 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:32:15.835044 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:32:15.835051 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:32:15.835058 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:32:15.835065 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:32:15.835072 kernel: Remapping and enabling EFI services.
Jul 6 23:32:15.835078 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:32:15.835085 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:32:15.835097 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:32:15.835104 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:32:15.835113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:32:15.835125 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:32:15.835132 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:32:15.835139 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:32:15.835146 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:32:15.835154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:32:15.835161 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:32:15.835169 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:32:15.835176 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:32:15.835183 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:32:15.835191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:32:15.835200 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:32:15.835207 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:32:15.835214 kernel: SMP: Total of 4 processors activated.
Jul 6 23:32:15.835221 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:32:15.835230 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:32:15.835238 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:32:15.835245 kernel: CPU features: detected: Common not Private translations
Jul 6 23:32:15.835252 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:32:15.835259 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:32:15.835267 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:32:15.835274 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:32:15.835281 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:32:15.835288 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:32:15.835296 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:32:15.835304 kernel: alternatives: applying system-wide alternatives
Jul 6 23:32:15.835311 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:32:15.835318 kernel: Memory: 2440548K/2572288K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 125792K reserved, 0K cma-reserved)
Jul 6 23:32:15.835325 kernel: devtmpfs: initialized
Jul 6 23:32:15.835332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:32:15.835339 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:32:15.835346 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:32:15.835354 kernel: 0 pages in range for non-PLT usage
Jul 6 23:32:15.835362 kernel: 508480 pages in range for PLT usage
Jul 6 23:32:15.835369 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:32:15.835376 kernel: SMBIOS 3.0.0 present.
Jul 6 23:32:15.835383 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:32:15.835390 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:32:15.835397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:32:15.835404 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:32:15.835411 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:32:15.835419 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:32:15.835427 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:32:15.835434 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:32:15.835441 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:32:15.835448 kernel: cpuidle: using governor menu
Jul 6 23:32:15.835455 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:32:15.835462 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:32:15.835469 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:32:15.835476 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:32:15.835483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:32:15.835492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:32:15.835499 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:32:15.835506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:32:15.835515 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:32:15.835522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:32:15.835529 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:32:15.835536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:32:15.835543 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:32:15.835551 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:32:15.835559 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:32:15.835567 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:32:15.835574 kernel: ACPI: Interpreter enabled
Jul 6 23:32:15.835581 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:32:15.835588 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:32:15.835596 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:32:15.835603 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:32:15.835609 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:32:15.835616 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:32:15.835623 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:32:15.835632 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:32:15.835639 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:32:15.835799 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:32:15.835878 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:32:15.835942 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:32:15.836002 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:32:15.836061 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:32:15.836074 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:32:15.836082 kernel: PCI host bridge to bus 0000:00
Jul 6 23:32:15.836149 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:32:15.836208 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:32:15.836263 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:32:15.836316 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:32:15.836391 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:32:15.836464 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:32:15.836533 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:32:15.836596 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:32:15.836661 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:32:15.836722 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:32:15.836811 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:32:15.836880 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:32:15.836936 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:32:15.836991 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:32:15.837044 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:32:15.837053 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:32:15.837060 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:32:15.837067 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:32:15.837074 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:32:15.837083 kernel: iommu: Default domain type: Translated
Jul 6 23:32:15.837090 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:32:15.837097 kernel: efivars: Registered efivars operations
Jul 6 23:32:15.837104 kernel: vgaarb: loaded
Jul 6 23:32:15.837111 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:32:15.837123 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:32:15.837130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:32:15.837137 kernel: pnp: PnP ACPI init
Jul 6 23:32:15.837213 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:32:15.837225 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:32:15.837232 kernel: NET: Registered PF_INET protocol family
Jul 6 23:32:15.837239 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:32:15.837246 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:32:15.837253 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:32:15.837260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:32:15.837267 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:32:15.837274 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:32:15.837282 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:32:15.837289 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:32:15.837296 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:32:15.837303 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:32:15.837310 kernel: kvm [1]: HYP mode not available
Jul 6 23:32:15.837317 kernel: Initialise system trusted keyrings
Jul 6 23:32:15.837324 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:32:15.837331 kernel: Key type asymmetric registered
Jul 6 23:32:15.837338 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:32:15.837346 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:32:15.837353 kernel: io scheduler mq-deadline registered
Jul 6 23:32:15.837359 kernel: io scheduler kyber registered
Jul 6 23:32:15.837366 kernel: io scheduler bfq registered
Jul 6 23:32:15.837373 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:32:15.837380 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:32:15.837388 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:32:15.837447 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:32:15.837456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:32:15.837465 kernel: thunder_xcv, ver 1.0
Jul 6 23:32:15.837472 kernel: thunder_bgx, ver 1.0
Jul 6 23:32:15.837479 kernel: nicpf, ver 1.0
Jul 6 23:32:15.837486 kernel: nicvf, ver 1.0
Jul 6 23:32:15.837556 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:32:15.837613 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:32:15 UTC (1751844735)
Jul 6 23:32:15.837622 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:32:15.837629 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:32:15.837638 kernel: watchdog: NMI not fully supported
Jul 6 23:32:15.837645 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:32:15.837652 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:32:15.837659 kernel: Segment Routing with IPv6
Jul 6 23:32:15.837666 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:32:15.837673 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:32:15.837680 kernel: Key type dns_resolver registered
Jul 6 23:32:15.837686 kernel: registered taskstats version 1
Jul 6 23:32:15.837694 kernel: Loading compiled-in X.509 certificates
Jul 6 23:32:15.837701 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb'
Jul 6 23:32:15.837709 kernel: Demotion targets for Node 0: null
Jul 6 23:32:15.837716 kernel: Key type .fscrypt registered
Jul 6 23:32:15.837723 kernel: Key type fscrypt-provisioning registered
Jul 6 23:32:15.837730 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:32:15.837737 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:32:15.837744 kernel: ima: No architecture policies found
Jul 6 23:32:15.837757 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:32:15.837777 kernel: clk: Disabling unused clocks
Jul 6 23:32:15.837787 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:32:15.837795 kernel: Warning: unable to open an initial console.
Jul 6 23:32:15.837802 kernel: Freeing unused kernel memory: 39424K
Jul 6 23:32:15.837810 kernel: Run /init as init process
Jul 6 23:32:15.837817 kernel: with arguments:
Jul 6 23:32:15.837823 kernel: /init
Jul 6 23:32:15.837830 kernel: with environment:
Jul 6 23:32:15.837837 kernel: HOME=/
Jul 6 23:32:15.837847 kernel: TERM=linux
Jul 6 23:32:15.837856 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:32:15.837864 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:32:15.837874 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:32:15.837881 systemd[1]: Detected virtualization kvm.
Jul 6 23:32:15.837889 systemd[1]: Detected architecture arm64.
Jul 6 23:32:15.837896 systemd[1]: Running in initrd.
Jul 6 23:32:15.837903 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:32:15.837913 systemd[1]: Hostname set to .
Jul 6 23:32:15.837920 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:32:15.837927 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:32:15.837935 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:32:15.837942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:32:15.837950 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:32:15.837958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:32:15.837965 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:32:15.837975 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:32:15.837983 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:32:15.837991 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:32:15.837999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:32:15.838006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:32:15.838014 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:32:15.838022 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:32:15.838030 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:32:15.838038 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:32:15.838045 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:32:15.838053 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:32:15.838060 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:32:15.838068 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:32:15.838075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:32:15.838083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:32:15.838090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:32:15.838099 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:32:15.838107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:32:15.838116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:32:15.838124 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:32:15.838132 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:32:15.838140 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:32:15.838147 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:32:15.838154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:32:15.838163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:32:15.838171 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:32:15.838179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:32:15.838187 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:32:15.838195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:32:15.838222 systemd-journald[244]: Collecting audit messages is disabled.
Jul 6 23:32:15.838242 systemd-journald[244]: Journal started
Jul 6 23:32:15.838261 systemd-journald[244]: Runtime Journal (/run/log/journal/492bb0e3582d457a92f0ed2f4bf5f1fb) is 6M, max 48.5M, 42.4M free.
Jul 6 23:32:15.828294 systemd-modules-load[245]: Inserted module 'overlay'
Jul 6 23:32:15.845451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:32:15.847937 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:32:15.853833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:32:15.851049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:32:15.854067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:32:15.857456 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 6 23:32:15.858958 kernel: Bridge firewalling registered
Jul 6 23:32:15.872260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:32:15.873388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:32:15.877052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:32:15.877542 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:32:15.878399 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:32:15.883818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:32:15.888897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:32:15.891369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:32:15.894074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:32:15.896387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:32:15.905516 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:32:15.922157 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:32:15.936081 systemd-resolved[288]: Positive Trust Anchors:
Jul 6 23:32:15.936096 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:32:15.936134 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:32:15.941953 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 6 23:32:15.943081 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:32:15.943989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:32:16.009780 kernel: SCSI subsystem initialized
Jul 6 23:32:16.015809 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:32:16.023808 kernel: iscsi: registered transport (tcp)
Jul 6 23:32:16.040811 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:32:16.040857 kernel: QLogic iSCSI HBA Driver
Jul 6 23:32:16.057365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:32:16.084868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:32:16.087062 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:32:16.145622 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:32:16.148050 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:32:16.234797 kernel: raid6: neonx8 gen() 15590 MB/s
Jul 6 23:32:16.251787 kernel: raid6: neonx4 gen() 15782 MB/s
Jul 6 23:32:16.268780 kernel: raid6: neonx2 gen() 13166 MB/s
Jul 6 23:32:16.285783 kernel: raid6: neonx1 gen() 10495 MB/s
Jul 6 23:32:16.302784 kernel: raid6: int64x8 gen() 6871 MB/s
Jul 6 23:32:16.319784 kernel: raid6: int64x4 gen() 7262 MB/s
Jul 6 23:32:16.336794 kernel: raid6: int64x2 gen() 6099 MB/s
Jul 6 23:32:16.353806 kernel: raid6: int64x1 gen() 5044 MB/s
Jul 6 23:32:16.353845 kernel: raid6: using algorithm neonx4 gen() 15782 MB/s
Jul 6 23:32:16.370789 kernel: raid6: .... xor() 12294 MB/s, rmw enabled
Jul 6 23:32:16.370813 kernel: raid6: using neon recovery algorithm
Jul 6 23:32:16.375972 kernel: xor: measuring software checksum speed
Jul 6 23:32:16.375995 kernel: 8regs : 21511 MB/sec
Jul 6 23:32:16.377083 kernel: 32regs : 21676 MB/sec
Jul 6 23:32:16.377098 kernel: arm64_neon : 28147 MB/sec
Jul 6 23:32:16.377107 kernel: xor: using function: arm64_neon (28147 MB/sec)
Jul 6 23:32:16.435838 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:32:16.442834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:32:16.445373 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:32:16.486469 systemd-udevd[503]: Using default interface naming scheme 'v255'.
Jul 6 23:32:16.490805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:32:16.492609 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:32:16.522347 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Jul 6 23:32:16.551815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:32:16.554101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:32:16.611015 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:32:16.613169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:32:16.665798 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 6 23:32:16.668851 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:32:16.672061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:32:16.672103 kernel: GPT:9289727 != 19775487
Jul 6 23:32:16.673141 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:32:16.674068 kernel: GPT:9289727 != 19775487
Jul 6 23:32:16.674089 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:32:16.674777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:32:16.675101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:32:16.675233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:32:16.678459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:32:16.680374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:32:16.713377 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:32:16.714669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:32:16.727961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:32:16.729207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:32:16.738283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:32:16.744990 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:32:16.745972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:32:16.747882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:32:16.750092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:32:16.751700 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:32:16.754071 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:32:16.755659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:32:16.775595 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:32:16.790142 disk-uuid[595]: Primary Header is updated.
Jul 6 23:32:16.790142 disk-uuid[595]: Secondary Entries is updated.
Jul 6 23:32:16.790142 disk-uuid[595]: Secondary Header is updated.
Jul 6 23:32:16.793804 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:32:17.811793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:32:17.812170 disk-uuid[603]: The operation has completed successfully.
Jul 6 23:32:17.838255 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:32:17.838372 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:32:17.862969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:32:17.877996 sh[614]: Success
Jul 6 23:32:17.895297 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:32:17.896799 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:32:17.896832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:32:17.905910 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:32:17.947912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:32:17.949478 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:32:17.964979 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:32:17.972000 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:32:17.972048 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626)
Jul 6 23:32:17.973467 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296
Jul 6 23:32:17.973492 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:17.974174 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:32:17.979757 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:32:17.981053 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:32:17.982019 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:32:17.982907 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:32:17.985598 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:32:18.009940 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (657)
Jul 6 23:32:18.011841 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:32:18.011881 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:18.011891 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:32:18.018911 kernel: BTRFS info (device vda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:32:18.020212 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:32:18.022409 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:32:18.105912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:32:18.108979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:32:18.159656 systemd-networkd[801]: lo: Link UP
Jul 6 23:32:18.160453 systemd-networkd[801]: lo: Gained carrier
Jul 6 23:32:18.161456 systemd-networkd[801]: Enumeration completed
Jul 6 23:32:18.161797 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:32:18.163797 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:32:18.163801 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:32:18.165146 systemd[1]: Reached target network.target - Network.
Jul 6 23:32:18.168294 systemd-networkd[801]: eth0: Link UP
Jul 6 23:32:18.168298 systemd-networkd[801]: eth0: Gained carrier
Jul 6 23:32:18.168308 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:32:18.202859 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:32:18.232422 ignition[703]: Ignition 2.21.0
Jul 6 23:32:18.232436 ignition[703]: Stage: fetch-offline
Jul 6 23:32:18.232475 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.232484 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:18.233142 ignition[703]: parsed url from cmdline: ""
Jul 6 23:32:18.233146 ignition[703]: no config URL provided
Jul 6 23:32:18.233152 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:32:18.233159 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:32:18.233181 ignition[703]: op(1): [started] loading QEMU firmware config module
Jul 6 23:32:18.233185 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:32:18.247665 ignition[703]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:32:18.289458 ignition[703]: parsing config with SHA512: 4f1af255807209d93f740a565ea2e482e982bdb2f781d41069cfd88257d163945ab81d084156f8f1e3094b0c432c172ac215ce3662ee5197b5676baf073f1511
Jul 6 23:32:18.294369 unknown[703]: fetched base config from "system"
Jul 6 23:32:18.294380 unknown[703]: fetched user config from "qemu"
Jul 6 23:32:18.294932 ignition[703]: fetch-offline: fetch-offline passed
Jul 6 23:32:18.294941 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.79
Jul 6 23:32:18.294989 ignition[703]: Ignition finished successfully
Jul 6 23:32:18.294948 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 6 23:32:18.296516 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:32:18.297919 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:32:18.298911 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:32:18.329396 ignition[814]: Ignition 2.21.0
Jul 6 23:32:18.329418 ignition[814]: Stage: kargs
Jul 6 23:32:18.329561 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.329571 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:18.330326 ignition[814]: kargs: kargs passed
Jul 6 23:32:18.330374 ignition[814]: Ignition finished successfully
Jul 6 23:32:18.335847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:32:18.337755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:32:18.378812 ignition[822]: Ignition 2.21.0
Jul 6 23:32:18.378828 ignition[822]: Stage: disks
Jul 6 23:32:18.378963 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.378972 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:18.380163 ignition[822]: disks: disks passed
Jul 6 23:32:18.380218 ignition[822]: Ignition finished successfully
Jul 6 23:32:18.382265 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:32:18.383338 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:32:18.384559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:32:18.386240 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:32:18.387819 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:32:18.389287 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:32:18.392016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:32:18.430286 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:32:18.435493 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:32:18.437884 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:32:18.524789 kernel: EXT4-fs (vda9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none.
Jul 6 23:32:18.525781 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:32:18.526866 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:32:18.529265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:32:18.531052 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:32:18.532057 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:32:18.532171 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:32:18.532246 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:32:18.559735 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:32:18.562656 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:32:18.567855 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (840)
Jul 6 23:32:18.571603 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:32:18.571643 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:18.571655 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:32:18.576686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:32:18.632340 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:32:18.635921 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:32:18.640144 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:32:18.643734 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:32:18.721703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:32:18.723478 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:32:18.725025 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:32:18.747790 kernel: BTRFS info (device vda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:32:18.765866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:32:18.778339 ignition[953]: INFO : Ignition 2.21.0
Jul 6 23:32:18.778339 ignition[953]: INFO : Stage: mount
Jul 6 23:32:18.780135 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.780135 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:18.781839 ignition[953]: INFO : mount: mount passed
Jul 6 23:32:18.781839 ignition[953]: INFO : Ignition finished successfully
Jul 6 23:32:18.782358 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:32:18.785943 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:32:18.971504 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:32:18.972999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:32:18.995810 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966)
Jul 6 23:32:18.998141 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:32:18.998197 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:18.998208 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:32:19.001032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:32:19.030179 ignition[983]: INFO : Ignition 2.21.0
Jul 6 23:32:19.030179 ignition[983]: INFO : Stage: files
Jul 6 23:32:19.030179 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:19.030179 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:19.033707 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:32:19.035196 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:32:19.035196 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:32:19.039290 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:32:19.040813 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:32:19.040813 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:32:19.039919 unknown[983]: wrote ssh authorized keys file for user: core
Jul 6 23:32:19.044567 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:32:19.044567 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 6 23:32:19.084756 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:32:19.212780 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:32:19.214726 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:32:19.225790 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 6 23:32:19.342883 systemd-networkd[801]: eth0: Gained IPv6LL
Jul 6 23:32:19.861958 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:32:20.755730 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:32:20.755730 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:32:20.758976 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:32:20.760459 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:32:20.784786 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:32:20.788509 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:32:20.791260 ignition[983]: INFO : files: files passed
Jul 6 23:32:20.791260 ignition[983]: INFO : Ignition finished successfully
Jul 6 23:32:20.792048 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:32:20.794633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:32:20.799901 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:32:20.812901 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:32:20.819873 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:32:20.813527 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:32:20.823299 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.823299 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.819444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:32:20.830315 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.820867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:32:20.823383 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:32:20.849796 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:32:20.849925 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:32:20.851670 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:32:20.853065 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:32:20.854370 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:32:20.855211 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:32:20.887698 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:32:20.889823 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:32:20.915113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:32:20.916100 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:32:20.917649 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:32:20.919048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:32:20.919171 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:32:20.921216 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:32:20.922772 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:32:20.924419 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:32:20.925683 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:32:20.927164 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:32:20.928618 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:32:20.930042 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:32:20.931394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:32:20.932835 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:32:20.934361 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:32:20.935639 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:32:20.936783 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:32:20.936913 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:32:20.938684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:32:20.940126 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:32:20.941696 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:32:20.942873 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:32:20.944483 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:32:20.944599 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:32:20.946872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:32:20.947041 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:32:20.948609 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:32:20.949834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:32:20.953845 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:32:20.954881 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:32:20.956553 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:32:20.957711 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:32:20.957886 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:32:20.959027 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:32:20.959141 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:32:20.960257 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:32:20.960422 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:32:20.961997 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:32:20.962198 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:32:20.964286 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:32:20.969848 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:32:20.971538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:32:20.971787 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:32:20.973502 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:32:20.973683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:32:20.981192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:32:20.982370 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:32:20.990758 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:32:20.997050 ignition[1038]: INFO : Ignition 2.21.0
Jul 6 23:32:20.997050 ignition[1038]: INFO : Stage: umount
Jul 6 23:32:20.998598 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:20.998598 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:32:21.000452 ignition[1038]: INFO : umount: umount passed
Jul 6 23:32:21.000452 ignition[1038]: INFO : Ignition finished successfully
Jul 6 23:32:21.002039 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:32:21.002134 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:32:21.005080 systemd[1]: Stopped target network.target - Network.
Jul 6 23:32:21.005904 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:32:21.005979 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:32:21.012054 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:32:21.012099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:32:21.013514 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:32:21.013561 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:32:21.014905 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:32:21.014945 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:32:21.016536 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:32:21.017919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:32:21.024682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:32:21.024884 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:32:21.029150 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:32:21.029372 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:32:21.029475 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:32:21.032276 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:32:21.032830 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:32:21.034294 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:32:21.034331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:32:21.036709 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:32:21.038549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:32:21.038603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:32:21.040247 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:32:21.040292 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:21.042878 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:32:21.042928 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:32:21.044522 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:32:21.044566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:32:21.046916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:32:21.050341 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:32:21.050398 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:32:21.050676 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 6 23:32:21.052972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:32:21.055030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:32:21.055149 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:32:21.072413 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:32:21.072571 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:32:21.073873 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:32:21.073912 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:32:21.075153 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:32:21.075184 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:32:21.077123 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:32:21.077172 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:32:21.079447 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:32:21.079498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:32:21.081779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:32:21.081830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:32:21.084861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:32:21.086419 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:32:21.086477 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:32:21.088966 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:32:21.089019 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 6 23:32:21.091306 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:32:21.091350 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:32:21.093986 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:32:21.094032 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:32:21.095693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:32:21.095738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:21.099298 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:32:21.099354 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 6 23:32:21.099382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:32:21.099414 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:32:21.099693 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:32:21.101804 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:32:21.103333 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:32:21.103419 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:32:21.105793 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:32:21.107965 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:32:21.127842 systemd[1]: Switching root. Jul 6 23:32:21.157200 systemd-journald[244]: Journal stopped Jul 6 23:32:22.000130 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:32:22.000188 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:32:22.000200 kernel: SELinux: policy capability open_perms=1 Jul 6 23:32:22.000211 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:32:22.000221 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:32:22.000231 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:32:22.000243 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:32:22.000252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:32:22.000262 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:32:22.000275 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:32:22.000284 kernel: audit: type=1403 audit(1751844741.331:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:32:22.000301 systemd[1]: Successfully loaded SELinux policy in 49.722ms. Jul 6 23:32:22.000318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.285ms. Jul 6 23:32:22.000330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:32:22.000343 systemd[1]: Detected virtualization kvm. Jul 6 23:32:22.000358 systemd[1]: Detected architecture arm64. Jul 6 23:32:22.000367 systemd[1]: Detected first boot. Jul 6 23:32:22.000378 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:32:22.000388 zram_generator::config[1083]: No configuration found. Jul 6 23:32:22.000399 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:32:22.000409 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:32:22.000420 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 6 23:32:22.000432 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:32:22.000441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:32:22.000451 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:32:22.000462 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:32:22.000471 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:32:22.000481 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:32:22.000491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:32:22.000501 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:32:22.000511 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:32:22.000523 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:32:22.000533 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:32:22.000543 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:32:22.000553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:32:22.000563 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:32:22.000573 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:32:22.000583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:32:22.000593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:32:22.000604 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 6 23:32:22.000615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:32:22.000625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:32:22.000635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:32:22.000644 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:32:22.000655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:32:22.000667 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:32:22.000677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:32:22.000689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:32:22.000699 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:32:22.000709 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:32:22.000719 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:32:22.000734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:32:22.000745 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:32:22.000755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:32:22.000832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:32:22.000846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:32:22.000856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:32:22.000869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:32:22.000879 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:32:22.000888 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:32:22.000898 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 6 23:32:22.000908 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:32:22.000918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:32:22.000928 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:32:22.000939 systemd[1]: Reached target machines.target - Containers. Jul 6 23:32:22.000950 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:32:22.000962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:22.000972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:32:22.000982 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:32:22.000992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:22.001002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:32:22.001012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:22.001022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:32:22.001034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:22.001044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:32:22.001055 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:32:22.001064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:32:22.001075 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:32:22.001085 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 6 23:32:22.001095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:22.001105 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:32:22.001114 kernel: fuse: init (API version 7.41) Jul 6 23:32:22.001125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:32:22.001135 kernel: loop: module loaded Jul 6 23:32:22.001144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:32:22.001155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:32:22.001165 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:32:22.001175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:32:22.001187 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:32:22.001198 systemd[1]: Stopped verity-setup.service. Jul 6 23:32:22.001207 kernel: ACPI: bus type drm_connector registered Jul 6 23:32:22.001217 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:32:22.001226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:32:22.001236 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:32:22.001248 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:32:22.001258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:32:22.001268 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:32:22.001278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:32:22.001288 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 6 23:32:22.001299 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:32:22.001309 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:32:22.001346 systemd-journald[1151]: Collecting audit messages is disabled. Jul 6 23:32:22.001367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:22.001378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:22.001388 systemd-journald[1151]: Journal started Jul 6 23:32:22.001409 systemd-journald[1151]: Runtime Journal (/run/log/journal/492bb0e3582d457a92f0ed2f4bf5f1fb) is 6M, max 48.5M, 42.4M free. Jul 6 23:32:21.760004 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:32:21.783013 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:32:21.783420 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:32:22.003808 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:32:22.004524 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:32:22.006404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:32:22.007512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:22.007675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:22.009048 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:32:22.009888 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:32:22.011007 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:22.011174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:22.012268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:32:22.013374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:32:22.014589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:32:22.015998 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:32:22.028697 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:32:22.031110 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:32:22.033048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:32:22.033972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:32:22.034002 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:32:22.035675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:32:22.042952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:32:22.043878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:32:22.045253 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:32:22.047176 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:32:22.048095 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:32:22.051935 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:32:22.052948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:32:22.054110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:32:22.056291 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 6 23:32:22.058283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:32:22.058499 systemd-journald[1151]: Time spent on flushing to /var/log/journal/492bb0e3582d457a92f0ed2f4bf5f1fb is 23.532ms for 889 entries. Jul 6 23:32:22.058499 systemd-journald[1151]: System Journal (/var/log/journal/492bb0e3582d457a92f0ed2f4bf5f1fb) is 8M, max 195.6M, 187.6M free. Jul 6 23:32:22.088370 systemd-journald[1151]: Received client request to flush runtime journal. Jul 6 23:32:22.088415 kernel: loop0: detected capacity change from 0 to 207008 Jul 6 23:32:22.064953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:32:22.068084 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:32:22.069235 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:32:22.082401 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:32:22.083594 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:32:22.089129 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:32:22.090458 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:32:22.099074 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jul 6 23:32:22.099091 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jul 6 23:32:22.101163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:22.104697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:32:22.104815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:32:22.110985 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 6 23:32:22.121971 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:32:22.128792 kernel: loop1: detected capacity change from 0 to 107312 Jul 6 23:32:22.149899 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:32:22.152663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:32:22.159784 kernel: loop2: detected capacity change from 0 to 138376 Jul 6 23:32:22.181398 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 6 23:32:22.181420 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 6 23:32:22.185662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:32:22.197841 kernel: loop3: detected capacity change from 0 to 207008 Jul 6 23:32:22.210010 kernel: loop4: detected capacity change from 0 to 107312 Jul 6 23:32:22.229786 kernel: loop5: detected capacity change from 0 to 138376 Jul 6 23:32:22.238300 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:32:22.238750 (sd-merge)[1225]: Merged extensions into '/usr'. Jul 6 23:32:22.243587 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:32:22.243602 systemd[1]: Reloading... Jul 6 23:32:22.300799 zram_generator::config[1254]: No configuration found. Jul 6 23:32:22.358198 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:32:22.382291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:22.446068 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:32:22.446316 systemd[1]: Reloading finished in 202 ms. 
Jul 6 23:32:22.462459 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:32:22.464780 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:32:22.483208 systemd[1]: Starting ensure-sysext.service... Jul 6 23:32:22.485024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:32:22.498888 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:32:22.499238 systemd[1]: Reloading... Jul 6 23:32:22.507167 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:32:22.507196 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:32:22.507440 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:32:22.507628 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:32:22.508754 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:32:22.509155 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 6 23:32:22.509268 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 6 23:32:22.512250 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:32:22.512377 systemd-tmpfiles[1286]: Skipping /boot Jul 6 23:32:22.522007 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:32:22.522025 systemd-tmpfiles[1286]: Skipping /boot Jul 6 23:32:22.543797 zram_generator::config[1313]: No configuration found. 
Jul 6 23:32:22.620741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:22.684136 systemd[1]: Reloading finished in 184 ms. Jul 6 23:32:22.707411 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:32:22.713949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:32:22.725074 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:32:22.727179 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:32:22.729311 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:32:22.732540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:32:22.735498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:32:22.738282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:32:22.744028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:22.746179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:22.758068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:22.765002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:22.765874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:32:22.765978 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:22.767040 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:32:22.768495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:22.768642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:22.769961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:22.770091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:22.776201 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:22.776384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:22.780363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:22.784021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:22.785909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:22.787846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:22.789905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:32:22.790020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:22.791137 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:32:22.793869 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 6 23:32:22.808216 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:32:22.814837 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:32:22.816273 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 6 23:32:22.816311 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:22.817273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:22.820664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:22.821292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:22.822748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:22.823352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:22.824940 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:32:22.834061 augenrules[1390]: No rules Jul 6 23:32:22.834047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:22.836252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:22.841012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:32:22.844778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:22.847159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:22.848050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:32:22.848175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 6 23:32:22.848293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:32:22.849219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:32:22.852499 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:32:22.852705 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:32:22.854151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:22.854295 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:22.855556 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:32:22.855714 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:32:22.859458 systemd[1]: Finished ensure-sysext.service. Jul 6 23:32:22.864258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:22.864898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:22.867435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:32:22.874572 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:32:22.875422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:32:22.882559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:32:22.888039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:22.888229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:22.889404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 6 23:32:22.907067 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:32:22.941649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:32:22.944525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:32:22.971602 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:32:23.022245 systemd-networkd[1436]: lo: Link UP Jul 6 23:32:23.022253 systemd-networkd[1436]: lo: Gained carrier Jul 6 23:32:23.028218 systemd-networkd[1436]: Enumeration completed Jul 6 23:32:23.028358 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:32:23.028638 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:32:23.028642 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:32:23.031191 systemd-networkd[1436]: eth0: Link UP Jul 6 23:32:23.031300 systemd-networkd[1436]: eth0: Gained carrier Jul 6 23:32:23.031319 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:32:23.032847 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:32:23.034920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:32:23.053100 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:32:23.064168 systemd-resolved[1352]: Positive Trust Anchors: Jul 6 23:32:23.064185 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:32:23.064217 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:32:23.068362 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:32:23.071034 systemd-resolved[1352]: Defaulting to hostname 'linux'. Jul 6 23:32:23.074976 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:32:23.078290 systemd[1]: Reached target network.target - Network. Jul 6 23:32:23.079104 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:32:23.081613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:32:23.098429 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:32:23.099672 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:32:23.099736 systemd-timesyncd[1437]: Initial clock synchronization to Sun 2025-07-06 23:32:23.373674 UTC. Jul 6 23:32:23.101672 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:32:23.134053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:23.135142 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:32:23.136028 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 6 23:32:23.136923 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:32:23.137951 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:32:23.138874 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:32:23.139824 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:32:23.140731 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:32:23.140761 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:32:23.141439 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:32:23.143042 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:32:23.145097 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:32:23.148088 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:32:23.149363 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:32:23.150414 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:32:23.153392 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:32:23.154823 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:32:23.156280 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:32:23.157191 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:32:23.157949 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:32:23.158645 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 6 23:32:23.158673 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:32:23.159620 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:32:23.161361 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:32:23.163048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:32:23.165933 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:32:23.167648 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:32:23.168475 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:32:23.169392 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:32:23.173059 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:32:23.174907 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:32:23.175668 jq[1482]: false Jul 6 23:32:23.178735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:32:23.184778 extend-filesystems[1483]: Found /dev/vda6 Jul 6 23:32:23.186853 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:32:23.188538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:32:23.189406 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:32:23.189969 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 6 23:32:23.191276 extend-filesystems[1483]: Found /dev/vda9 Jul 6 23:32:23.193222 extend-filesystems[1483]: Checking size of /dev/vda9 Jul 6 23:32:23.194329 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:32:23.198009 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:32:23.199595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:32:23.203316 jq[1502]: true Jul 6 23:32:23.199841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:32:23.200089 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:32:23.200239 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:32:23.201892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:32:23.202050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:32:23.211798 extend-filesystems[1483]: Resized partition /dev/vda9 Jul 6 23:32:23.213887 extend-filesystems[1520]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:32:23.222100 (ntainerd)[1519]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:32:23.223721 jq[1508]: true Jul 6 23:32:23.228780 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:32:23.245173 tar[1506]: linux-arm64/LICENSE Jul 6 23:32:23.245173 tar[1506]: linux-arm64/helm Jul 6 23:32:23.249611 dbus-daemon[1480]: [system] SELinux support is enabled Jul 6 23:32:23.250102 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:32:23.256921 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 6 23:32:23.256970 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:32:23.258383 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:32:23.258407 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:32:23.270786 update_engine[1499]: I20250706 23:32:23.270607 1499 main.cc:92] Flatcar Update Engine starting Jul 6 23:32:23.271804 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:32:23.279878 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:32:23.289109 update_engine[1499]: I20250706 23:32:23.284494 1499 update_check_scheduler.cc:74] Next update check in 8m6s Jul 6 23:32:23.282594 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:32:23.291380 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:32:23.291380 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:32:23.291380 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:32:23.294670 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Jul 6 23:32:23.292871 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:32:23.300206 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:32:23.319775 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:32:23.320669 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:32:23.322366 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:32:23.330212 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:32:23.330575 systemd-logind[1497]: New seat seat0. 
Jul 6 23:32:23.331895 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:32:23.378988 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:32:23.472520 containerd[1519]: time="2025-07-06T23:32:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:32:23.473998 containerd[1519]: time="2025-07-06T23:32:23.473965360Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:32:23.487247 containerd[1519]: time="2025-07-06T23:32:23.487189880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.08µs" Jul 6 23:32:23.487247 containerd[1519]: time="2025-07-06T23:32:23.487241800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:32:23.487325 containerd[1519]: time="2025-07-06T23:32:23.487262440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:32:23.487572 containerd[1519]: time="2025-07-06T23:32:23.487546400Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:32:23.487614 containerd[1519]: time="2025-07-06T23:32:23.487576120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:32:23.487631 containerd[1519]: time="2025-07-06T23:32:23.487623960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:32:23.487793 containerd[1519]: time="2025-07-06T23:32:23.487750360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 
23:32:23.487824 containerd[1519]: time="2025-07-06T23:32:23.487793160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488173 containerd[1519]: time="2025-07-06T23:32:23.488143040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488173 containerd[1519]: time="2025-07-06T23:32:23.488170320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488212 containerd[1519]: time="2025-07-06T23:32:23.488190000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488212 containerd[1519]: time="2025-07-06T23:32:23.488198840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488369 containerd[1519]: time="2025-07-06T23:32:23.488348360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488719 containerd[1519]: time="2025-07-06T23:32:23.488696080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488762 containerd[1519]: time="2025-07-06T23:32:23.488745240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:32:23.488846 containerd[1519]: time="2025-07-06T23:32:23.488760920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:32:23.488897 containerd[1519]: 
time="2025-07-06T23:32:23.488880160Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:32:23.489358 containerd[1519]: time="2025-07-06T23:32:23.489333760Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:32:23.489438 containerd[1519]: time="2025-07-06T23:32:23.489420600Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:32:23.492963 containerd[1519]: time="2025-07-06T23:32:23.492930680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.492986200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493006240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493021240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493035960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493051080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493062520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493074920Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493087120Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493098600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493108520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493125640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493254080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493274840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:32:23.493565 containerd[1519]: time="2025-07-06T23:32:23.493290000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493301400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493319080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493330720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493341800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493352680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493364120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493375000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493389320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493574080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493588800Z" level=info msg="Start snapshots syncer" Jul 6 23:32:23.494063 containerd[1519]: time="2025-07-06T23:32:23.493622600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:32:23.494846 containerd[1519]: time="2025-07-06T23:32:23.493869560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.494872400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.494979880Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495103520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495136120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495152000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495169400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495185320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495202480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495218080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495252280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495273840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495293120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495337680Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:32:23.495556 containerd[1519]: time="2025-07-06T23:32:23.495353400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:32:23.495850 containerd[1519]: time="2025-07-06T23:32:23.495366800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:32:23.495850 containerd[1519]: time="2025-07-06T23:32:23.495381600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:32:23.495850 containerd[1519]: time="2025-07-06T23:32:23.495390960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:32:23.495850 containerd[1519]: time="2025-07-06T23:32:23.495406640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:32:23.495850 containerd[1519]: time="2025-07-06T23:32:23.495422760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:32:23.496049 containerd[1519]: time="2025-07-06T23:32:23.495951920Z" level=info msg="runtime interface created" Jul 6 23:32:23.496049 containerd[1519]: time="2025-07-06T23:32:23.495974680Z" level=info msg="created NRI interface" Jul 6 23:32:23.496049 containerd[1519]: time="2025-07-06T23:32:23.495993920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:32:23.496049 containerd[1519]: time="2025-07-06T23:32:23.496011040Z" level=info msg="Connect containerd service" Jul 6 23:32:23.496136 containerd[1519]: time="2025-07-06T23:32:23.496053880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:32:23.497085 containerd[1519]: 
time="2025-07-06T23:32:23.497055880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:32:23.611934 containerd[1519]: time="2025-07-06T23:32:23.611778920Z" level=info msg="Start subscribing containerd event" Jul 6 23:32:23.611934 containerd[1519]: time="2025-07-06T23:32:23.611851480Z" level=info msg="Start recovering state" Jul 6 23:32:23.611934 containerd[1519]: time="2025-07-06T23:32:23.611940200Z" level=info msg="Start event monitor" Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.611966240Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.611975360Z" level=info msg="Start streaming server" Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.611984080Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.611990760Z" level=info msg="runtime interface starting up..." Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.611996520Z" level=info msg="starting plugins..." Jul 6 23:32:23.612095 containerd[1519]: time="2025-07-06T23:32:23.612010880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:32:23.612713 containerd[1519]: time="2025-07-06T23:32:23.612630440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:32:23.612863 containerd[1519]: time="2025-07-06T23:32:23.612845760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:32:23.613231 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 6 23:32:23.614092 containerd[1519]: time="2025-07-06T23:32:23.613831040Z" level=info msg="containerd successfully booted in 0.141722s" Jul 6 23:32:23.617269 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:32:23.640836 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:32:23.646010 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:32:23.657268 tar[1506]: linux-arm64/README.md Jul 6 23:32:23.674831 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:32:23.675101 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:32:23.677829 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:32:23.680867 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:32:23.702857 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:32:23.706327 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:32:23.709038 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:32:23.710201 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:32:24.527621 systemd-networkd[1436]: eth0: Gained IPv6LL Jul 6 23:32:24.530412 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:32:24.532130 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:32:24.535907 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:32:24.538326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:24.553286 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:32:24.585528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:32:24.587774 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:32:24.588040 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 6 23:32:24.590107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:32:25.126508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:25.127861 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:32:25.128895 systemd[1]: Startup finished in 2.161s (kernel) + 5.692s (initrd) + 3.851s (userspace) = 11.705s. Jul 6 23:32:25.130062 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:32:25.589974 kubelet[1612]: E0706 23:32:25.589841 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:32:25.592227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:32:25.592385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:32:25.592763 systemd[1]: kubelet.service: Consumed 856ms CPU time, 257.1M memory peak. Jul 6 23:32:29.013008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:32:29.014153 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650). Jul 6 23:32:29.104867 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4 Jul 6 23:32:29.106897 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:29.115608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:32:29.116646 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 6 23:32:29.122091 systemd-logind[1497]: New session 1 of user core.
Jul 6 23:32:29.136828 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:32:29.139731 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:32:29.161044 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:32:29.163295 systemd-logind[1497]: New session c1 of user core.
Jul 6 23:32:29.269811 systemd[1629]: Queued start job for default target default.target.
Jul 6 23:32:29.276832 systemd[1629]: Created slice app.slice - User Application Slice.
Jul 6 23:32:29.276868 systemd[1629]: Reached target paths.target - Paths.
Jul 6 23:32:29.276910 systemd[1629]: Reached target timers.target - Timers.
Jul 6 23:32:29.278253 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:32:29.288426 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:32:29.288623 systemd[1629]: Reached target sockets.target - Sockets.
Jul 6 23:32:29.288730 systemd[1629]: Reached target basic.target - Basic System.
Jul 6 23:32:29.288854 systemd[1629]: Reached target default.target - Main User Target.
Jul 6 23:32:29.288899 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:32:29.289002 systemd[1629]: Startup finished in 119ms.
Jul 6 23:32:29.290279 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:32:29.351597 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658).
Jul 6 23:32:29.404441 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:29.405819 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:29.410628 systemd-logind[1497]: New session 2 of user core.
Jul 6 23:32:29.418979 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:32:29.474285 sshd[1642]: Connection closed by 10.0.0.1 port 53658
Jul 6 23:32:29.474808 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:29.492875 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:53658.service: Deactivated successfully.
Jul 6 23:32:29.494354 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:32:29.496598 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:32:29.499260 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:53664.service - OpenSSH per-connection server daemon (10.0.0.1:53664).
Jul 6 23:32:29.499766 systemd-logind[1497]: Removed session 2.
Jul 6 23:32:29.548424 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 53664 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:29.549629 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:29.553952 systemd-logind[1497]: New session 3 of user core.
Jul 6 23:32:29.569972 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:32:29.619927 sshd[1650]: Connection closed by 10.0.0.1 port 53664
Jul 6 23:32:29.619749 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:29.628391 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:53664.service: Deactivated successfully.
Jul 6 23:32:29.631289 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:32:29.632870 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:32:29.634924 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:53680.service - OpenSSH per-connection server daemon (10.0.0.1:53680).
Jul 6 23:32:29.635992 systemd-logind[1497]: Removed session 3.
Jul 6 23:32:29.686581 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 53680 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:29.689769 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:29.697575 systemd-logind[1497]: New session 4 of user core.
Jul 6 23:32:29.710992 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:32:29.762787 sshd[1658]: Connection closed by 10.0.0.1 port 53680
Jul 6 23:32:29.763431 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:29.783554 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:53680.service: Deactivated successfully.
Jul 6 23:32:29.789969 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:32:29.790731 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:32:29.800738 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:53692.service - OpenSSH per-connection server daemon (10.0.0.1:53692).
Jul 6 23:32:29.802125 systemd-logind[1497]: Removed session 4.
Jul 6 23:32:29.841667 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 53692 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:29.843189 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:29.847854 systemd-logind[1497]: New session 5 of user core.
Jul 6 23:32:29.855986 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:32:29.921986 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:32:29.922288 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:32:29.944647 sudo[1667]: pam_unix(sudo:session): session closed for user root
Jul 6 23:32:29.967845 sshd[1666]: Connection closed by 10.0.0.1 port 53692
Jul 6 23:32:29.969138 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:29.981684 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:53692.service: Deactivated successfully.
Jul 6 23:32:29.985503 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:32:29.986413 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:32:29.990213 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:53700.service - OpenSSH per-connection server daemon (10.0.0.1:53700).
Jul 6 23:32:29.991132 systemd-logind[1497]: Removed session 5.
Jul 6 23:32:30.049548 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 53700 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:30.051101 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:30.056472 systemd-logind[1497]: New session 6 of user core.
Jul 6 23:32:30.077989 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:32:30.130519 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:32:30.130824 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:32:30.208523 sudo[1677]: pam_unix(sudo:session): session closed for user root
Jul 6 23:32:30.214553 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:32:30.214974 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:32:30.225875 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:32:30.289282 augenrules[1699]: No rules
Jul 6 23:32:30.290644 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:32:30.291968 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:32:30.293004 sudo[1676]: pam_unix(sudo:session): session closed for user root
Jul 6 23:32:30.295820 sshd[1675]: Connection closed by 10.0.0.1 port 53700
Jul 6 23:32:30.294636 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:30.315672 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:53700.service: Deactivated successfully.
Jul 6 23:32:30.317418 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:32:30.318338 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:32:30.321623 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:53714.service - OpenSSH per-connection server daemon (10.0.0.1:53714).
Jul 6 23:32:30.322243 systemd-logind[1497]: Removed session 6.
Jul 6 23:32:30.386705 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53714 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:32:30.388134 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:30.392333 systemd-logind[1497]: New session 7 of user core.
Jul 6 23:32:30.417276 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:32:30.469101 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:32:30.469378 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:32:30.878507 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:32:30.900184 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:32:31.212868 dockerd[1731]: time="2025-07-06T23:32:31.212731392Z" level=info msg="Starting up"
Jul 6 23:32:31.213878 dockerd[1731]: time="2025-07-06T23:32:31.213847852Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:32:31.238626 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport738036311-merged.mount: Deactivated successfully.
Jul 6 23:32:31.259688 dockerd[1731]: time="2025-07-06T23:32:31.259636928Z" level=info msg="Loading containers: start."
Jul 6 23:32:31.268835 kernel: Initializing XFRM netlink socket
Jul 6 23:32:31.485240 systemd-networkd[1436]: docker0: Link UP
Jul 6 23:32:31.488914 dockerd[1731]: time="2025-07-06T23:32:31.488876088Z" level=info msg="Loading containers: done."
Jul 6 23:32:31.503431 dockerd[1731]: time="2025-07-06T23:32:31.503371389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:32:31.503603 dockerd[1731]: time="2025-07-06T23:32:31.503465600Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:32:31.503631 dockerd[1731]: time="2025-07-06T23:32:31.503602659Z" level=info msg="Initializing buildkit"
Jul 6 23:32:31.529621 dockerd[1731]: time="2025-07-06T23:32:31.529568913Z" level=info msg="Completed buildkit initialization"
Jul 6 23:32:31.535973 dockerd[1731]: time="2025-07-06T23:32:31.535820055Z" level=info msg="Daemon has completed initialization"
Jul 6 23:32:31.535973 dockerd[1731]: time="2025-07-06T23:32:31.535888119Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:32:31.537183 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:32:32.120029 containerd[1519]: time="2025-07-06T23:32:32.119908705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 6 23:32:32.717224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469240068.mount: Deactivated successfully.
Jul 6 23:32:33.740600 containerd[1519]: time="2025-07-06T23:32:33.740544156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:33.741482 containerd[1519]: time="2025-07-06T23:32:33.741161530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 6 23:32:33.741866 containerd[1519]: time="2025-07-06T23:32:33.741832127Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:33.744414 containerd[1519]: time="2025-07-06T23:32:33.744375214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:33.746000 containerd[1519]: time="2025-07-06T23:32:33.745961183Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.626011332s"
Jul 6 23:32:33.746053 containerd[1519]: time="2025-07-06T23:32:33.746004020Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 6 23:32:33.746838 containerd[1519]: time="2025-07-06T23:32:33.746803814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 6 23:32:34.780590 containerd[1519]: time="2025-07-06T23:32:34.780539933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:34.781791 containerd[1519]: time="2025-07-06T23:32:34.781596168Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 6 23:32:34.782485 containerd[1519]: time="2025-07-06T23:32:34.782438572Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:34.784956 containerd[1519]: time="2025-07-06T23:32:34.784892612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:34.786447 containerd[1519]: time="2025-07-06T23:32:34.786409279Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.039565064s"
Jul 6 23:32:34.786447 containerd[1519]: time="2025-07-06T23:32:34.786445200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 6 23:32:34.786967 containerd[1519]: time="2025-07-06T23:32:34.786930091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 6 23:32:35.842633 containerd[1519]: time="2025-07-06T23:32:35.842585789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:35.842892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:32:35.844206 containerd[1519]: time="2025-07-06T23:32:35.844041701Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 6 23:32:35.844308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:32:35.844998 containerd[1519]: time="2025-07-06T23:32:35.844859580Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:35.847903 containerd[1519]: time="2025-07-06T23:32:35.847855945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:35.848846 containerd[1519]: time="2025-07-06T23:32:35.848805696Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.061834245s"
Jul 6 23:32:35.848905 containerd[1519]: time="2025-07-06T23:32:35.848849196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 6 23:32:35.849509 containerd[1519]: time="2025-07-06T23:32:35.849344915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 6 23:32:35.989161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:32:36.003152 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:32:36.040220 kubelet[2016]: E0706 23:32:36.040168 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:32:36.043411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:32:36.043543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:32:36.043901 systemd[1]: kubelet.service: Consumed 143ms CPU time, 106.3M memory peak.
Jul 6 23:32:36.849461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522596084.mount: Deactivated successfully.
Jul 6 23:32:37.224795 containerd[1519]: time="2025-07-06T23:32:37.224358164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:37.226088 containerd[1519]: time="2025-07-06T23:32:37.226042874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 6 23:32:37.227905 containerd[1519]: time="2025-07-06T23:32:37.227860664Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:37.229721 containerd[1519]: time="2025-07-06T23:32:37.229682236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:37.230281 containerd[1519]: time="2025-07-06T23:32:37.230242841Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.380864403s"
Jul 6 23:32:37.230318 containerd[1519]: time="2025-07-06T23:32:37.230282559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 6 23:32:37.230842 containerd[1519]: time="2025-07-06T23:32:37.230815276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:32:37.704911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777186388.mount: Deactivated successfully.
Jul 6 23:32:38.396075 containerd[1519]: time="2025-07-06T23:32:38.396020445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:38.396553 containerd[1519]: time="2025-07-06T23:32:38.396524775Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 6 23:32:38.397538 containerd[1519]: time="2025-07-06T23:32:38.397499538Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:38.400166 containerd[1519]: time="2025-07-06T23:32:38.400128232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:38.401321 containerd[1519]: time="2025-07-06T23:32:38.401287726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.170438655s"
Jul 6 23:32:38.401413 containerd[1519]: time="2025-07-06T23:32:38.401396980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:32:38.401923 containerd[1519]: time="2025-07-06T23:32:38.401900064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:32:38.820400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293918209.mount: Deactivated successfully.
Jul 6 23:32:38.826218 containerd[1519]: time="2025-07-06T23:32:38.826170164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:32:38.826845 containerd[1519]: time="2025-07-06T23:32:38.826812822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:32:38.827873 containerd[1519]: time="2025-07-06T23:32:38.827834270Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:32:38.830327 containerd[1519]: time="2025-07-06T23:32:38.829754199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:32:38.830502 containerd[1519]: time="2025-07-06T23:32:38.830465417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.534592ms"
Jul 6 23:32:38.830535 containerd[1519]: time="2025-07-06T23:32:38.830502653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:32:38.831140 containerd[1519]: time="2025-07-06T23:32:38.830981087Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 6 23:32:39.382028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906744855.mount: Deactivated successfully.
Jul 6 23:32:41.094744 containerd[1519]: time="2025-07-06T23:32:41.094686226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:41.095820 containerd[1519]: time="2025-07-06T23:32:41.095745843Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 6 23:32:41.200270 containerd[1519]: time="2025-07-06T23:32:41.200221302Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:41.207329 containerd[1519]: time="2025-07-06T23:32:41.207285576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:32:41.208867 containerd[1519]: time="2025-07-06T23:32:41.208815331Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.377800641s"
Jul 6 23:32:41.208867 containerd[1519]: time="2025-07-06T23:32:41.208849973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 6 23:32:45.884937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:32:45.885090 systemd[1]: kubelet.service: Consumed 143ms CPU time, 106.3M memory peak.
Jul 6 23:32:45.887401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:32:45.911707 systemd[1]: Reload requested from client PID 2174 ('systemctl') (unit session-7.scope)...
Jul 6 23:32:45.911969 systemd[1]: Reloading...
Jul 6 23:32:46.000822 zram_generator::config[2219]: No configuration found.
Jul 6 23:32:46.100212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:32:46.188541 systemd[1]: Reloading finished in 276 ms.
Jul 6 23:32:46.240212 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:32:46.240292 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:32:46.240526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:32:46.240582 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95M memory peak.
Jul 6 23:32:46.242232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:32:46.381653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:32:46.386077 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:32:46.421621 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:32:46.421621 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:32:46.421621 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:32:46.422031 kubelet[2262]: I0706 23:32:46.421671 2262 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:32:46.855453 kubelet[2262]: I0706 23:32:46.855398 2262 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 6 23:32:46.855453 kubelet[2262]: I0706 23:32:46.855449 2262 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:32:46.855745 kubelet[2262]: I0706 23:32:46.855729 2262 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 6 23:32:46.901686 kubelet[2262]: E0706 23:32:46.901620 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:46.904987 kubelet[2262]: I0706 23:32:46.904880 2262 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:32:46.918615 kubelet[2262]: I0706 23:32:46.918586 2262 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:32:46.921782 kubelet[2262]: I0706 23:32:46.921753 2262 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:32:46.922647 kubelet[2262]: I0706 23:32:46.922597 2262 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:32:46.922844 kubelet[2262]: I0706 23:32:46.922638 2262 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:32:46.922989 kubelet[2262]: I0706 23:32:46.922970 2262 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:32:46.922989 kubelet[2262]: I0706 23:32:46.922983 2262 container_manager_linux.go:304] "Creating device plugin manager"
Jul 6 23:32:46.923250 kubelet[2262]: I0706 23:32:46.923230 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:32:46.925803 kubelet[2262]: I0706 23:32:46.925764 2262 kubelet.go:446] "Attempting to sync node with API server"
Jul 6 23:32:46.925842 kubelet[2262]: I0706 23:32:46.925805 2262 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:32:46.926698 kubelet[2262]: I0706 23:32:46.926673 2262 kubelet.go:352] "Adding apiserver pod source"
Jul 6 23:32:46.926698 kubelet[2262]: I0706 23:32:46.926700 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:32:46.927623 kubelet[2262]: W0706 23:32:46.927532 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:46.927623 kubelet[2262]: E0706 23:32:46.927590 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:46.930466 kubelet[2262]: I0706 23:32:46.930437 2262 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:32:46.931020 kubelet[2262]: W0706 23:32:46.930957 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:46.931020 kubelet[2262]: E0706 23:32:46.931000 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:46.936690 kubelet[2262]: I0706 23:32:46.935925 2262 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:32:46.936690 kubelet[2262]: W0706 23:32:46.936054 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:32:46.937056 kubelet[2262]: I0706 23:32:46.937037 2262 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:32:46.937148 kubelet[2262]: I0706 23:32:46.937138 2262 server.go:1287] "Started kubelet"
Jul 6 23:32:46.938493 kubelet[2262]: I0706 23:32:46.938462 2262 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:32:46.940057 kubelet[2262]: I0706 23:32:46.940028 2262 server.go:479] "Adding debug handlers to kubelet server"
Jul 6 23:32:46.940195 kubelet[2262]: I0706 23:32:46.940153 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:32:46.940616 kubelet[2262]: I0706 23:32:46.940420 2262 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:32:46.940946 kubelet[2262]: I0706 23:32:46.940927 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:32:46.941536 kubelet[2262]: I0706 23:32:46.941500 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:32:46.941908 kubelet[2262]: I0706 23:32:46.941894 2262 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:32:46.942464 kubelet[2262]: I0706 23:32:46.942443 2262 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:32:46.942532 kubelet[2262]: I0706 23:32:46.942504 2262 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:32:46.943273 kubelet[2262]: E0706 23:32:46.943226 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:46.946660 kubelet[2262]: E0706 23:32:46.946478 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms"
Jul 6 23:32:46.946660 kubelet[2262]: W0706 23:32:46.946561 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:46.946793 kubelet[2262]: E0706 23:32:46.946607 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:46.948975 kubelet[2262]: E0706 23:32:46.948938 2262 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:32:46.949548 kubelet[2262]: E0706 23:32:46.949275 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcd88aae9e50b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:32:46.937113867 +0000 UTC m=+0.548189248,LastTimestamp:2025-07-06 23:32:46.937113867 +0000 UTC m=+0.548189248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 6 23:32:46.949970 kubelet[2262]: I0706 23:32:46.949947 2262 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:32:46.949970 kubelet[2262]: I0706 23:32:46.949968 2262 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:32:46.950059 kubelet[2262]: I0706 23:32:46.950049 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:32:46.964624 kubelet[2262]: I0706 23:32:46.964579 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:32:46.968497 kubelet[2262]: I0706 23:32:46.968469 2262 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:32:46.968497 kubelet[2262]: I0706 23:32:46.968490 2262 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:32:46.968682 kubelet[2262]: I0706 23:32:46.968508 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:32:46.969120 kubelet[2262]: I0706 23:32:46.969101 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:32:46.969185 kubelet[2262]: I0706 23:32:46.969176 2262 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 6 23:32:46.969435 kubelet[2262]: I0706 23:32:46.969229 2262 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:32:46.969435 kubelet[2262]: I0706 23:32:46.969238 2262 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 6 23:32:46.969435 kubelet[2262]: E0706 23:32:46.969279 2262 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:32:47.045280 kubelet[2262]: E0706 23:32:47.045247 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:47.069448 kubelet[2262]: E0706 23:32:47.069409 2262 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 6 23:32:47.081159 kubelet[2262]: W0706 23:32:47.081049 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:47.081159 kubelet[2262]: E0706 23:32:47.081125 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:47.081247 kubelet[2262]: I0706 23:32:47.081165 2262 policy_none.go:49] "None policy: Start"
Jul 6 23:32:47.081247 kubelet[2262]: I0706 23:32:47.081187 2262 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:32:47.081247 kubelet[2262]: I0706 23:32:47.081200 2262 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:32:47.086353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:32:47.104301 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:32:47.108022 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:32:47.123890 kubelet[2262]: I0706 23:32:47.123826 2262 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:32:47.124166 kubelet[2262]: I0706 23:32:47.124032 2262 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:32:47.124166 kubelet[2262]: I0706 23:32:47.124064 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:32:47.124392 kubelet[2262]: I0706 23:32:47.124346 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:32:47.126231 kubelet[2262]: E0706 23:32:47.126187 2262 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:32:47.126231 kubelet[2262]: E0706 23:32:47.126238 2262 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 6 23:32:47.150104 kubelet[2262]: E0706 23:32:47.150058 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms"
Jul 6 23:32:47.226593 kubelet[2262]: I0706 23:32:47.226522 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:32:47.227180 kubelet[2262]: E0706 23:32:47.227146 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 6 23:32:47.277941 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 6 23:32:47.313284 kubelet[2262]: E0706 23:32:47.313226 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:47.316055 systemd[1]: Created slice kubepods-burstable-pode73e5e1cb89fe288fcbb88122b0b6db0.slice - libcontainer container kubepods-burstable-pode73e5e1cb89fe288fcbb88122b0b6db0.slice.
Jul 6 23:32:47.328044 kubelet[2262]: E0706 23:32:47.327880 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:47.330157 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 6 23:32:47.331570 kubelet[2262]: E0706 23:32:47.331524 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:47.350978 kubelet[2262]: I0706 23:32:47.350883 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:47.350978 kubelet[2262]: I0706 23:32:47.350926 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:47.350978 kubelet[2262]: I0706 23:32:47.350956 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 6 23:32:47.350978 kubelet[2262]: I0706 23:32:47.350983 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:47.351188 kubelet[2262]: I0706 23:32:47.351000 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:47.351188 kubelet[2262]: I0706 23:32:47.351019 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:47.351188 kubelet[2262]: I0706 23:32:47.351034 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:32:47.351188 kubelet[2262]: I0706 23:32:47.351050 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:32:47.351188 kubelet[2262]: I0706 23:32:47.351065 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:32:47.429329 kubelet[2262]: I0706 23:32:47.429235 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:32:47.429759 kubelet[2262]: E0706 23:32:47.429726 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 6 23:32:47.551491 kubelet[2262]: E0706 23:32:47.551435 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms"
Jul 6 23:32:47.613997 kubelet[2262]: E0706 23:32:47.613956 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.614964 containerd[1519]: time="2025-07-06T23:32:47.614909358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 6 23:32:47.629257 kubelet[2262]: E0706 23:32:47.629217 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.630882 containerd[1519]: time="2025-07-06T23:32:47.630851841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e73e5e1cb89fe288fcbb88122b0b6db0,Namespace:kube-system,Attempt:0,}"
Jul 6 23:32:47.632580 kubelet[2262]: E0706 23:32:47.632547 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.633613 containerd[1519]: time="2025-07-06T23:32:47.633573878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 6 23:32:47.635687 containerd[1519]: time="2025-07-06T23:32:47.635634345Z" level=info msg="connecting to shim d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90" address="unix:///run/containerd/s/9943171d0ef18b1e47af9a1536a1c848bf3baf43056a219969e75b57c07396ec" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:32:47.663455 containerd[1519]: time="2025-07-06T23:32:47.663393928Z" level=info msg="connecting to shim 58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1" address="unix:///run/containerd/s/5bc0b9270831655185d95e9cb38d6e1f44738bae528f4ad5d69532ffcf1fced2" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:32:47.671927 containerd[1519]: time="2025-07-06T23:32:47.671196582Z" level=info msg="connecting to shim a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4" address="unix:///run/containerd/s/fca7a81010d00541205dad7e9cae15724639929190a164a3a0be677b6bab5d68" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:32:47.672968 systemd[1]: Started cri-containerd-d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90.scope - libcontainer container d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90.
Jul 6 23:32:47.704959 systemd[1]: Started cri-containerd-58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1.scope - libcontainer container 58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1.
Jul 6 23:32:47.706138 systemd[1]: Started cri-containerd-a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4.scope - libcontainer container a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4.
Jul 6 23:32:47.743372 containerd[1519]: time="2025-07-06T23:32:47.743317317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90\""
Jul 6 23:32:47.744795 kubelet[2262]: E0706 23:32:47.744753 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.746871 containerd[1519]: time="2025-07-06T23:32:47.746839983Z" level=info msg="CreateContainer within sandbox \"d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:32:47.751098 containerd[1519]: time="2025-07-06T23:32:47.751064563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e73e5e1cb89fe288fcbb88122b0b6db0,Namespace:kube-system,Attempt:0,} returns sandbox id \"58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1\""
Jul 6 23:32:47.751960 kubelet[2262]: E0706 23:32:47.751935 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.753682 containerd[1519]: time="2025-07-06T23:32:47.753651385Z" level=info msg="CreateContainer within sandbox \"58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:32:47.755015 containerd[1519]: time="2025-07-06T23:32:47.754921960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4\""
Jul 6 23:32:47.755994 kubelet[2262]: E0706 23:32:47.755973 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:47.758067 containerd[1519]: time="2025-07-06T23:32:47.758035337Z" level=info msg="CreateContainer within sandbox \"a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:32:47.758340 containerd[1519]: time="2025-07-06T23:32:47.758308851Z" level=info msg="Container cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:32:47.760816 containerd[1519]: time="2025-07-06T23:32:47.760788183Z" level=info msg="Container 3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:32:47.765614 containerd[1519]: time="2025-07-06T23:32:47.765545247Z" level=info msg="CreateContainer within sandbox \"d80eb76f4c02907ee912222f0d6244df72f9e25860281232857e5415b8b45a90\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8\""
Jul 6 23:32:47.766413 containerd[1519]: time="2025-07-06T23:32:47.766386261Z" level=info msg="StartContainer for \"cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8\""
Jul 6 23:32:47.767486 containerd[1519]: time="2025-07-06T23:32:47.767456158Z" level=info msg="connecting to shim cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8" address="unix:///run/containerd/s/9943171d0ef18b1e47af9a1536a1c848bf3baf43056a219969e75b57c07396ec" protocol=ttrpc version=3
Jul 6 23:32:47.771921 containerd[1519]: time="2025-07-06T23:32:47.771882577Z" level=info msg="Container 93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:32:47.773660 containerd[1519]: time="2025-07-06T23:32:47.773561880Z" level=info msg="CreateContainer within sandbox \"58904cef0e96cee32a0265151b6f88d255725d18376376cf65ef11b5fb9ff6a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9\""
Jul 6 23:32:47.774036 containerd[1519]: time="2025-07-06T23:32:47.774016201Z" level=info msg="StartContainer for \"3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9\""
Jul 6 23:32:47.775707 containerd[1519]: time="2025-07-06T23:32:47.775675833Z" level=info msg="connecting to shim 3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9" address="unix:///run/containerd/s/5bc0b9270831655185d95e9cb38d6e1f44738bae528f4ad5d69532ffcf1fced2" protocol=ttrpc version=3
Jul 6 23:32:47.778647 containerd[1519]: time="2025-07-06T23:32:47.778605880Z" level=info msg="CreateContainer within sandbox \"a044f3f1628f2775b9baae48f2cd710e5cacdb3546a9397b483bdf45be6d67b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55\""
Jul 6 23:32:47.779210 containerd[1519]: time="2025-07-06T23:32:47.779168772Z" level=info msg="StartContainer for \"93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55\""
Jul 6 23:32:47.780514 containerd[1519]: time="2025-07-06T23:32:47.780488265Z" level=info msg="connecting to shim 93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55" address="unix:///run/containerd/s/fca7a81010d00541205dad7e9cae15724639929190a164a3a0be677b6bab5d68" protocol=ttrpc version=3
Jul 6 23:32:47.788942 systemd[1]: Started cri-containerd-cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8.scope - libcontainer container cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8.
Jul 6 23:32:47.792893 systemd[1]: Started cri-containerd-3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9.scope - libcontainer container 3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9.
Jul 6 23:32:47.798708 systemd[1]: Started cri-containerd-93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55.scope - libcontainer container 93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55.
Jul 6 23:32:47.839391 kubelet[2262]: I0706 23:32:47.835064 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:32:47.839391 kubelet[2262]: E0706 23:32:47.835390 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 6 23:32:47.850129 kubelet[2262]: W0706 23:32:47.849832 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:47.850129 kubelet[2262]: E0706 23:32:47.850058 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:47.854737 containerd[1519]: time="2025-07-06T23:32:47.854695668Z" level=info msg="StartContainer for \"cb8d4d0bf4264c85c0673ed39bed8125a1dd1d1cba3bb0c1790928a6844d2ed8\" returns successfully"
Jul 6 23:32:47.865971 containerd[1519]: time="2025-07-06T23:32:47.861255471Z" level=info msg="StartContainer for \"3f47fd36fe0e900a37aae15c2dc53c3e58c6f5e113c76eac70267d04321437d9\" returns successfully"
Jul 6 23:32:47.868785 containerd[1519]: time="2025-07-06T23:32:47.868654725Z" level=info msg="StartContainer for \"93c1514d91d7097904a8059feabbab11171a2cc85814c201e348bd6f23466b55\" returns successfully"
Jul 6 23:32:47.942485 kubelet[2262]: W0706 23:32:47.942175 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:47.942485 kubelet[2262]: E0706 23:32:47.942245 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:47.955120 kubelet[2262]: W0706 23:32:47.951832 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 6 23:32:47.955120 kubelet[2262]: E0706 23:32:47.951881 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:32:48.011950 kubelet[2262]: E0706 23:32:48.005859 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:48.011950 kubelet[2262]: E0706 23:32:48.006003 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:48.011950 kubelet[2262]: E0706 23:32:48.008237 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:48.011950 kubelet[2262]: E0706 23:32:48.008338 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:48.012579 kubelet[2262]: E0706 23:32:48.012253 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:48.012579 kubelet[2262]: E0706 23:32:48.012466 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:48.637274 kubelet[2262]: I0706 23:32:48.637246 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:32:49.012285 kubelet[2262]: E0706 23:32:49.012191 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:49.012381 kubelet[2262]: E0706 23:32:49.012322 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:49.013848 kubelet[2262]: E0706 23:32:49.013815 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:49.013992 kubelet[2262]: E0706 23:32:49.013977 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:49.422003 kubelet[2262]: E0706 23:32:49.421894 2262 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 6 23:32:49.539920 kubelet[2262]: I0706 23:32:49.539477 2262 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 6 23:32:49.540311 kubelet[2262]: E0706 23:32:49.540146 2262 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 6 23:32:49.551026 kubelet[2262]: E0706 23:32:49.550984 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:49.651941 kubelet[2262]: E0706 23:32:49.651887 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:49.752831 kubelet[2262]: E0706 23:32:49.752660 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:49.853549 kubelet[2262]: E0706 23:32:49.853505 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:49.953627 kubelet[2262]: E0706 23:32:49.953585 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:32:50.013783 kubelet[2262]: E0706 23:32:50.013623 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:32:50.014121 kubelet[2262]: E0706 23:32:50.014065 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:32:50.043334 kubelet[2262]: I0706 23:32:50.043299 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:32:50.049450 kubelet[2262]: E0706 23:32:50.049288 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:32:50.049450 kubelet[2262]: I0706 23:32:50.049323 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:50.051204 kubelet[2262]: E0706 23:32:50.051119 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:32:50.051204 kubelet[2262]: I0706 23:32:50.051143 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:32:50.052678 kubelet[2262]: E0706 23:32:50.052642 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:32:50.929316 kubelet[2262]: I0706 23:32:50.929066 2262 apiserver.go:52] "Watching apiserver"
Jul 6 23:32:50.942751 kubelet[2262]: I0706 23:32:50.942691 2262 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:32:51.344283 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)...
Jul 6 23:32:51.344299 systemd[1]: Reloading...
Jul 6 23:32:51.345606 kubelet[2262]: I0706 23:32:51.345120 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:51.351263 kubelet[2262]: E0706 23:32:51.351233 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:51.437662 zram_generator::config[2578]: No configuration found. Jul 6 23:32:51.500375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:51.596278 systemd[1]: Reloading finished in 251 ms. Jul 6 23:32:51.616716 kubelet[2262]: I0706 23:32:51.616603 2262 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:32:51.616973 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:51.628160 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:32:51.629824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:51.629870 systemd[1]: kubelet.service: Consumed 951ms CPU time, 128.9M memory peak. Jul 6 23:32:51.632025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:51.766453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:51.770271 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:32:51.808734 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:32:51.808734 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:32:51.808734 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:32:51.809088 kubelet[2620]: I0706 23:32:51.808808 2620 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:32:51.816324 kubelet[2620]: I0706 23:32:51.816248 2620 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:32:51.816324 kubelet[2620]: I0706 23:32:51.816280 2620 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:32:51.816791 kubelet[2620]: I0706 23:32:51.816744 2620 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:32:51.818066 kubelet[2620]: I0706 23:32:51.818038 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:32:51.820384 kubelet[2620]: I0706 23:32:51.820352 2620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:32:51.824618 kubelet[2620]: I0706 23:32:51.824594 2620 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:32:51.827680 kubelet[2620]: I0706 23:32:51.827649 2620 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:32:51.827872 kubelet[2620]: I0706 23:32:51.827834 2620 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:32:51.828046 kubelet[2620]: I0706 23:32:51.827860 2620 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:32:51.828132 kubelet[2620]: I0706 23:32:51.828048 2620 topology_manager.go:138] "Creating topology manager with none policy" Jul 
6 23:32:51.828132 kubelet[2620]: I0706 23:32:51.828056 2620 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:32:51.828132 kubelet[2620]: I0706 23:32:51.828098 2620 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:51.828274 kubelet[2620]: I0706 23:32:51.828242 2620 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:32:51.828274 kubelet[2620]: I0706 23:32:51.828257 2620 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:32:51.828808 kubelet[2620]: I0706 23:32:51.828279 2620 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:32:51.828808 kubelet[2620]: I0706 23:32:51.828293 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:32:51.829440 kubelet[2620]: I0706 23:32:51.829420 2620 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:32:51.831013 kubelet[2620]: I0706 23:32:51.830908 2620 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:32:51.832077 kubelet[2620]: I0706 23:32:51.832050 2620 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:32:51.832704 kubelet[2620]: I0706 23:32:51.832694 2620 server.go:1287] "Started kubelet" Jul 6 23:32:51.834113 kubelet[2620]: I0706 23:32:51.834051 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:32:51.834351 kubelet[2620]: I0706 23:32:51.834323 2620 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:32:51.834409 kubelet[2620]: I0706 23:32:51.834386 2620 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:32:51.836372 kubelet[2620]: I0706 23:32:51.836341 2620 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:32:51.840152 kubelet[2620]: I0706 23:32:51.840118 2620 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:32:51.841549 kubelet[2620]: I0706 23:32:51.841527 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:32:51.841995 kubelet[2620]: E0706 23:32:51.841951 2620 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:32:51.844659 kubelet[2620]: E0706 23:32:51.842040 2620 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:32:51.844659 kubelet[2620]: I0706 23:32:51.842069 2620 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:32:51.844659 kubelet[2620]: I0706 23:32:51.842250 2620 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:32:51.844659 kubelet[2620]: I0706 23:32:51.842358 2620 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:32:51.844659 kubelet[2620]: I0706 23:32:51.843917 2620 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:32:51.844659 kubelet[2620]: I0706 23:32:51.844023 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:32:51.851962 kubelet[2620]: I0706 23:32:51.850153 2620 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:32:51.866534 kubelet[2620]: I0706 23:32:51.864626 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:32:51.866534 kubelet[2620]: I0706 23:32:51.866457 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:32:51.866534 kubelet[2620]: I0706 23:32:51.866480 2620 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:32:51.866534 kubelet[2620]: I0706 23:32:51.866496 2620 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:32:51.866534 kubelet[2620]: I0706 23:32:51.866503 2620 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:32:51.866968 kubelet[2620]: E0706 23:32:51.866541 2620 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:32:51.918290 kubelet[2620]: I0706 23:32:51.918258 2620 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:32:51.918290 kubelet[2620]: I0706 23:32:51.918275 2620 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:32:51.918290 kubelet[2620]: I0706 23:32:51.918297 2620 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:51.918485 kubelet[2620]: I0706 23:32:51.918467 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:32:51.918514 kubelet[2620]: I0706 23:32:51.918485 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:32:51.918514 kubelet[2620]: I0706 23:32:51.918507 2620 policy_none.go:49] "None policy: Start" Jul 6 23:32:51.918558 kubelet[2620]: I0706 23:32:51.918516 2620 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:32:51.918558 kubelet[2620]: I0706 23:32:51.918525 2620 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:32:51.918623 kubelet[2620]: I0706 23:32:51.918613 2620 state_mem.go:75] "Updated machine memory state" Jul 6 23:32:51.922705 kubelet[2620]: I0706 23:32:51.922682 2620 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:32:51.923122 kubelet[2620]: I0706 23:32:51.922884 
2620 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:32:51.923122 kubelet[2620]: I0706 23:32:51.922908 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:32:51.923674 kubelet[2620]: I0706 23:32:51.923644 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:32:51.924031 kubelet[2620]: E0706 23:32:51.924010 2620 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:32:51.968069 kubelet[2620]: I0706 23:32:51.968023 2620 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:51.968180 kubelet[2620]: I0706 23:32:51.968024 2620 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:32:51.968312 kubelet[2620]: I0706 23:32:51.968024 2620 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:32:51.973691 kubelet[2620]: E0706 23:32:51.973659 2620 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.027114 kubelet[2620]: I0706 23:32:52.027089 2620 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:32:52.034288 kubelet[2620]: I0706 23:32:52.034258 2620 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:32:52.034405 kubelet[2620]: I0706 23:32:52.034362 2620 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:32:52.043569 kubelet[2620]: I0706 23:32:52.043535 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") 
pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.043741 kubelet[2620]: I0706 23:32:52.043688 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.043741 kubelet[2620]: I0706 23:32:52.043715 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:32:52.043909 kubelet[2620]: I0706 23:32:52.043731 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:32:52.043909 kubelet[2620]: I0706 23:32:52.043866 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:32:52.043909 kubelet[2620]: I0706 23:32:52.043887 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.044075 kubelet[2620]: I0706 23:32:52.044007 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.044075 kubelet[2620]: I0706 23:32:52.044039 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:32:52.044075 kubelet[2620]: I0706 23:32:52.044057 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e73e5e1cb89fe288fcbb88122b0b6db0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e73e5e1cb89fe288fcbb88122b0b6db0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:32:52.274036 kubelet[2620]: E0706 23:32:52.273934 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.274036 kubelet[2620]: E0706 23:32:52.274018 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.274197 kubelet[2620]: E0706 23:32:52.274038 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.829419 kubelet[2620]: I0706 23:32:52.829344 2620 apiserver.go:52] "Watching apiserver" Jul 6 23:32:52.842485 kubelet[2620]: I0706 23:32:52.842437 2620 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:32:52.901823 kubelet[2620]: I0706 23:32:52.901753 2620 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:32:52.902266 kubelet[2620]: E0706 23:32:52.902030 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.902429 kubelet[2620]: E0706 23:32:52.902410 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.951667 kubelet[2620]: E0706 23:32:52.951550 2620 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:32:52.951972 kubelet[2620]: E0706 23:32:52.951952 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:52.965558 kubelet[2620]: I0706 23:32:52.964709 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9646900010000001 podStartE2EDuration="1.964690001s" podCreationTimestamp="2025-07-06 23:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:52.951903712 +0000 UTC m=+1.178793569" watchObservedRunningTime="2025-07-06 23:32:52.964690001 +0000 UTC m=+1.191579858" Jul 6 23:32:52.965990 kubelet[2620]: 
I0706 23:32:52.965870 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9658575109999998 podStartE2EDuration="1.965857511s" podCreationTimestamp="2025-07-06 23:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:52.965554344 +0000 UTC m=+1.192444242" watchObservedRunningTime="2025-07-06 23:32:52.965857511 +0000 UTC m=+1.192747408" Jul 6 23:32:52.981242 kubelet[2620]: I0706 23:32:52.981183 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9811660930000001 podStartE2EDuration="1.981166093s" podCreationTimestamp="2025-07-06 23:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:52.980171123 +0000 UTC m=+1.207061020" watchObservedRunningTime="2025-07-06 23:32:52.981166093 +0000 UTC m=+1.208056030" Jul 6 23:32:53.903479 kubelet[2620]: E0706 23:32:53.903329 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:53.904257 kubelet[2620]: E0706 23:32:53.903945 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:56.843877 kubelet[2620]: E0706 23:32:56.843834 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:58.504872 kubelet[2620]: I0706 23:32:58.504811 2620 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:32:58.505614 
containerd[1519]: time="2025-07-06T23:32:58.505566627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:32:58.506243 kubelet[2620]: I0706 23:32:58.506014 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:32:59.162198 systemd[1]: Created slice kubepods-besteffort-podaced8c72_6efa_4465_a594_dfeff21e3131.slice - libcontainer container kubepods-besteffort-podaced8c72_6efa_4465_a594_dfeff21e3131.slice. Jul 6 23:32:59.187823 kubelet[2620]: I0706 23:32:59.187760 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aced8c72-6efa-4465-a594-dfeff21e3131-lib-modules\") pod \"kube-proxy-59g8j\" (UID: \"aced8c72-6efa-4465-a594-dfeff21e3131\") " pod="kube-system/kube-proxy-59g8j" Jul 6 23:32:59.187823 kubelet[2620]: I0706 23:32:59.187823 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9pz\" (UniqueName: \"kubernetes.io/projected/aced8c72-6efa-4465-a594-dfeff21e3131-kube-api-access-sb9pz\") pod \"kube-proxy-59g8j\" (UID: \"aced8c72-6efa-4465-a594-dfeff21e3131\") " pod="kube-system/kube-proxy-59g8j" Jul 6 23:32:59.187978 kubelet[2620]: I0706 23:32:59.187846 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aced8c72-6efa-4465-a594-dfeff21e3131-kube-proxy\") pod \"kube-proxy-59g8j\" (UID: \"aced8c72-6efa-4465-a594-dfeff21e3131\") " pod="kube-system/kube-proxy-59g8j" Jul 6 23:32:59.187978 kubelet[2620]: I0706 23:32:59.187861 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aced8c72-6efa-4465-a594-dfeff21e3131-xtables-lock\") pod \"kube-proxy-59g8j\" (UID: 
\"aced8c72-6efa-4465-a594-dfeff21e3131\") " pod="kube-system/kube-proxy-59g8j" Jul 6 23:32:59.481721 kubelet[2620]: E0706 23:32:59.481605 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:59.482577 containerd[1519]: time="2025-07-06T23:32:59.482497693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59g8j,Uid:aced8c72-6efa-4465-a594-dfeff21e3131,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:59.497965 containerd[1519]: time="2025-07-06T23:32:59.497906878Z" level=info msg="connecting to shim 1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79" address="unix:///run/containerd/s/0634bcab818ae6c175b411c79a3d24c608c7f5e63192c1d50fd8951e1a706599" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:32:59.530988 systemd[1]: Started cri-containerd-1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79.scope - libcontainer container 1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79. 
Jul 6 23:32:59.572808 containerd[1519]: time="2025-07-06T23:32:59.572173072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59g8j,Uid:aced8c72-6efa-4465-a594-dfeff21e3131,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79\"" Jul 6 23:32:59.574735 kubelet[2620]: E0706 23:32:59.574701 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:59.579742 containerd[1519]: time="2025-07-06T23:32:59.579693066Z" level=info msg="CreateContainer within sandbox \"1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:32:59.592590 systemd[1]: Created slice kubepods-besteffort-pod716d8951_74ca_46a1_a83e_87f8e498ef68.slice - libcontainer container kubepods-besteffort-pod716d8951_74ca_46a1_a83e_87f8e498ef68.slice. 
Jul 6 23:32:59.595225 containerd[1519]: time="2025-07-06T23:32:59.595055786Z" level=info msg="Container b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:32:59.602946 containerd[1519]: time="2025-07-06T23:32:59.602896792Z" level=info msg="CreateContainer within sandbox \"1ca59de4e23233da2d76b56a0dcdf01847b68ee4f5957bcc5dccaf667e73db79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e\"" Jul 6 23:32:59.603922 containerd[1519]: time="2025-07-06T23:32:59.603891365Z" level=info msg="StartContainer for \"b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e\"" Jul 6 23:32:59.605564 containerd[1519]: time="2025-07-06T23:32:59.605528203Z" level=info msg="connecting to shim b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e" address="unix:///run/containerd/s/0634bcab818ae6c175b411c79a3d24c608c7f5e63192c1d50fd8951e1a706599" protocol=ttrpc version=3 Jul 6 23:32:59.625956 systemd[1]: Started cri-containerd-b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e.scope - libcontainer container b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e. 
Jul 6 23:32:59.666215 containerd[1519]: time="2025-07-06T23:32:59.666166408Z" level=info msg="StartContainer for \"b9411407060ed051fd55dc8d91e4b646fa052a481244baf2865831824f3de26e\" returns successfully" Jul 6 23:32:59.694861 kubelet[2620]: I0706 23:32:59.694817 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcqsr\" (UniqueName: \"kubernetes.io/projected/716d8951-74ca-46a1-a83e-87f8e498ef68-kube-api-access-lcqsr\") pod \"tigera-operator-747864d56d-zx94b\" (UID: \"716d8951-74ca-46a1-a83e-87f8e498ef68\") " pod="tigera-operator/tigera-operator-747864d56d-zx94b" Jul 6 23:32:59.694861 kubelet[2620]: I0706 23:32:59.694863 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/716d8951-74ca-46a1-a83e-87f8e498ef68-var-lib-calico\") pod \"tigera-operator-747864d56d-zx94b\" (UID: \"716d8951-74ca-46a1-a83e-87f8e498ef68\") " pod="tigera-operator/tigera-operator-747864d56d-zx94b" Jul 6 23:32:59.876528 kubelet[2620]: E0706 23:32:59.876138 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:59.896070 containerd[1519]: time="2025-07-06T23:32:59.896019134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-zx94b,Uid:716d8951-74ca-46a1-a83e-87f8e498ef68,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:32:59.912814 containerd[1519]: time="2025-07-06T23:32:59.912755551Z" level=info msg="connecting to shim e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d" address="unix:///run/containerd/s/f1088e8ebfc8c912f7f5a62631898ff93214cf783287049934b1c11eb71b5238" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:32:59.915421 kubelet[2620]: E0706 23:32:59.915381 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:59.915522 kubelet[2620]: E0706 23:32:59.915384 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:32:59.942495 kubelet[2620]: I0706 23:32:59.942442 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59g8j" podStartSLOduration=0.942424104 podStartE2EDuration="942.424104ms" podCreationTimestamp="2025-07-06 23:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:59.930095051 +0000 UTC m=+8.156984988" watchObservedRunningTime="2025-07-06 23:32:59.942424104 +0000 UTC m=+8.169313961" Jul 6 23:32:59.948000 systemd[1]: Started cri-containerd-e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d.scope - libcontainer container e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d. Jul 6 23:32:59.979784 containerd[1519]: time="2025-07-06T23:32:59.979735477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-zx94b,Uid:716d8951-74ca-46a1-a83e-87f8e498ef68,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d\"" Jul 6 23:32:59.981509 containerd[1519]: time="2025-07-06T23:32:59.981475570Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:33:00.922223 kubelet[2620]: E0706 23:33:00.922111 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:01.056161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465879648.mount: Deactivated successfully. 
Jul 6 23:33:01.717504 containerd[1519]: time="2025-07-06T23:33:01.717456317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.718411 containerd[1519]: time="2025-07-06T23:33:01.718186027Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:33:01.719045 containerd[1519]: time="2025-07-06T23:33:01.719008222Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.721073 containerd[1519]: time="2025-07-06T23:33:01.721041358Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.721793 containerd[1519]: time="2025-07-06T23:33:01.721747697Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.740234348s" Jul 6 23:33:01.721793 containerd[1519]: time="2025-07-06T23:33:01.721790558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:33:01.726671 containerd[1519]: time="2025-07-06T23:33:01.726638445Z" level=info msg="CreateContainer within sandbox \"e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:33:01.732603 containerd[1519]: time="2025-07-06T23:33:01.732562248Z" level=info msg="Container 
197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:01.735590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233757393.mount: Deactivated successfully. Jul 6 23:33:01.739244 containerd[1519]: time="2025-07-06T23:33:01.739202796Z" level=info msg="CreateContainer within sandbox \"e5d365299a5ef03c1efbc30ee85975bf7bf889b626ae9932d4b1de997c002e4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167\"" Jul 6 23:33:01.739762 containerd[1519]: time="2025-07-06T23:33:01.739722325Z" level=info msg="StartContainer for \"197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167\"" Jul 6 23:33:01.740909 containerd[1519]: time="2025-07-06T23:33:01.740878760Z" level=info msg="connecting to shim 197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167" address="unix:///run/containerd/s/f1088e8ebfc8c912f7f5a62631898ff93214cf783287049934b1c11eb71b5238" protocol=ttrpc version=3 Jul 6 23:33:01.764944 systemd[1]: Started cri-containerd-197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167.scope - libcontainer container 197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167. 
Jul 6 23:33:01.795403 containerd[1519]: time="2025-07-06T23:33:01.793446072Z" level=info msg="StartContainer for \"197eb129dee3304c3c9b4f7565f7b4a1ed1bc9900b2038fc877095356f76b167\" returns successfully" Jul 6 23:33:01.951844 kubelet[2620]: I0706 23:33:01.951781 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-zx94b" podStartSLOduration=1.208500759 podStartE2EDuration="2.951749778s" podCreationTimestamp="2025-07-06 23:32:59 +0000 UTC" firstStartedPulling="2025-07-06 23:32:59.981073274 +0000 UTC m=+8.207963171" lastFinishedPulling="2025-07-06 23:33:01.724322293 +0000 UTC m=+9.951212190" observedRunningTime="2025-07-06 23:33:01.951569332 +0000 UTC m=+10.178459229" watchObservedRunningTime="2025-07-06 23:33:01.951749778 +0000 UTC m=+10.178639675" Jul 6 23:33:02.804294 kubelet[2620]: E0706 23:33:02.804264 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:02.942022 kubelet[2620]: E0706 23:33:02.941994 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:06.851216 kubelet[2620]: E0706 23:33:06.851142 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:07.288684 sudo[1711]: pam_unix(sudo:session): session closed for user root Jul 6 23:33:07.293714 sshd[1710]: Connection closed by 10.0.0.1 port 53714 Jul 6 23:33:07.295303 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 6 23:33:07.298191 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:53714.service: Deactivated successfully. Jul 6 23:33:07.300897 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 6 23:33:07.301110 systemd[1]: session-7.scope: Consumed 6.655s CPU time, 227.6M memory peak. Jul 6 23:33:07.305309 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:33:07.308322 systemd-logind[1497]: Removed session 7. Jul 6 23:33:08.424888 update_engine[1499]: I20250706 23:33:08.424811 1499 update_attempter.cc:509] Updating boot flags... Jul 6 23:33:14.009476 systemd[1]: Created slice kubepods-besteffort-pod7d565648_8bd6_40cf_8e0f_fae4d96853d9.slice - libcontainer container kubepods-besteffort-pod7d565648_8bd6_40cf_8e0f_fae4d96853d9.slice. Jul 6 23:33:14.093613 kubelet[2620]: I0706 23:33:14.093561 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d565648-8bd6-40cf-8e0f-fae4d96853d9-typha-certs\") pod \"calico-typha-6cf69869d6-xkj79\" (UID: \"7d565648-8bd6-40cf-8e0f-fae4d96853d9\") " pod="calico-system/calico-typha-6cf69869d6-xkj79" Jul 6 23:33:14.094003 kubelet[2620]: I0706 23:33:14.093690 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmsf\" (UniqueName: \"kubernetes.io/projected/7d565648-8bd6-40cf-8e0f-fae4d96853d9-kube-api-access-7fmsf\") pod \"calico-typha-6cf69869d6-xkj79\" (UID: \"7d565648-8bd6-40cf-8e0f-fae4d96853d9\") " pod="calico-system/calico-typha-6cf69869d6-xkj79" Jul 6 23:33:14.094003 kubelet[2620]: I0706 23:33:14.093720 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d565648-8bd6-40cf-8e0f-fae4d96853d9-tigera-ca-bundle\") pod \"calico-typha-6cf69869d6-xkj79\" (UID: \"7d565648-8bd6-40cf-8e0f-fae4d96853d9\") " pod="calico-system/calico-typha-6cf69869d6-xkj79" Jul 6 23:33:14.263577 systemd[1]: Created slice kubepods-besteffort-pod64936778_ea69_40de_b8f7_b3b49a73226b.slice - libcontainer container 
kubepods-besteffort-pod64936778_ea69_40de_b8f7_b3b49a73226b.slice. Jul 6 23:33:14.296235 kubelet[2620]: I0706 23:33:14.296097 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64936778-ea69-40de-b8f7-b3b49a73226b-node-certs\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296363 kubelet[2620]: I0706 23:33:14.296255 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-xtables-lock\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296363 kubelet[2620]: I0706 23:33:14.296276 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-cni-net-dir\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296363 kubelet[2620]: I0706 23:33:14.296295 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64936778-ea69-40de-b8f7-b3b49a73226b-tigera-ca-bundle\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296363 kubelet[2620]: I0706 23:33:14.296311 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-var-lib-calico\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 
23:33:14.296363 kubelet[2620]: I0706 23:33:14.296327 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-cni-log-dir\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296487 kubelet[2620]: I0706 23:33:14.296350 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-flexvol-driver-host\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296487 kubelet[2620]: I0706 23:33:14.296371 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-lib-modules\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296487 kubelet[2620]: I0706 23:33:14.296386 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-policysync\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296487 kubelet[2620]: I0706 23:33:14.296404 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-var-run-calico\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296487 kubelet[2620]: I0706 23:33:14.296423 2620 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64936778-ea69-40de-b8f7-b3b49a73226b-cni-bin-dir\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.296588 kubelet[2620]: I0706 23:33:14.296439 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tk6\" (UniqueName: \"kubernetes.io/projected/64936778-ea69-40de-b8f7-b3b49a73226b-kube-api-access-j6tk6\") pod \"calico-node-zcg9k\" (UID: \"64936778-ea69-40de-b8f7-b3b49a73226b\") " pod="calico-system/calico-node-zcg9k" Jul 6 23:33:14.316590 kubelet[2620]: E0706 23:33:14.316550 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:14.319254 containerd[1519]: time="2025-07-06T23:33:14.318920709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cf69869d6-xkj79,Uid:7d565648-8bd6-40cf-8e0f-fae4d96853d9,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:14.366316 containerd[1519]: time="2025-07-06T23:33:14.366248148Z" level=info msg="connecting to shim d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1" address="unix:///run/containerd/s/9594e53c5431dd80c39789f301cb5c723a130efdb37e74fc3e98488c143d4472" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:14.398931 kubelet[2620]: E0706 23:33:14.398894 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.398931 kubelet[2620]: W0706 23:33:14.398921 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.399164 kubelet[2620]: 
E0706 23:33:14.398955 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.399278 kubelet[2620]: E0706 23:33:14.399261 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.399278 kubelet[2620]: W0706 23:33:14.399276 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.399438 kubelet[2620]: E0706 23:33:14.399410 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.399673 kubelet[2620]: E0706 23:33:14.399658 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.399713 kubelet[2620]: W0706 23:33:14.399673 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.399867 kubelet[2620]: E0706 23:33:14.399781 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.400012 kubelet[2620]: E0706 23:33:14.399998 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.400077 kubelet[2620]: W0706 23:33:14.400065 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.400130 kubelet[2620]: E0706 23:33:14.400120 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.400345 kubelet[2620]: E0706 23:33:14.400334 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.400675 kubelet[2620]: W0706 23:33:14.400417 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.400675 kubelet[2620]: E0706 23:33:14.400435 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.405883 kubelet[2620]: E0706 23:33:14.405846 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.405883 kubelet[2620]: W0706 23:33:14.405869 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.405976 kubelet[2620]: E0706 23:33:14.405888 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.417420 kubelet[2620]: E0706 23:33:14.417393 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.417420 kubelet[2620]: W0706 23:33:14.417414 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.417559 kubelet[2620]: E0706 23:33:14.417434 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.428972 systemd[1]: Started cri-containerd-d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1.scope - libcontainer container d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1. 
Jul 6 23:33:14.513047 containerd[1519]: time="2025-07-06T23:33:14.512998444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cf69869d6-xkj79,Uid:7d565648-8bd6-40cf-8e0f-fae4d96853d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1\"" Jul 6 23:33:14.516808 kubelet[2620]: E0706 23:33:14.516327 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:14.522981 containerd[1519]: time="2025-07-06T23:33:14.522927663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:33:14.567520 kubelet[2620]: E0706 23:33:14.567362 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9775h" podUID="1a099398-b7a4-48cd-a32f-542006582ad1" Jul 6 23:33:14.570176 containerd[1519]: time="2025-07-06T23:33:14.570094862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zcg9k,Uid:64936778-ea69-40de-b8f7-b3b49a73226b,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:14.578256 kubelet[2620]: E0706 23:33:14.578225 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.578256 kubelet[2620]: W0706 23:33:14.578250 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.578485 kubelet[2620]: E0706 23:33:14.578276 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.578601 kubelet[2620]: E0706 23:33:14.578582 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.578742 kubelet[2620]: W0706 23:33:14.578599 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.578742 kubelet[2620]: E0706 23:33:14.578681 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.579067 kubelet[2620]: E0706 23:33:14.579048 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.579067 kubelet[2620]: W0706 23:33:14.579065 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.579139 kubelet[2620]: E0706 23:33:14.579077 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.579543 kubelet[2620]: E0706 23:33:14.579521 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.579543 kubelet[2620]: W0706 23:33:14.579537 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.579629 kubelet[2620]: E0706 23:33:14.579548 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.580537 kubelet[2620]: E0706 23:33:14.580514 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.580537 kubelet[2620]: W0706 23:33:14.580531 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.580709 kubelet[2620]: E0706 23:33:14.580544 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.580997 kubelet[2620]: E0706 23:33:14.580979 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.580997 kubelet[2620]: W0706 23:33:14.580993 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.581087 kubelet[2620]: E0706 23:33:14.581004 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.581309 kubelet[2620]: E0706 23:33:14.581293 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.581309 kubelet[2620]: W0706 23:33:14.581307 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.581377 kubelet[2620]: E0706 23:33:14.581318 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.582281 kubelet[2620]: E0706 23:33:14.582258 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.582835 kubelet[2620]: W0706 23:33:14.582805 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.582835 kubelet[2620]: E0706 23:33:14.582833 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.583531 kubelet[2620]: E0706 23:33:14.583418 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.583531 kubelet[2620]: W0706 23:33:14.583435 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.583531 kubelet[2620]: E0706 23:33:14.583447 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.583982 kubelet[2620]: E0706 23:33:14.583898 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.583982 kubelet[2620]: W0706 23:33:14.583914 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.583982 kubelet[2620]: E0706 23:33:14.583925 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.584613 kubelet[2620]: E0706 23:33:14.584507 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.584613 kubelet[2620]: W0706 23:33:14.584524 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.584613 kubelet[2620]: E0706 23:33:14.584536 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.585563 kubelet[2620]: E0706 23:33:14.585541 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.585563 kubelet[2620]: W0706 23:33:14.585558 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.585804 kubelet[2620]: E0706 23:33:14.585571 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.586184 kubelet[2620]: E0706 23:33:14.586164 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.586184 kubelet[2620]: W0706 23:33:14.586181 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.586338 kubelet[2620]: E0706 23:33:14.586196 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.587140 kubelet[2620]: E0706 23:33:14.586852 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.587140 kubelet[2620]: W0706 23:33:14.586868 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.587140 kubelet[2620]: E0706 23:33:14.586880 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.587648 kubelet[2620]: E0706 23:33:14.587629 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.587648 kubelet[2620]: W0706 23:33:14.587645 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.587731 kubelet[2620]: E0706 23:33:14.587656 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.588656 kubelet[2620]: E0706 23:33:14.588255 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.588656 kubelet[2620]: W0706 23:33:14.588272 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.588656 kubelet[2620]: E0706 23:33:14.588283 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.590679 kubelet[2620]: E0706 23:33:14.590646 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.590679 kubelet[2620]: W0706 23:33:14.590670 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.590679 kubelet[2620]: E0706 23:33:14.590683 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:14.591793 kubelet[2620]: E0706 23:33:14.591711 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:14.591793 kubelet[2620]: W0706 23:33:14.591788 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:14.591867 kubelet[2620]: E0706 23:33:14.591804 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:14.594169 containerd[1519]: time="2025-07-06T23:33:14.594131694Z" level=info msg="connecting to shim 2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d" address="unix:///run/containerd/s/40646e4071dbe3ed638cd02100ebcf05ea4cd7e67b6cc0038e1f04aedd606fc5" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:14.601098 kubelet[2620]: I0706 23:33:14.600959 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdh2\" (UniqueName: \"kubernetes.io/projected/1a099398-b7a4-48cd-a32f-542006582ad1-kube-api-access-pjdh2\") pod \"csi-node-driver-9775h\" (UID: \"1a099398-b7a4-48cd-a32f-542006582ad1\") " pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:14.601702 kubelet[2620]: I0706 23:33:14.601481 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a099398-b7a4-48cd-a32f-542006582ad1-socket-dir\") pod \"csi-node-driver-9775h\" (UID: \"1a099398-b7a4-48cd-a32f-542006582ad1\") " pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:14.602852 kubelet[2620]: I0706 23:33:14.602820 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a099398-b7a4-48cd-a32f-542006582ad1-registration-dir\") pod \"csi-node-driver-9775h\" (UID: \"1a099398-b7a4-48cd-a32f-542006582ad1\") " pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:14.605170 kubelet[2620]: I0706 23:33:14.605075 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a099398-b7a4-48cd-a32f-542006582ad1-kubelet-dir\") pod \"csi-node-driver-9775h\" (UID: \"1a099398-b7a4-48cd-a32f-542006582ad1\") " pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:14.607492 kubelet[2620]: I0706 23:33:14.607371 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1a099398-b7a4-48cd-a32f-542006582ad1-varrun\") pod \"csi-node-driver-9775h\" (UID: \"1a099398-b7a4-48cd-a32f-542006582ad1\") " pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:14.638980 systemd[1]: Started cri-containerd-2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d.scope - libcontainer container 2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d.
Jul 6 23:33:14.678427 containerd[1519]: time="2025-07-06T23:33:14.678385596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zcg9k,Uid:64936778-ea69-40de-b8f7-b3b49a73226b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\"" Jul 6 23:33:15.482786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429550126.mount: Deactivated successfully. Jul 6 23:33:16.238871 containerd[1519]: time="2025-07-06T23:33:16.238821549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:16.239448 containerd[1519]: time="2025-07-06T23:33:16.239409762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:33:16.239941 containerd[1519]: time="2025-07-06T23:33:16.239918157Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:16.241869 containerd[1519]: time="2025-07-06T23:33:16.241834230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:16.242642 containerd[1519]: time="2025-07-06T23:33:16.242606525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.719642334s" Jul 6 23:33:16.242642 containerd[1519]: time="2025-07-06T23:33:16.242639573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:33:16.246342 containerd[1519]: time="2025-07-06T23:33:16.246314044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:33:16.262447 containerd[1519]: time="2025-07-06T23:33:16.262408404Z" level=info msg="CreateContainer within sandbox \"d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:33:16.273095 containerd[1519]: time="2025-07-06T23:33:16.273046450Z" level=info msg="Container 6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:16.275260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051740388.mount: Deactivated successfully. 
Jul 6 23:33:16.282968 containerd[1519]: time="2025-07-06T23:33:16.282916483Z" level=info msg="CreateContainer within sandbox \"d014b40d395db32f02fa52cacba349c14c78a618d37a6b1b295205fbeea693d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7\"" Jul 6 23:33:16.286750 containerd[1519]: time="2025-07-06T23:33:16.286695297Z" level=info msg="StartContainer for \"6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7\"" Jul 6 23:33:16.288250 containerd[1519]: time="2025-07-06T23:33:16.288207359Z" level=info msg="connecting to shim 6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7" address="unix:///run/containerd/s/9594e53c5431dd80c39789f301cb5c723a130efdb37e74fc3e98488c143d4472" protocol=ttrpc version=3 Jul 6 23:33:16.311004 systemd[1]: Started cri-containerd-6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7.scope - libcontainer container 6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7. 
Jul 6 23:33:16.351962 containerd[1519]: time="2025-07-06T23:33:16.351914369Z" level=info msg="StartContainer for \"6d70a84a7a2e117a17ebeae66706cbd015754d0fc7c7cafa9eeb6cdb368224c7\" returns successfully" Jul 6 23:33:16.867605 kubelet[2620]: E0706 23:33:16.867560 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9775h" podUID="1a099398-b7a4-48cd-a32f-542006582ad1" Jul 6 23:33:16.997794 kubelet[2620]: E0706 23:33:16.997720 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:17.024763 kubelet[2620]: E0706 23:33:17.024703 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.024763 kubelet[2620]: W0706 23:33:17.024728 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.024763 kubelet[2620]: E0706 23:33:17.024789 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.025614 kubelet[2620]: E0706 23:33:17.025566 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.025614 kubelet[2620]: W0706 23:33:17.025581 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.025614 kubelet[2620]: E0706 23:33:17.025638 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.026028 kubelet[2620]: E0706 23:33:17.025841 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.026028 kubelet[2620]: W0706 23:33:17.025851 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.026028 kubelet[2620]: E0706 23:33:17.025861 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.026307 kubelet[2620]: E0706 23:33:17.026112 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.026307 kubelet[2620]: W0706 23:33:17.026123 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.026307 kubelet[2620]: E0706 23:33:17.026133 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.026460 kubelet[2620]: E0706 23:33:17.026434 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.026460 kubelet[2620]: W0706 23:33:17.026444 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.026460 kubelet[2620]: E0706 23:33:17.026457 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.026815 kubelet[2620]: E0706 23:33:17.026580 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.026815 kubelet[2620]: W0706 23:33:17.026588 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.026815 kubelet[2620]: E0706 23:33:17.026595 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.026815 kubelet[2620]: E0706 23:33:17.026716 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.026815 kubelet[2620]: W0706 23:33:17.026723 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.026815 kubelet[2620]: E0706 23:33:17.026730 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.027275 kubelet[2620]: E0706 23:33:17.026870 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.027275 kubelet[2620]: W0706 23:33:17.026878 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.027275 kubelet[2620]: E0706 23:33:17.026910 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.027275 kubelet[2620]: E0706 23:33:17.027165 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.027275 kubelet[2620]: W0706 23:33:17.027175 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.027275 kubelet[2620]: E0706 23:33:17.027185 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.027498 kubelet[2620]: E0706 23:33:17.027356 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.027498 kubelet[2620]: W0706 23:33:17.027366 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.027498 kubelet[2620]: E0706 23:33:17.027375 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.027702 kubelet[2620]: E0706 23:33:17.027515 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.027702 kubelet[2620]: W0706 23:33:17.027522 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.027702 kubelet[2620]: E0706 23:33:17.027530 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.027702 kubelet[2620]: E0706 23:33:17.027717 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.027862 kubelet[2620]: W0706 23:33:17.027727 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.027862 kubelet[2620]: E0706 23:33:17.027736 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.028075 kubelet[2620]: E0706 23:33:17.028002 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.028075 kubelet[2620]: W0706 23:33:17.028015 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.028075 kubelet[2620]: E0706 23:33:17.028026 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028218 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.029018 kubelet[2620]: W0706 23:33:17.028228 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028238 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028400 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.029018 kubelet[2620]: W0706 23:33:17.028409 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028417 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028687 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.029018 kubelet[2620]: W0706 23:33:17.028697 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028706 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.029018 kubelet[2620]: E0706 23:33:17.028963 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.030103 kubelet[2620]: W0706 23:33:17.029003 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.030103 kubelet[2620]: E0706 23:33:17.029019 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.030103 kubelet[2620]: E0706 23:33:17.029472 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.030103 kubelet[2620]: W0706 23:33:17.029482 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.030103 kubelet[2620]: E0706 23:33:17.029894 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.031183 kubelet[2620]: E0706 23:33:17.030725 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.031183 kubelet[2620]: W0706 23:33:17.030922 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.031183 kubelet[2620]: E0706 23:33:17.030949 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.036614 kubelet[2620]: E0706 23:33:17.036447 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.036614 kubelet[2620]: W0706 23:33:17.036485 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.036614 kubelet[2620]: E0706 23:33:17.036515 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.036807 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.037450 kubelet[2620]: W0706 23:33:17.036838 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.036896 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.036972 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.037450 kubelet[2620]: W0706 23:33:17.036990 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.037064 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.037121 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.037450 kubelet[2620]: W0706 23:33:17.037129 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.037450 kubelet[2620]: E0706 23:33:17.037273 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.037450 kubelet[2620]: W0706 23:33:17.037280 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.038165 kubelet[2620]: E0706 23:33:17.037238 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.038165 kubelet[2620]: E0706 23:33:17.037294 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.038936 kubelet[2620]: E0706 23:33:17.038901 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.038936 kubelet[2620]: W0706 23:33:17.038923 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.039163 kubelet[2620]: E0706 23:33:17.038945 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.039576 kubelet[2620]: E0706 23:33:17.039168 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.039576 kubelet[2620]: W0706 23:33:17.039178 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.039576 kubelet[2620]: E0706 23:33:17.039218 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.039576 kubelet[2620]: E0706 23:33:17.039397 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.039576 kubelet[2620]: W0706 23:33:17.039406 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.039576 kubelet[2620]: E0706 23:33:17.039457 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.039576 kubelet[2620]: E0706 23:33:17.039558 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.039576 kubelet[2620]: W0706 23:33:17.039566 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.040293 kubelet[2620]: E0706 23:33:17.039652 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.040293 kubelet[2620]: E0706 23:33:17.039718 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.040293 kubelet[2620]: W0706 23:33:17.039753 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.040293 kubelet[2620]: E0706 23:33:17.039789 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.040293 kubelet[2620]: E0706 23:33:17.040217 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.040293 kubelet[2620]: W0706 23:33:17.040232 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.040293 kubelet[2620]: E0706 23:33:17.040245 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.040898 kubelet[2620]: E0706 23:33:17.040497 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.040898 kubelet[2620]: W0706 23:33:17.040508 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.040898 kubelet[2620]: E0706 23:33:17.040526 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.042913 kubelet[2620]: E0706 23:33:17.042879 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.042913 kubelet[2620]: W0706 23:33:17.042902 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.042913 kubelet[2620]: E0706 23:33:17.042924 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:33:17.043143 kubelet[2620]: E0706 23:33:17.043119 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:33:17.043143 kubelet[2620]: W0706 23:33:17.043135 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:33:17.043143 kubelet[2620]: E0706 23:33:17.043145 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:33:17.264247 containerd[1519]: time="2025-07-06T23:33:17.264186514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:17.264807 containerd[1519]: time="2025-07-06T23:33:17.264771121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:33:17.265457 containerd[1519]: time="2025-07-06T23:33:17.265424022Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:17.268063 containerd[1519]: time="2025-07-06T23:33:17.267995539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:17.268699 containerd[1519]: time="2025-07-06T23:33:17.268669164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.022317833s" Jul 6 23:33:17.268823 containerd[1519]: time="2025-07-06T23:33:17.268696971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:33:17.272096 containerd[1519]: time="2025-07-06T23:33:17.272055378Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:33:17.288408 containerd[1519]: time="2025-07-06T23:33:17.287292916Z" level=info msg="Container b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:17.294155 containerd[1519]: time="2025-07-06T23:33:17.294105431Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\"" Jul 6 23:33:17.294890 containerd[1519]: time="2025-07-06T23:33:17.294865115Z" level=info msg="StartContainer for \"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\"" Jul 6 23:33:17.296677 containerd[1519]: time="2025-07-06T23:33:17.296633978Z" level=info msg="connecting to shim b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d" address="unix:///run/containerd/s/40646e4071dbe3ed638cd02100ebcf05ea4cd7e67b6cc0038e1f04aedd606fc5" protocol=ttrpc version=3 Jul 6 23:33:17.321010 systemd[1]: Started cri-containerd-b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d.scope - libcontainer container b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d. 
Jul 6 23:33:17.368476 containerd[1519]: time="2025-07-06T23:33:17.368428039Z" level=info msg="StartContainer for \"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\" returns successfully" Jul 6 23:33:17.388514 systemd[1]: cri-containerd-b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d.scope: Deactivated successfully. Jul 6 23:33:17.413344 containerd[1519]: time="2025-07-06T23:33:17.413281149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\" id:\"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\" pid:3323 exited_at:{seconds:1751844797 nanos:401566973}" Jul 6 23:33:17.414142 containerd[1519]: time="2025-07-06T23:33:17.414085923Z" level=info msg="received exit event container_id:\"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\" id:\"b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d\" pid:3323 exited_at:{seconds:1751844797 nanos:401566973}" Jul 6 23:33:17.455211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b16647e064b73cf1e54a677e567d353f9cda6ef8d0891eb7367db3746077a64d-rootfs.mount: Deactivated successfully. 
Jul 6 23:33:18.005561 containerd[1519]: time="2025-07-06T23:33:18.004513059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:33:18.005961 kubelet[2620]: I0706 23:33:18.005896 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:33:18.006459 kubelet[2620]: E0706 23:33:18.006318 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:18.023796 kubelet[2620]: I0706 23:33:18.023690 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cf69869d6-xkj79" podStartSLOduration=3.297411406 podStartE2EDuration="5.023669991s" podCreationTimestamp="2025-07-06 23:33:13 +0000 UTC" firstStartedPulling="2025-07-06 23:33:14.519470847 +0000 UTC m=+22.746360704" lastFinishedPulling="2025-07-06 23:33:16.245729392 +0000 UTC m=+24.472619289" observedRunningTime="2025-07-06 23:33:17.022688677 +0000 UTC m=+25.249578574" watchObservedRunningTime="2025-07-06 23:33:18.023669991 +0000 UTC m=+26.250559848" Jul 6 23:33:18.867945 kubelet[2620]: E0706 23:33:18.867834 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9775h" podUID="1a099398-b7a4-48cd-a32f-542006582ad1" Jul 6 23:33:20.754444 containerd[1519]: time="2025-07-06T23:33:20.754397590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:20.755480 containerd[1519]: time="2025-07-06T23:33:20.755443069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:33:20.756840 containerd[1519]: 
time="2025-07-06T23:33:20.756811050Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:20.760995 containerd[1519]: time="2025-07-06T23:33:20.760909112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:20.761356 containerd[1519]: time="2025-07-06T23:33:20.761332873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.756702231s" Jul 6 23:33:20.761413 containerd[1519]: time="2025-07-06T23:33:20.761363599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:33:20.764153 containerd[1519]: time="2025-07-06T23:33:20.764120525Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:33:20.781031 containerd[1519]: time="2025-07-06T23:33:20.780987904Z" level=info msg="Container d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:20.782261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946903458.mount: Deactivated successfully. 
Jul 6 23:33:20.796782 containerd[1519]: time="2025-07-06T23:33:20.796706263Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\"" Jul 6 23:33:20.799846 containerd[1519]: time="2025-07-06T23:33:20.799009622Z" level=info msg="StartContainer for \"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\"" Jul 6 23:33:20.801529 containerd[1519]: time="2025-07-06T23:33:20.801451088Z" level=info msg="connecting to shim d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4" address="unix:///run/containerd/s/40646e4071dbe3ed638cd02100ebcf05ea4cd7e67b6cc0038e1f04aedd606fc5" protocol=ttrpc version=3 Jul 6 23:33:20.830974 systemd[1]: Started cri-containerd-d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4.scope - libcontainer container d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4. Jul 6 23:33:20.873820 containerd[1519]: time="2025-07-06T23:33:20.871673847Z" level=info msg="StartContainer for \"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\" returns successfully" Jul 6 23:33:20.877024 kubelet[2620]: E0706 23:33:20.876983 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9775h" podUID="1a099398-b7a4-48cd-a32f-542006582ad1" Jul 6 23:33:21.553994 systemd[1]: cri-containerd-d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4.scope: Deactivated successfully. Jul 6 23:33:21.554263 systemd[1]: cri-containerd-d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4.scope: Consumed 521ms CPU time, 176.2M memory peak, 3M read from disk, 165.8M written to disk. 
Jul 6 23:33:21.555750 containerd[1519]: time="2025-07-06T23:33:21.555695160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\" id:\"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\" pid:3385 exited_at:{seconds:1751844801 nanos:555377462}" Jul 6 23:33:21.566402 containerd[1519]: time="2025-07-06T23:33:21.566339432Z" level=info msg="received exit event container_id:\"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\" id:\"d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4\" pid:3385 exited_at:{seconds:1751844801 nanos:555377462}" Jul 6 23:33:21.583616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d90707d5d1724445702f5255506e024d3cc42fe260642314d5a9439816e420d4-rootfs.mount: Deactivated successfully. Jul 6 23:33:21.608806 kubelet[2620]: I0706 23:33:21.608496 2620 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:33:21.663479 kubelet[2620]: I0706 23:33:21.663442 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cfjp\" (UniqueName: \"kubernetes.io/projected/b3ba864b-287e-42da-848e-5b46fe245109-kube-api-access-5cfjp\") pod \"coredns-668d6bf9bc-vpq69\" (UID: \"b3ba864b-287e-42da-848e-5b46fe245109\") " pod="kube-system/coredns-668d6bf9bc-vpq69" Jul 6 23:33:21.663479 kubelet[2620]: I0706 23:33:21.663484 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ba864b-287e-42da-848e-5b46fe245109-config-volume\") pod \"coredns-668d6bf9bc-vpq69\" (UID: \"b3ba864b-287e-42da-848e-5b46fe245109\") " pod="kube-system/coredns-668d6bf9bc-vpq69" Jul 6 23:33:21.677258 systemd[1]: Created slice kubepods-burstable-podb3ba864b_287e_42da_848e_5b46fe245109.slice - libcontainer container 
kubepods-burstable-podb3ba864b_287e_42da_848e_5b46fe245109.slice. Jul 6 23:33:21.685456 systemd[1]: Created slice kubepods-burstable-podc2b5f119_3141_44a2_a801_4f52bfba3472.slice - libcontainer container kubepods-burstable-podc2b5f119_3141_44a2_a801_4f52bfba3472.slice. Jul 6 23:33:21.710215 systemd[1]: Created slice kubepods-besteffort-podd7ca1c0d_535a_46f3_a9b6_fac4deeb8a0d.slice - libcontainer container kubepods-besteffort-podd7ca1c0d_535a_46f3_a9b6_fac4deeb8a0d.slice. Jul 6 23:33:21.716454 systemd[1]: Created slice kubepods-besteffort-podbfd30fe9_b080_4b45_a070_84043baaf2eb.slice - libcontainer container kubepods-besteffort-podbfd30fe9_b080_4b45_a070_84043baaf2eb.slice. Jul 6 23:33:21.722434 systemd[1]: Created slice kubepods-besteffort-pod333fa840_0857_418b_8937_d4a8c6231197.slice - libcontainer container kubepods-besteffort-pod333fa840_0857_418b_8937_d4a8c6231197.slice. Jul 6 23:33:21.728800 systemd[1]: Created slice kubepods-besteffort-pod74437c90_fa40_4d77_94d3_7c329ae02598.slice - libcontainer container kubepods-besteffort-pod74437c90_fa40_4d77_94d3_7c329ae02598.slice. Jul 6 23:33:21.734836 systemd[1]: Created slice kubepods-besteffort-pod9cf84154_cc8e_4d07_859d_124b371b7ae2.slice - libcontainer container kubepods-besteffort-pod9cf84154_cc8e_4d07_859d_124b371b7ae2.slice. Jul 6 23:33:21.741744 systemd[1]: Created slice kubepods-besteffort-pod8e53fb0d_7e4d_41ce_af63_c768b7e8e895.slice - libcontainer container kubepods-besteffort-pod8e53fb0d_7e4d_41ce_af63_c768b7e8e895.slice. 
Jul 6 23:33:21.865094 kubelet[2620]: I0706 23:33:21.864969 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53fb0d-7e4d-41ce-af63-c768b7e8e895-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lw88t\" (UID: \"8e53fb0d-7e4d-41ce-af63-c768b7e8e895\") " pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:21.865094 kubelet[2620]: I0706 23:33:21.865022 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvgp\" (UniqueName: \"kubernetes.io/projected/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-kube-api-access-rqvgp\") pod \"whisker-76dc787959-b5684\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " pod="calico-system/whisker-76dc787959-b5684" Jul 6 23:33:21.865094 kubelet[2620]: I0706 23:33:21.865051 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbvc\" (UniqueName: \"kubernetes.io/projected/bfd30fe9-b080-4b45-a070-84043baaf2eb-kube-api-access-4sbvc\") pod \"calico-kube-controllers-66c4d7674c-qx6wn\" (UID: \"bfd30fe9-b080-4b45-a070-84043baaf2eb\") " pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" Jul 6 23:33:21.865094 kubelet[2620]: I0706 23:33:21.865073 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkjw\" (UniqueName: \"kubernetes.io/projected/74437c90-fa40-4d77-94d3-7c329ae02598-kube-api-access-pzkjw\") pod \"calico-apiserver-567577c55-7x44x\" (UID: \"74437c90-fa40-4d77-94d3-7c329ae02598\") " pod="calico-apiserver/calico-apiserver-567577c55-7x44x" Jul 6 23:33:21.865288 kubelet[2620]: I0706 23:33:21.865159 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-ca-bundle\") pod 
\"whisker-76dc787959-b5684\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " pod="calico-system/whisker-76dc787959-b5684" Jul 6 23:33:21.865288 kubelet[2620]: I0706 23:33:21.865205 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9qd\" (UniqueName: \"kubernetes.io/projected/333fa840-0857-418b-8937-d4a8c6231197-kube-api-access-pj9qd\") pod \"calico-apiserver-567577c55-p8k4w\" (UID: \"333fa840-0857-418b-8937-d4a8c6231197\") " pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" Jul 6 23:33:21.865288 kubelet[2620]: I0706 23:33:21.865233 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8e53fb0d-7e4d-41ce-af63-c768b7e8e895-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lw88t\" (UID: \"8e53fb0d-7e4d-41ce-af63-c768b7e8e895\") " pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:21.865288 kubelet[2620]: I0706 23:33:21.865251 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfd30fe9-b080-4b45-a070-84043baaf2eb-tigera-ca-bundle\") pod \"calico-kube-controllers-66c4d7674c-qx6wn\" (UID: \"bfd30fe9-b080-4b45-a070-84043baaf2eb\") " pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" Jul 6 23:33:21.865288 kubelet[2620]: I0706 23:33:21.865275 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9cf84154-cc8e-4d07-859d-124b371b7ae2-calico-apiserver-certs\") pod \"calico-apiserver-8d55b7c5-m6wtx\" (UID: \"9cf84154-cc8e-4d07-859d-124b371b7ae2\") " pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" Jul 6 23:33:21.865408 kubelet[2620]: I0706 23:33:21.865293 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q98td\" (UniqueName: \"kubernetes.io/projected/9cf84154-cc8e-4d07-859d-124b371b7ae2-kube-api-access-q98td\") pod \"calico-apiserver-8d55b7c5-m6wtx\" (UID: \"9cf84154-cc8e-4d07-859d-124b371b7ae2\") " pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" Jul 6 23:33:21.865408 kubelet[2620]: I0706 23:33:21.865310 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2b5f119-3141-44a2-a801-4f52bfba3472-config-volume\") pod \"coredns-668d6bf9bc-85k8q\" (UID: \"c2b5f119-3141-44a2-a801-4f52bfba3472\") " pod="kube-system/coredns-668d6bf9bc-85k8q" Jul 6 23:33:21.865408 kubelet[2620]: I0706 23:33:21.865327 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74437c90-fa40-4d77-94d3-7c329ae02598-calico-apiserver-certs\") pod \"calico-apiserver-567577c55-7x44x\" (UID: \"74437c90-fa40-4d77-94d3-7c329ae02598\") " pod="calico-apiserver/calico-apiserver-567577c55-7x44x" Jul 6 23:33:21.865408 kubelet[2620]: I0706 23:33:21.865352 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/333fa840-0857-418b-8937-d4a8c6231197-calico-apiserver-certs\") pod \"calico-apiserver-567577c55-p8k4w\" (UID: \"333fa840-0857-418b-8937-d4a8c6231197\") " pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" Jul 6 23:33:21.865408 kubelet[2620]: I0706 23:33:21.865368 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e53fb0d-7e4d-41ce-af63-c768b7e8e895-config\") pod \"goldmane-768f4c5c69-lw88t\" (UID: \"8e53fb0d-7e4d-41ce-af63-c768b7e8e895\") " pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:21.865529 kubelet[2620]: I0706 23:33:21.865384 2620 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-backend-key-pair\") pod \"whisker-76dc787959-b5684\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " pod="calico-system/whisker-76dc787959-b5684" Jul 6 23:33:21.865529 kubelet[2620]: I0706 23:33:21.865406 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp28m\" (UniqueName: \"kubernetes.io/projected/8e53fb0d-7e4d-41ce-af63-c768b7e8e895-kube-api-access-vp28m\") pod \"goldmane-768f4c5c69-lw88t\" (UID: \"8e53fb0d-7e4d-41ce-af63-c768b7e8e895\") " pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:21.865529 kubelet[2620]: I0706 23:33:21.865439 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql7t\" (UniqueName: \"kubernetes.io/projected/c2b5f119-3141-44a2-a801-4f52bfba3472-kube-api-access-vql7t\") pod \"coredns-668d6bf9bc-85k8q\" (UID: \"c2b5f119-3141-44a2-a801-4f52bfba3472\") " pod="kube-system/coredns-668d6bf9bc-85k8q" Jul 6 23:33:21.991477 kubelet[2620]: E0706 23:33:21.990038 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:21.996359 containerd[1519]: time="2025-07-06T23:33:21.996315648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vpq69,Uid:b3ba864b-287e-42da-848e-5b46fe245109,Namespace:kube-system,Attempt:0,}" Jul 6 23:33:22.005308 kubelet[2620]: E0706 23:33:22.005258 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:22.009263 containerd[1519]: time="2025-07-06T23:33:22.006856177Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-85k8q,Uid:c2b5f119-3141-44a2-a801-4f52bfba3472,Namespace:kube-system,Attempt:0,}" Jul 6 23:33:22.016316 containerd[1519]: time="2025-07-06T23:33:22.016045557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76dc787959-b5684,Uid:d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:22.029148 containerd[1519]: time="2025-07-06T23:33:22.028897542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c4d7674c-qx6wn,Uid:bfd30fe9-b080-4b45-a070-84043baaf2eb,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:22.043340 containerd[1519]: time="2025-07-06T23:33:22.043297361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-m6wtx,Uid:9cf84154-cc8e-4d07-859d-124b371b7ae2,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:22.053134 containerd[1519]: time="2025-07-06T23:33:22.043471431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-p8k4w,Uid:333fa840-0857-418b-8937-d4a8c6231197,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:22.053134 containerd[1519]: time="2025-07-06T23:33:22.044320461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-7x44x,Uid:74437c90-fa40-4d77-94d3-7c329ae02598,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:22.053134 containerd[1519]: time="2025-07-06T23:33:22.052251259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lw88t,Uid:8e53fb0d-7e4d-41ce-af63-c768b7e8e895,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:22.061798 containerd[1519]: time="2025-07-06T23:33:22.054192561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:33:22.779991 containerd[1519]: time="2025-07-06T23:33:22.779927084Z" level=error msg="Failed to destroy network for sandbox \"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.788497 containerd[1519]: time="2025-07-06T23:33:22.788319403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-m6wtx,Uid:9cf84154-cc8e-4d07-859d-124b371b7ae2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.790318 kubelet[2620]: E0706 23:33:22.790242 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.793204 containerd[1519]: time="2025-07-06T23:33:22.792784630Z" level=error msg="Failed to destroy network for sandbox \"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.794462 kubelet[2620]: E0706 23:33:22.794163 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" Jul 6 23:33:22.796036 kubelet[2620]: E0706 23:33:22.795980 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" Jul 6 23:33:22.796167 kubelet[2620]: E0706 23:33:22.796104 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d55b7c5-m6wtx_calico-apiserver(9cf84154-cc8e-4d07-859d-124b371b7ae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d55b7c5-m6wtx_calico-apiserver(9cf84154-cc8e-4d07-859d-124b371b7ae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f278a299da398bf2976bb1cdd495ff446a062e48f6588d89492c26c8e22b005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" podUID="9cf84154-cc8e-4d07-859d-124b371b7ae2" Jul 6 23:33:22.802296 containerd[1519]: time="2025-07-06T23:33:22.802245818Z" level=error msg="Failed to destroy network for sandbox \"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.802961 containerd[1519]: time="2025-07-06T23:33:22.802919896Z" level=error msg="Failed to destroy network for sandbox 
\"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.803098 containerd[1519]: time="2025-07-06T23:33:22.802920016Z" level=error msg="Failed to destroy network for sandbox \"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.809028 containerd[1519]: time="2025-07-06T23:33:22.808978804Z" level=error msg="Failed to destroy network for sandbox \"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.809593 containerd[1519]: time="2025-07-06T23:33:22.809453408Z" level=error msg="Failed to destroy network for sandbox \"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.811334 containerd[1519]: time="2025-07-06T23:33:22.811292412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76dc787959-b5684,Uid:d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 6 23:33:22.812972 kubelet[2620]: E0706 23:33:22.812918 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.813070 kubelet[2620]: E0706 23:33:22.812991 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76dc787959-b5684" Jul 6 23:33:22.813070 kubelet[2620]: E0706 23:33:22.813013 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76dc787959-b5684" Jul 6 23:33:22.813143 kubelet[2620]: E0706 23:33:22.813055 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76dc787959-b5684_calico-system(d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76dc787959-b5684_calico-system(d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0652a5669b27dae08b265c8c5459de2989089d2c949dd40f31deeaec8cc6376a\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76dc787959-b5684" podUID="d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d" Jul 6 23:33:22.814377 containerd[1519]: time="2025-07-06T23:33:22.814319826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vpq69,Uid:b3ba864b-287e-42da-848e-5b46fe245109,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.814914 kubelet[2620]: E0706 23:33:22.814867 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.814914 kubelet[2620]: E0706 23:33:22.814939 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vpq69" Jul 6 23:33:22.814914 kubelet[2620]: E0706 23:33:22.814960 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vpq69" Jul 6 23:33:22.815311 kubelet[2620]: E0706 23:33:22.815001 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vpq69_kube-system(b3ba864b-287e-42da-848e-5b46fe245109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vpq69_kube-system(b3ba864b-287e-42da-848e-5b46fe245109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c31c0c6aafd81d66019a23cdacf1be8b3291c22f89b7ae2e081206fb0ee281d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vpq69" podUID="b3ba864b-287e-42da-848e-5b46fe245109" Jul 6 23:33:22.817216 containerd[1519]: time="2025-07-06T23:33:22.817109077Z" level=error msg="Failed to destroy network for sandbox \"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.818612 containerd[1519]: time="2025-07-06T23:33:22.818567615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-p8k4w,Uid:333fa840-0857-418b-8937-d4a8c6231197,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.819127 kubelet[2620]: E0706 23:33:22.819071 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.819222 kubelet[2620]: E0706 23:33:22.819159 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" Jul 6 23:33:22.819222 kubelet[2620]: E0706 23:33:22.819179 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" Jul 6 23:33:22.819284 kubelet[2620]: E0706 23:33:22.819228 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567577c55-p8k4w_calico-apiserver(333fa840-0857-418b-8937-d4a8c6231197)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567577c55-p8k4w_calico-apiserver(333fa840-0857-418b-8937-d4a8c6231197)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2725094bec218edbafac8d9737a55fd634964e05c5d8de8fddca8878cf13f637\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" podUID="333fa840-0857-418b-8937-d4a8c6231197" Jul 6 23:33:22.827470 containerd[1519]: time="2025-07-06T23:33:22.827300314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lw88t,Uid:8e53fb0d-7e4d-41ce-af63-c768b7e8e895,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.827939 kubelet[2620]: E0706 23:33:22.827893 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.828055 kubelet[2620]: E0706 23:33:22.827953 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:22.828055 kubelet[2620]: E0706 23:33:22.827973 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lw88t" Jul 6 23:33:22.828055 kubelet[2620]: E0706 23:33:22.828014 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lw88t_calico-system(8e53fb0d-7e4d-41ce-af63-c768b7e8e895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lw88t_calico-system(8e53fb0d-7e4d-41ce-af63-c768b7e8e895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fb82be8da1f82c574285fd5d00fadafaa43878cadca1951d7e31936ac71e164\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lw88t" podUID="8e53fb0d-7e4d-41ce-af63-c768b7e8e895" Jul 6 23:33:22.829215 containerd[1519]: time="2025-07-06T23:33:22.829123595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-85k8q,Uid:c2b5f119-3141-44a2-a801-4f52bfba3472,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.829456 kubelet[2620]: E0706 23:33:22.829403 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.829507 kubelet[2620]: E0706 23:33:22.829472 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-85k8q" Jul 6 23:33:22.829531 kubelet[2620]: E0706 23:33:22.829492 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-85k8q" Jul 6 23:33:22.829612 kubelet[2620]: E0706 23:33:22.829546 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-85k8q_kube-system(c2b5f119-3141-44a2-a801-4f52bfba3472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-85k8q_kube-system(c2b5f119-3141-44a2-a801-4f52bfba3472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87d52b37b785ce5e02c8e04f7e842d482205717c5a3340148fb649db0668a594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-85k8q" 
podUID="c2b5f119-3141-44a2-a801-4f52bfba3472" Jul 6 23:33:22.830039 containerd[1519]: time="2025-07-06T23:33:22.829947620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-7x44x,Uid:74437c90-fa40-4d77-94d3-7c329ae02598,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.830240 kubelet[2620]: E0706 23:33:22.830213 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.830424 kubelet[2620]: E0706 23:33:22.830327 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567577c55-7x44x" Jul 6 23:33:22.830424 kubelet[2620]: E0706 23:33:22.830350 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567577c55-7x44x" Jul 6 23:33:22.830424 kubelet[2620]: E0706 23:33:22.830387 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567577c55-7x44x_calico-apiserver(74437c90-fa40-4d77-94d3-7c329ae02598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567577c55-7x44x_calico-apiserver(74437c90-fa40-4d77-94d3-7c329ae02598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7904098dbf529dc7bc909b856b386ccd9626293d9c9d84af66af7b8d0a5220\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567577c55-7x44x" podUID="74437c90-fa40-4d77-94d3-7c329ae02598" Jul 6 23:33:22.832289 containerd[1519]: time="2025-07-06T23:33:22.831959855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c4d7674c-qx6wn,Uid:bfd30fe9-b080-4b45-a070-84043baaf2eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.832617 kubelet[2620]: E0706 23:33:22.832449 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.832617 kubelet[2620]: E0706 
23:33:22.832519 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" Jul 6 23:33:22.832617 kubelet[2620]: E0706 23:33:22.832536 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" Jul 6 23:33:22.832806 kubelet[2620]: E0706 23:33:22.832580 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66c4d7674c-qx6wn_calico-system(bfd30fe9-b080-4b45-a070-84043baaf2eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66c4d7674c-qx6wn_calico-system(bfd30fe9-b080-4b45-a070-84043baaf2eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea74d9589446401487eb803a660c1eecc82a7296b3c6e4f565b75300f514ef9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" podUID="bfd30fe9-b080-4b45-a070-84043baaf2eb" Jul 6 23:33:22.873016 systemd[1]: Created slice kubepods-besteffort-pod1a099398_b7a4_48cd_a32f_542006582ad1.slice - libcontainer container 
kubepods-besteffort-pod1a099398_b7a4_48cd_a32f_542006582ad1.slice. Jul 6 23:33:22.875998 containerd[1519]: time="2025-07-06T23:33:22.875951609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9775h,Uid:1a099398-b7a4-48cd-a32f-542006582ad1,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:22.930058 containerd[1519]: time="2025-07-06T23:33:22.929987694Z" level=error msg="Failed to destroy network for sandbox \"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.932891 containerd[1519]: time="2025-07-06T23:33:22.932836276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9775h,Uid:1a099398-b7a4-48cd-a32f-542006582ad1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.933165 kubelet[2620]: E0706 23:33:22.933102 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:33:22.933238 kubelet[2620]: E0706 23:33:22.933169 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:22.933238 kubelet[2620]: E0706 23:33:22.933194 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9775h" Jul 6 23:33:22.933310 kubelet[2620]: E0706 23:33:22.933238 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9775h_calico-system(1a099398-b7a4-48cd-a32f-542006582ad1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9775h_calico-system(1a099398-b7a4-48cd-a32f-542006582ad1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ecfd5f839e09629adeea11e7572a557759ed3db74737646ffe6376899e5ced\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9775h" podUID="1a099398-b7a4-48cd-a32f-542006582ad1" Jul 6 23:33:26.098164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152672892.mount: Deactivated successfully. 
Jul 6 23:33:26.387858 containerd[1519]: time="2025-07-06T23:33:26.387606185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:26.393257 containerd[1519]: time="2025-07-06T23:33:26.393220319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:33:26.399059 containerd[1519]: time="2025-07-06T23:33:26.399004600Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:26.408256 containerd[1519]: time="2025-07-06T23:33:26.408027654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:26.409279 containerd[1519]: time="2025-07-06T23:33:26.409248200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.354958902s" Jul 6 23:33:26.409469 containerd[1519]: time="2025-07-06T23:33:26.409375539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:33:26.420683 containerd[1519]: time="2025-07-06T23:33:26.420500433Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:33:26.467840 containerd[1519]: time="2025-07-06T23:33:26.466859931Z" level=info msg="Container 
0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:26.566501 containerd[1519]: time="2025-07-06T23:33:26.566444372Z" level=info msg="CreateContainer within sandbox \"2609a5272dc4d705abf635538c98a2cc3ef6caf089485e78b8f3b7dfbe466c3d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\"" Jul 6 23:33:26.567176 containerd[1519]: time="2025-07-06T23:33:26.567147159Z" level=info msg="StartContainer for \"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\"" Jul 6 23:33:26.568650 containerd[1519]: time="2025-07-06T23:33:26.568620023Z" level=info msg="connecting to shim 0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b" address="unix:///run/containerd/s/40646e4071dbe3ed638cd02100ebcf05ea4cd7e67b6cc0038e1f04aedd606fc5" protocol=ttrpc version=3 Jul 6 23:33:26.606991 systemd[1]: Started cri-containerd-0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b.scope - libcontainer container 0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b. Jul 6 23:33:26.767209 containerd[1519]: time="2025-07-06T23:33:26.767076997Z" level=info msg="StartContainer for \"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\" returns successfully" Jul 6 23:33:26.986641 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:33:26.986831 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 6 23:33:27.102033 kubelet[2620]: I0706 23:33:27.101627 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zcg9k" podStartSLOduration=1.371451391 podStartE2EDuration="13.101606534s" podCreationTimestamp="2025-07-06 23:33:14 +0000 UTC" firstStartedPulling="2025-07-06 23:33:14.679992554 +0000 UTC m=+22.906882451" lastFinishedPulling="2025-07-06 23:33:26.410147696 +0000 UTC m=+34.637037594" observedRunningTime="2025-07-06 23:33:27.100531136 +0000 UTC m=+35.327420993" watchObservedRunningTime="2025-07-06 23:33:27.101606534 +0000 UTC m=+35.328496391" Jul 6 23:33:27.226040 kubelet[2620]: I0706 23:33:27.225965 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvgp\" (UniqueName: \"kubernetes.io/projected/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-kube-api-access-rqvgp\") pod \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " Jul 6 23:33:27.226965 kubelet[2620]: I0706 23:33:27.226227 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-backend-key-pair\") pod \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " Jul 6 23:33:27.226965 kubelet[2620]: I0706 23:33:27.226909 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-ca-bundle\") pod \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\" (UID: \"d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d\") " Jul 6 23:33:27.234835 kubelet[2620]: I0706 23:33:27.234193 2620 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d" 
(UID: "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:33:27.238877 systemd[1]: var-lib-kubelet-pods-d7ca1c0d\x2d535a\x2d46f3\x2da9b6\x2dfac4deeb8a0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqvgp.mount: Deactivated successfully. Jul 6 23:33:27.239002 systemd[1]: var-lib-kubelet-pods-d7ca1c0d\x2d535a\x2d46f3\x2da9b6\x2dfac4deeb8a0d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:33:27.240216 kubelet[2620]: I0706 23:33:27.238984 2620 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-kube-api-access-rqvgp" (OuterVolumeSpecName: "kube-api-access-rqvgp") pod "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d" (UID: "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d"). InnerVolumeSpecName "kube-api-access-rqvgp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:33:27.240216 kubelet[2620]: I0706 23:33:27.240124 2620 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d" (UID: "d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:33:27.328471 kubelet[2620]: I0706 23:33:27.328411 2620 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:33:27.328471 kubelet[2620]: I0706 23:33:27.328457 2620 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqvgp\" (UniqueName: \"kubernetes.io/projected/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-kube-api-access-rqvgp\") on node \"localhost\" DevicePath \"\"" Jul 6 23:33:27.328471 kubelet[2620]: I0706 23:33:27.328467 2620 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:33:27.874626 systemd[1]: Removed slice kubepods-besteffort-podd7ca1c0d_535a_46f3_a9b6_fac4deeb8a0d.slice - libcontainer container kubepods-besteffort-podd7ca1c0d_535a_46f3_a9b6_fac4deeb8a0d.slice. Jul 6 23:33:28.079945 kubelet[2620]: I0706 23:33:28.079890 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:33:28.153270 systemd[1]: Created slice kubepods-besteffort-pod6c239057_5c6c_4017_89c1_48a1756f7ecc.slice - libcontainer container kubepods-besteffort-pod6c239057_5c6c_4017_89c1_48a1756f7ecc.slice. 
Jul 6 23:33:28.338525 kubelet[2620]: I0706 23:33:28.338424 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c239057-5c6c-4017-89c1-48a1756f7ecc-whisker-ca-bundle\") pod \"whisker-6cf4cfd9c-lbl9r\" (UID: \"6c239057-5c6c-4017-89c1-48a1756f7ecc\") " pod="calico-system/whisker-6cf4cfd9c-lbl9r" Jul 6 23:33:28.338929 kubelet[2620]: I0706 23:33:28.338538 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvpz\" (UniqueName: \"kubernetes.io/projected/6c239057-5c6c-4017-89c1-48a1756f7ecc-kube-api-access-brvpz\") pod \"whisker-6cf4cfd9c-lbl9r\" (UID: \"6c239057-5c6c-4017-89c1-48a1756f7ecc\") " pod="calico-system/whisker-6cf4cfd9c-lbl9r" Jul 6 23:33:28.338929 kubelet[2620]: I0706 23:33:28.338565 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c239057-5c6c-4017-89c1-48a1756f7ecc-whisker-backend-key-pair\") pod \"whisker-6cf4cfd9c-lbl9r\" (UID: \"6c239057-5c6c-4017-89c1-48a1756f7ecc\") " pod="calico-system/whisker-6cf4cfd9c-lbl9r" Jul 6 23:33:28.758315 containerd[1519]: time="2025-07-06T23:33:28.758277616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf4cfd9c-lbl9r,Uid:6c239057-5c6c-4017-89c1-48a1756f7ecc,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:29.205137 systemd-networkd[1436]: cali50058610c98: Link UP Jul 6 23:33:29.207381 systemd-networkd[1436]: cali50058610c98: Gained carrier Jul 6 23:33:29.230180 containerd[1519]: 2025-07-06 23:33:28.861 [INFO][3893] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:33:29.230180 containerd[1519]: 2025-07-06 23:33:29.002 [INFO][3893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0 
whisker-6cf4cfd9c- calico-system 6c239057-5c6c-4017-89c1-48a1756f7ecc 897 0 2025-07-06 23:33:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cf4cfd9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6cf4cfd9c-lbl9r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali50058610c98 [] [] }} ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-" Jul 6 23:33:29.230180 containerd[1519]: 2025-07-06 23:33:29.002 [INFO][3893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230180 containerd[1519]: 2025-07-06 23:33:29.150 [INFO][3908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" HandleID="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Workload="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.150 [INFO][3908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" HandleID="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Workload="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000512590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6cf4cfd9c-lbl9r", "timestamp":"2025-07-06 23:33:29.150041362 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.150 [INFO][3908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.150 [INFO][3908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.150 [INFO][3908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.162 [INFO][3908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" host="localhost" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.176 [INFO][3908] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.180 [INFO][3908] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.181 [INFO][3908] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.183 [INFO][3908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:29.230414 containerd[1519]: 2025-07-06 23:33:29.183 [INFO][3908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" host="localhost" Jul 6 23:33:29.230631 containerd[1519]: 2025-07-06 23:33:29.185 [INFO][3908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883 Jul 6 23:33:29.230631 containerd[1519]: 
2025-07-06 23:33:29.189 [INFO][3908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" host="localhost" Jul 6 23:33:29.230631 containerd[1519]: 2025-07-06 23:33:29.194 [INFO][3908] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" host="localhost" Jul 6 23:33:29.230631 containerd[1519]: 2025-07-06 23:33:29.194 [INFO][3908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" host="localhost" Jul 6 23:33:29.230631 containerd[1519]: 2025-07-06 23:33:29.194 [INFO][3908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:29.230631 containerd[1519]: 2025-07-06 23:33:29.194 [INFO][3908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" HandleID="k8s-pod-network.4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Workload="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230744 containerd[1519]: 2025-07-06 23:33:29.196 [INFO][3893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0", GenerateName:"whisker-6cf4cfd9c-", Namespace:"calico-system", SelfLink:"", UID:"6c239057-5c6c-4017-89c1-48a1756f7ecc", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 
28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf4cfd9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6cf4cfd9c-lbl9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50058610c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:29.230744 containerd[1519]: 2025-07-06 23:33:29.196 [INFO][3893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230843 containerd[1519]: 2025-07-06 23:33:29.196 [INFO][3893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50058610c98 ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230843 containerd[1519]: 2025-07-06 23:33:29.209 [INFO][3893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" 
WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.230886 containerd[1519]: 2025-07-06 23:33:29.211 [INFO][3893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0", GenerateName:"whisker-6cf4cfd9c-", Namespace:"calico-system", SelfLink:"", UID:"6c239057-5c6c-4017-89c1-48a1756f7ecc", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf4cfd9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883", Pod:"whisker-6cf4cfd9c-lbl9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50058610c98", MAC:"06:e0:75:d1:64:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:29.230942 containerd[1519]: 2025-07-06 23:33:29.226 [INFO][3893] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" Namespace="calico-system" Pod="whisker-6cf4cfd9c-lbl9r" WorkloadEndpoint="localhost-k8s-whisker--6cf4cfd9c--lbl9r-eth0" Jul 6 23:33:29.285469 containerd[1519]: time="2025-07-06T23:33:29.285407108Z" level=info msg="connecting to shim 4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883" address="unix:///run/containerd/s/7fd6ffd0c8cf7efad31ccb851971298a2df3ba77735e4d0816894e3985706725" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:29.319991 systemd[1]: Started cri-containerd-4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883.scope - libcontainer container 4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883. Jul 6 23:33:29.331404 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:33:29.354335 containerd[1519]: time="2025-07-06T23:33:29.354293807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf4cfd9c-lbl9r,Uid:6c239057-5c6c-4017-89c1-48a1756f7ecc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883\"" Jul 6 23:33:29.356267 containerd[1519]: time="2025-07-06T23:33:29.356240915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:33:29.870088 kubelet[2620]: I0706 23:33:29.870034 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d" path="/var/lib/kubelet/pods/d7ca1c0d-535a-46f3-a9b6-fac4deeb8a0d/volumes" Jul 6 23:33:30.293918 containerd[1519]: time="2025-07-06T23:33:30.293866333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:30.294387 containerd[1519]: time="2025-07-06T23:33:30.294359159Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:33:30.295052 containerd[1519]: time="2025-07-06T23:33:30.294993204Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:30.297125 containerd[1519]: time="2025-07-06T23:33:30.296835610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:30.297553 containerd[1519]: time="2025-07-06T23:33:30.297527663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 941.252942ms" Jul 6 23:33:30.297618 containerd[1519]: time="2025-07-06T23:33:30.297605153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:33:30.300119 containerd[1519]: time="2025-07-06T23:33:30.300081124Z" level=info msg="CreateContainer within sandbox \"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:33:30.307011 containerd[1519]: time="2025-07-06T23:33:30.306958203Z" level=info msg="Container e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:30.316564 containerd[1519]: time="2025-07-06T23:33:30.316433350Z" level=info msg="CreateContainer within sandbox \"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002\"" Jul 6 23:33:30.316966 containerd[1519]: time="2025-07-06T23:33:30.316934377Z" level=info msg="StartContainer for \"e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002\"" Jul 6 23:33:30.318489 containerd[1519]: time="2025-07-06T23:33:30.318437018Z" level=info msg="connecting to shim e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002" address="unix:///run/containerd/s/7fd6ffd0c8cf7efad31ccb851971298a2df3ba77735e4d0816894e3985706725" protocol=ttrpc version=3 Jul 6 23:33:30.349982 systemd[1]: Started cri-containerd-e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002.scope - libcontainer container e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002. Jul 6 23:33:30.448479 containerd[1519]: time="2025-07-06T23:33:30.445607499Z" level=info msg="StartContainer for \"e99c90a743505410236fe6d8b605a10c49ce631603907cf1a41269f9c67b3002\" returns successfully" Jul 6 23:33:30.450007 containerd[1519]: time="2025-07-06T23:33:30.449946159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:33:31.022969 systemd-networkd[1436]: cali50058610c98: Gained IPv6LL Jul 6 23:33:31.691155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052692291.mount: Deactivated successfully. 
Jul 6 23:33:31.731687 containerd[1519]: time="2025-07-06T23:33:31.731636294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:31.732463 containerd[1519]: time="2025-07-06T23:33:31.732256375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:33:31.735413 containerd[1519]: time="2025-07-06T23:33:31.735371739Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:31.741119 containerd[1519]: time="2025-07-06T23:33:31.740593297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:31.741650 containerd[1519]: time="2025-07-06T23:33:31.741603468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.291615903s" Jul 6 23:33:31.741777 containerd[1519]: time="2025-07-06T23:33:31.741750887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:33:31.747171 containerd[1519]: time="2025-07-06T23:33:31.747132305Z" level=info msg="CreateContainer within sandbox \"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:33:31.767350 
containerd[1519]: time="2025-07-06T23:33:31.765797767Z" level=info msg="Container 34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:31.781455 containerd[1519]: time="2025-07-06T23:33:31.781042985Z" level=info msg="CreateContainer within sandbox \"4918b72713823de5c6131a880c2821baa6c555cc50c238de923de5f6cbaf5883\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37\"" Jul 6 23:33:31.781627 containerd[1519]: time="2025-07-06T23:33:31.781555332Z" level=info msg="StartContainer for \"34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37\"" Jul 6 23:33:31.782681 containerd[1519]: time="2025-07-06T23:33:31.782631791Z" level=info msg="connecting to shim 34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37" address="unix:///run/containerd/s/7fd6ffd0c8cf7efad31ccb851971298a2df3ba77735e4d0816894e3985706725" protocol=ttrpc version=3 Jul 6 23:33:31.809048 systemd[1]: Started cri-containerd-34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37.scope - libcontainer container 34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37. 
Jul 6 23:33:31.873334 containerd[1519]: time="2025-07-06T23:33:31.873280193Z" level=info msg="StartContainer for \"34451fed7f9ecae8dee55c01cd250b6c95677d86e2d8e5c563cc50b79a0cbe37\" returns successfully" Jul 6 23:33:32.901851 kubelet[2620]: I0706 23:33:32.901791 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:33:32.902658 kubelet[2620]: E0706 23:33:32.902143 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:32.951555 kubelet[2620]: I0706 23:33:32.951478 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cf4cfd9c-lbl9r" podStartSLOduration=2.564231949 podStartE2EDuration="4.951457988s" podCreationTimestamp="2025-07-06 23:33:28 +0000 UTC" firstStartedPulling="2025-07-06 23:33:29.355668676 +0000 UTC m=+37.582558573" lastFinishedPulling="2025-07-06 23:33:31.742894715 +0000 UTC m=+39.969784612" observedRunningTime="2025-07-06 23:33:32.118269596 +0000 UTC m=+40.345159493" watchObservedRunningTime="2025-07-06 23:33:32.951457988 +0000 UTC m=+41.178347885" Jul 6 23:33:33.102548 kubelet[2620]: E0706 23:33:33.102504 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:33.871427 kubelet[2620]: E0706 23:33:33.871232 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:33.871850 containerd[1519]: time="2025-07-06T23:33:33.871813394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-m6wtx,Uid:9cf84154-cc8e-4d07-859d-124b371b7ae2,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:33.872174 containerd[1519]: time="2025-07-06T23:33:33.871834517Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-85k8q,Uid:c2b5f119-3141-44a2-a801-4f52bfba3472,Namespace:kube-system,Attempt:0,}" Jul 6 23:33:33.931457 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238). Jul 6 23:33:34.017785 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4 Jul 6 23:33:34.020446 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:33:34.034987 systemd-logind[1497]: New session 8 of user core. Jul 6 23:33:34.037935 systemd-networkd[1436]: vxlan.calico: Link UP Jul 6 23:33:34.037942 systemd-networkd[1436]: vxlan.calico: Gained carrier Jul 6 23:33:34.039058 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:33:34.080696 systemd-networkd[1436]: calid94f044d245: Link UP Jul 6 23:33:34.081890 systemd-networkd[1436]: calid94f044d245: Gained carrier Jul 6 23:33:34.109938 containerd[1519]: 2025-07-06 23:33:33.931 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0 calico-apiserver-8d55b7c5- calico-apiserver 9cf84154-cc8e-4d07-859d-124b371b7ae2 830 0 2025-07-06 23:33:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d55b7c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d55b7c5-m6wtx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid94f044d245 [] [] }} ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-" Jul 6 23:33:34.109938 containerd[1519]: 2025-07-06 23:33:33.931 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.109938 containerd[1519]: 2025-07-06 23:33:34.014 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" HandleID="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Workload="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.017 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" HandleID="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Workload="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d55b7c5-m6wtx", "timestamp":"2025-07-06 23:33:34.014706747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.017 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.018 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.018 [INFO][4239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.031 [INFO][4239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" host="localhost" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.046 [INFO][4239] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.051 [INFO][4239] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.053 [INFO][4239] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.056 [INFO][4239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:34.110208 containerd[1519]: 2025-07-06 23:33:34.056 [INFO][4239] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" host="localhost" Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.058 [INFO][4239] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.064 [INFO][4239] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" host="localhost" Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4239] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" host="localhost" Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" host="localhost" Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:34.110492 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" HandleID="k8s-pod-network.3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Workload="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.110748 containerd[1519]: 2025-07-06 23:33:34.075 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0", GenerateName:"calico-apiserver-8d55b7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cf84154-cc8e-4d07-859d-124b371b7ae2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d55b7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d55b7c5-m6wtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid94f044d245", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:34.110878 containerd[1519]: 2025-07-06 23:33:34.075 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.110878 containerd[1519]: 2025-07-06 23:33:34.075 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid94f044d245 ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.110878 containerd[1519]: 2025-07-06 23:33:34.081 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.111072 containerd[1519]: 2025-07-06 23:33:34.085 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0", GenerateName:"calico-apiserver-8d55b7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cf84154-cc8e-4d07-859d-124b371b7ae2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d55b7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d", Pod:"calico-apiserver-8d55b7c5-m6wtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid94f044d245", MAC:"9a:a4:dd:16:39:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:34.111125 containerd[1519]: 2025-07-06 23:33:34.101 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" 
Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-m6wtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--m6wtx-eth0" Jul 6 23:33:34.173664 systemd-networkd[1436]: cali5077bbeda1a: Link UP Jul 6 23:33:34.176310 systemd-networkd[1436]: cali5077bbeda1a: Gained carrier Jul 6 23:33:34.196045 containerd[1519]: time="2025-07-06T23:33:34.195994785Z" level=info msg="connecting to shim 3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d" address="unix:///run/containerd/s/6cfbde2a2d93672867be3e9198a1e779f94f6bd45d1095784e6467c4b8cd0b56" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:34.199859 containerd[1519]: 2025-07-06 23:33:33.966 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--85k8q-eth0 coredns-668d6bf9bc- kube-system c2b5f119-3141-44a2-a801-4f52bfba3472 827 0 2025-07-06 23:32:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-85k8q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5077bbeda1a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-" Jul 6 23:33:34.199859 containerd[1519]: 2025-07-06 23:33:33.968 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.199859 containerd[1519]: 2025-07-06 23:33:34.043 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" HandleID="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Workload="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.044 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" HandleID="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Workload="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-85k8q", "timestamp":"2025-07-06 23:33:34.043619998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.044 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.071 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.131 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" host="localhost" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.143 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.151 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.153 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.155 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:34.200024 containerd[1519]: 2025-07-06 23:33:34.155 [INFO][4260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" host="localhost" Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.156 [INFO][4260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459 Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.161 [INFO][4260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" host="localhost" Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.169 [INFO][4260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" host="localhost" Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.169 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" host="localhost" Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.169 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:34.200213 containerd[1519]: 2025-07-06 23:33:34.169 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" HandleID="k8s-pod-network.1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Workload="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.200326 containerd[1519]: 2025-07-06 23:33:34.171 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--85k8q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c2b5f119-3141-44a2-a801-4f52bfba3472", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-85k8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5077bbeda1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:34.200387 containerd[1519]: 2025-07-06 23:33:34.171 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.200387 containerd[1519]: 2025-07-06 23:33:34.171 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5077bbeda1a ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.200387 containerd[1519]: 2025-07-06 23:33:34.173 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.200446 containerd[1519]: 2025-07-06 23:33:34.173 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--85k8q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c2b5f119-3141-44a2-a801-4f52bfba3472", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459", Pod:"coredns-668d6bf9bc-85k8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5077bbeda1a", MAC:"f6:90:0f:65:f7:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:34.200446 containerd[1519]: 2025-07-06 23:33:34.188 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" Namespace="kube-system" Pod="coredns-668d6bf9bc-85k8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--85k8q-eth0" Jul 6 23:33:34.251962 systemd[1]: Started cri-containerd-3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d.scope - libcontainer container 3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d. Jul 6 23:33:34.252387 containerd[1519]: time="2025-07-06T23:33:34.252031113Z" level=info msg="connecting to shim 1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459" address="unix:///run/containerd/s/4a9b826c190923bdfeb0a5f2a579eab023a06f9c7e05e1ce03f5ed403bf79557" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:34.297343 systemd[1]: Started cri-containerd-1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459.scope - libcontainer container 1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459. 
Jul 6 23:33:34.317109 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:33:34.356033 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:33:34.372701 containerd[1519]: time="2025-07-06T23:33:34.370670714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-85k8q,Uid:c2b5f119-3141-44a2-a801-4f52bfba3472,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459\""
Jul 6 23:33:34.372888 kubelet[2620]: E0706 23:33:34.372431 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:33:34.375500 containerd[1519]: time="2025-07-06T23:33:34.374843932Z" level=info msg="CreateContainer within sandbox \"1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:33:34.394894 sshd[4275]: Connection closed by 10.0.0.1 port 34238
Jul 6 23:33:34.395431 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:34.401145 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:34238.service: Deactivated successfully.
Jul 6 23:33:34.402884 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:33:34.405276 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:33:34.407480 systemd-logind[1497]: Removed session 8.
Jul 6 23:33:34.479190 containerd[1519]: time="2025-07-06T23:33:34.479148941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-m6wtx,Uid:9cf84154-cc8e-4d07-859d-124b371b7ae2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d\""
Jul 6 23:33:34.480834 containerd[1519]: time="2025-07-06T23:33:34.480806699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:33:34.484672 containerd[1519]: time="2025-07-06T23:33:34.484636396Z" level=info msg="Container a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:33:34.490653 containerd[1519]: time="2025-07-06T23:33:34.490616230Z" level=info msg="CreateContainer within sandbox \"1f178483d62e2f4524eb11348204b6b2745be48dfa83038fb15ef0071b181459\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6\""
Jul 6 23:33:34.491089 containerd[1519]: time="2025-07-06T23:33:34.491054562Z" level=info msg="StartContainer for \"a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6\""
Jul 6 23:33:34.491980 containerd[1519]: time="2025-07-06T23:33:34.491950269Z" level=info msg="connecting to shim a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6" address="unix:///run/containerd/s/4a9b826c190923bdfeb0a5f2a579eab023a06f9c7e05e1ce03f5ed403bf79557" protocol=ttrpc version=3
Jul 6 23:33:34.508966 systemd[1]: Started cri-containerd-a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6.scope - libcontainer container a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6.
Jul 6 23:33:34.542294 containerd[1519]: time="2025-07-06T23:33:34.542253393Z" level=info msg="StartContainer for \"a8da2d3e57eb454832096d429473fb13a98d55d6067b5292b2fbda80c44ef0b6\" returns successfully"
Jul 6 23:33:34.867612 containerd[1519]: time="2025-07-06T23:33:34.867492132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9775h,Uid:1a099398-b7a4-48cd-a32f-542006582ad1,Namespace:calico-system,Attempt:0,}"
Jul 6 23:33:34.867725 containerd[1519]: time="2025-07-06T23:33:34.867493372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-p8k4w,Uid:333fa840-0857-418b-8937-d4a8c6231197,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:33:35.013299 systemd-networkd[1436]: cali986600c7dd0: Link UP
Jul 6 23:33:35.013482 systemd-networkd[1436]: cali986600c7dd0: Gained carrier
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.927 [INFO][4494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0 calico-apiserver-567577c55- calico-apiserver 333fa840-0857-418b-8937-d4a8c6231197 832 0 2025-07-06 23:33:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567577c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567577c55-p8k4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali986600c7dd0 [] [] }} ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.928 [INFO][4494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.954 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" HandleID="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Workload="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.954 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" HandleID="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Workload="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567577c55-p8k4w", "timestamp":"2025-07-06 23:33:34.954283531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.954 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.954 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.954 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.963 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.971 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.977 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.979 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.984 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.984 [INFO][4525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.991 [INFO][4525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:34.998 [INFO][4525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" host="localhost"
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:33:35.034672 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" HandleID="k8s-pod-network.69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Workload="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.010 [INFO][4494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0", GenerateName:"calico-apiserver-567577c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"333fa840-0857-418b-8937-d4a8c6231197", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567577c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567577c55-p8k4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali986600c7dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.010 [INFO][4494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.010 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali986600c7dd0 ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.014 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.014 [INFO][4494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0", GenerateName:"calico-apiserver-567577c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"333fa840-0857-418b-8937-d4a8c6231197", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567577c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848", Pod:"calico-apiserver-567577c55-p8k4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali986600c7dd0", MAC:"e2:47:86:78:a1:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:33:35.035470 containerd[1519]: 2025-07-06 23:33:35.032 [INFO][4494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-p8k4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--p8k4w-eth0"
Jul 6 23:33:35.060910 containerd[1519]: time="2025-07-06T23:33:35.060863109Z" level=info msg="connecting to shim 69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848" address="unix:///run/containerd/s/b00435fdfae9c7ee1518fb97247b823dc7553f0fef586ee007caef89f0ae761d" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:33:35.113010 systemd[1]: Started cri-containerd-69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848.scope - libcontainer container 69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848.
Jul 6 23:33:35.117068 systemd-networkd[1436]: cali75eb2a5d86e: Link UP
Jul 6 23:33:35.117979 systemd-networkd[1436]: cali75eb2a5d86e: Gained carrier
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:34.927 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9775h-eth0 csi-node-driver- calico-system 1a099398-b7a4-48cd-a32f-542006582ad1 705 0 2025-07-06 23:33:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9775h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali75eb2a5d86e [] [] }} ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:34.928 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:34.956 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" HandleID="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Workload="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:34.956 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" HandleID="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Workload="localhost-k8s-csi--node--driver--9775h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9775h", "timestamp":"2025-07-06 23:33:34.956045701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:34.956 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.007 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.065 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.072 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.081 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.083 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.089 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.089 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.094 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.101 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.109 [INFO][4524] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.109 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" host="localhost"
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.109 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:33:35.147070 containerd[1519]: 2025-07-06 23:33:35.109 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" HandleID="k8s-pod-network.8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Workload="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.112 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9775h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a099398-b7a4-48cd-a32f-542006582ad1", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9775h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75eb2a5d86e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.114 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.114 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75eb2a5d86e ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.118 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.125 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9775h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a099398-b7a4-48cd-a32f-542006582ad1", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b", Pod:"csi-node-driver-9775h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75eb2a5d86e", MAC:"9e:a8:b1:b2:76:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:33:35.147670 containerd[1519]: 2025-07-06 23:33:35.138 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" Namespace="calico-system" Pod="csi-node-driver-9775h" WorkloadEndpoint="localhost-k8s-csi--node--driver--9775h-eth0"
Jul 6 23:33:35.148173 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:33:35.184693 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL
Jul 6 23:33:35.191519 kubelet[2620]: E0706 23:33:35.191482 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:33:35.195605 containerd[1519]: time="2025-07-06T23:33:35.195212976Z" level=info msg="connecting to shim 8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b" address="unix:///run/containerd/s/5bc4181dbabe6b2e63a9d971dc134894ed1f0414ece01845eae1f11bc5b60612" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:33:35.206788 containerd[1519]: time="2025-07-06T23:33:35.204922905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-p8k4w,Uid:333fa840-0857-418b-8937-d4a8c6231197,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848\""
Jul 6 23:33:35.209487 kubelet[2620]: I0706 23:33:35.209428 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-85k8q" podStartSLOduration=36.209211124 podStartE2EDuration="36.209211124s" podCreationTimestamp="2025-07-06 23:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:35.208099634 +0000 UTC m=+43.434989491" watchObservedRunningTime="2025-07-06 23:33:35.209211124 +0000 UTC m=+43.436101021"
Jul 6 23:33:35.240031 systemd[1]: Started cri-containerd-8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b.scope - libcontainer container 8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b.
Jul 6 23:33:35.254741 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:33:35.266308 containerd[1519]: time="2025-07-06T23:33:35.266262319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9775h,Uid:1a099398-b7a4-48cd-a32f-542006582ad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b\""
Jul 6 23:33:35.632361 systemd-networkd[1436]: cali5077bbeda1a: Gained IPv6LL
Jul 6 23:33:35.868099 containerd[1519]: time="2025-07-06T23:33:35.868059076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lw88t,Uid:8e53fb0d-7e4d-41ce-af63-c768b7e8e895,Namespace:calico-system,Attempt:0,}"
Jul 6 23:33:35.951457 systemd-networkd[1436]: calid94f044d245: Gained IPv6LL
Jul 6 23:33:36.007942 systemd-networkd[1436]: cali6870b2b2e2f: Link UP
Jul 6 23:33:36.008732 systemd-networkd[1436]: cali6870b2b2e2f: Gained carrier
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.918 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--lw88t-eth0 goldmane-768f4c5c69- calico-system 8e53fb0d-7e4d-41ce-af63-c768b7e8e895 834 0 2025-07-06 23:33:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-lw88t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6870b2b2e2f [] [] }} ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-"
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.922 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0"
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.952 [INFO][4674] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" HandleID="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Workload="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0"
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.953 [INFO][4674] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" HandleID="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Workload="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-lw88t", "timestamp":"2025-07-06 23:33:35.952864981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.953 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.953 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.953 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.963 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.971 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.979 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.983 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.988 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.988 [INFO][4674] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.991 [INFO][4674] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:35.996 [INFO][4674] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:36.002 [INFO][4674] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:36.002 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" host="localhost" Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:36.003 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:36.030636 containerd[1519]: 2025-07-06 23:33:36.003 [INFO][4674] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" HandleID="k8s-pod-network.4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Workload="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.005 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lw88t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8e53fb0d-7e4d-41ce-af63-c768b7e8e895", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-lw88t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6870b2b2e2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.005 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.005 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6870b2b2e2f ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.009 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.010 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lw88t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8e53fb0d-7e4d-41ce-af63-c768b7e8e895", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe", Pod:"goldmane-768f4c5c69-lw88t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6870b2b2e2f", MAC:"4a:ff:3b:a4:3c:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:36.031320 containerd[1519]: 2025-07-06 23:33:36.025 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" Namespace="calico-system" Pod="goldmane-768f4c5c69-lw88t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lw88t-eth0" Jul 6 23:33:36.060092 containerd[1519]: time="2025-07-06T23:33:36.059976590Z" level=info msg="connecting to shim 
4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe" address="unix:///run/containerd/s/f82bf4c4234c06ab80aae00f086af03afcc09afc8cc4f9fb17a2d807f4799cb9" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:36.094051 systemd[1]: Started cri-containerd-4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe.scope - libcontainer container 4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe. Jul 6 23:33:36.110458 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:33:36.199290 kubelet[2620]: E0706 23:33:36.199254 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:36.210849 containerd[1519]: time="2025-07-06T23:33:36.210717454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lw88t,Uid:8e53fb0d-7e4d-41ce-af63-c768b7e8e895,Namespace:calico-system,Attempt:0,} returns sandbox id \"4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe\"" Jul 6 23:33:36.359623 containerd[1519]: time="2025-07-06T23:33:36.359545100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:36.360613 containerd[1519]: time="2025-07-06T23:33:36.360578617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:33:36.361435 containerd[1519]: time="2025-07-06T23:33:36.361408831Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:36.363178 containerd[1519]: time="2025-07-06T23:33:36.363138027Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:36.364380 containerd[1519]: time="2025-07-06T23:33:36.364325242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.883483299s" Jul 6 23:33:36.364380 containerd[1519]: time="2025-07-06T23:33:36.364378208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:33:36.365273 containerd[1519]: time="2025-07-06T23:33:36.365243706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:33:36.366563 containerd[1519]: time="2025-07-06T23:33:36.366532693Z" level=info msg="CreateContainer within sandbox \"3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:33:36.374629 containerd[1519]: time="2025-07-06T23:33:36.374575085Z" level=info msg="Container ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:36.382595 containerd[1519]: time="2025-07-06T23:33:36.382546510Z" level=info msg="CreateContainer within sandbox \"3b9b1438b02ca407a45bfea5a327795ed9b797d0ffd53404e1e55d07bdcc953d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e\"" Jul 6 23:33:36.384051 containerd[1519]: time="2025-07-06T23:33:36.383989953Z" level=info msg="StartContainer for 
\"ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e\"" Jul 6 23:33:36.385082 containerd[1519]: time="2025-07-06T23:33:36.385055594Z" level=info msg="connecting to shim ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e" address="unix:///run/containerd/s/6cfbde2a2d93672867be3e9198a1e779f94f6bd45d1095784e6467c4b8cd0b56" protocol=ttrpc version=3 Jul 6 23:33:36.413961 systemd[1]: Started cri-containerd-ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e.scope - libcontainer container ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e. Jul 6 23:33:36.456738 containerd[1519]: time="2025-07-06T23:33:36.456626035Z" level=info msg="StartContainer for \"ca1841a9e3597f99c4065925b599b0e6ee0bb9b75ad4493235e367314338913e\" returns successfully" Jul 6 23:33:36.527029 systemd-networkd[1436]: cali75eb2a5d86e: Gained IPv6LL Jul 6 23:33:36.612428 containerd[1519]: time="2025-07-06T23:33:36.612378947Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:36.613039 containerd[1519]: time="2025-07-06T23:33:36.613012619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:33:36.615021 containerd[1519]: time="2025-07-06T23:33:36.614985442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 249.707252ms" Jul 6 23:33:36.615114 containerd[1519]: time="2025-07-06T23:33:36.615021967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 
23:33:36.616637 containerd[1519]: time="2025-07-06T23:33:36.616607506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:33:36.621254 containerd[1519]: time="2025-07-06T23:33:36.619757904Z" level=info msg="CreateContainer within sandbox \"69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:33:36.626565 containerd[1519]: time="2025-07-06T23:33:36.626520271Z" level=info msg="Container 25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:36.639812 containerd[1519]: time="2025-07-06T23:33:36.639762894Z" level=info msg="CreateContainer within sandbox \"69dca1c5ac66effbc3fcb357ce8f42d97d5631e7fe5f1fa0833f0482a2215848\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806\"" Jul 6 23:33:36.640437 containerd[1519]: time="2025-07-06T23:33:36.640329398Z" level=info msg="StartContainer for \"25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806\"" Jul 6 23:33:36.641602 containerd[1519]: time="2025-07-06T23:33:36.641575779Z" level=info msg="connecting to shim 25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806" address="unix:///run/containerd/s/b00435fdfae9c7ee1518fb97247b823dc7553f0fef586ee007caef89f0ae761d" protocol=ttrpc version=3 Jul 6 23:33:36.666944 systemd[1]: Started cri-containerd-25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806.scope - libcontainer container 25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806. 
Jul 6 23:33:36.713782 containerd[1519]: time="2025-07-06T23:33:36.713742728Z" level=info msg="StartContainer for \"25fce061ef8226d368ac47c4dfb8a1cde73697f54440a71e063200474e806806\" returns successfully" Jul 6 23:33:36.867797 containerd[1519]: time="2025-07-06T23:33:36.867673073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c4d7674c-qx6wn,Uid:bfd30fe9-b080-4b45-a070-84043baaf2eb,Namespace:calico-system,Attempt:0,}" Jul 6 23:33:36.867899 containerd[1519]: time="2025-07-06T23:33:36.867680994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-7x44x,Uid:74437c90-fa40-4d77-94d3-7c329ae02598,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:37.039866 systemd-networkd[1436]: cali986600c7dd0: Gained IPv6LL Jul 6 23:33:37.042982 systemd-networkd[1436]: calid958f82011c: Link UP Jul 6 23:33:37.043560 systemd-networkd[1436]: calid958f82011c: Gained carrier Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.947 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0 calico-kube-controllers-66c4d7674c- calico-system bfd30fe9-b080-4b45-a070-84043baaf2eb 831 0 2025-07-06 23:33:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66c4d7674c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66c4d7674c-qx6wn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid958f82011c [] [] }} ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-" Jul 6 23:33:37.089093 
containerd[1519]: 2025-07-06 23:33:36.948 [INFO][4822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.987 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" HandleID="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Workload="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.988 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" HandleID="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Workload="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042d2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66c4d7674c-qx6wn", "timestamp":"2025-07-06 23:33:36.987219557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.988 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.988 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.988 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:36.999 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.005 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.011 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.014 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.017 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.017 [INFO][4845] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.018 [INFO][4845] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400 Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.023 [INFO][4845] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.034 [INFO][4845] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.035 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" host="localhost" Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.035 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:37.089093 containerd[1519]: 2025-07-06 23:33:37.035 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" HandleID="k8s-pod-network.5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Workload="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 23:33:37.039 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0", GenerateName:"calico-kube-controllers-66c4d7674c-", Namespace:"calico-system", SelfLink:"", UID:"bfd30fe9-b080-4b45-a070-84043baaf2eb", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c4d7674c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66c4d7674c-qx6wn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid958f82011c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 23:33:37.039 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 23:33:37.039 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid958f82011c ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 23:33:37.046 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 
23:33:37.051 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0", GenerateName:"calico-kube-controllers-66c4d7674c-", Namespace:"calico-system", SelfLink:"", UID:"bfd30fe9-b080-4b45-a070-84043baaf2eb", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c4d7674c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400", Pod:"calico-kube-controllers-66c4d7674c-qx6wn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid958f82011c", MAC:"66:fb:87:a4:c4:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:37.091995 containerd[1519]: 2025-07-06 
23:33:37.084 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" Namespace="calico-system" Pod="calico-kube-controllers-66c4d7674c-qx6wn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66c4d7674c--qx6wn-eth0" Jul 6 23:33:37.160942 systemd-networkd[1436]: cali055c1e01aec: Link UP Jul 6 23:33:37.161568 systemd-networkd[1436]: cali055c1e01aec: Gained carrier Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:36.944 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567577c55--7x44x-eth0 calico-apiserver-567577c55- calico-apiserver 74437c90-fa40-4d77-94d3-7c329ae02598 833 0 2025-07-06 23:33:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567577c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567577c55-7x44x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali055c1e01aec [] [] }} ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:36.945 [INFO][4818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.003 [INFO][4843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.003 [INFO][4843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567577c55-7x44x", "timestamp":"2025-07-06 23:33:37.003106353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.003 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.035 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.036 [INFO][4843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.099 [INFO][4843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.107 [INFO][4843] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.111 [INFO][4843] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.113 [INFO][4843] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.115 [INFO][4843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.115 [INFO][4843] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.117 [INFO][4843] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.126 [INFO][4843] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.151 [INFO][4843] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.151 [INFO][4843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" host="localhost" Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.151 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:37.182318 containerd[1519]: 2025-07-06 23:33:37.152 [INFO][4843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.158 [INFO][4818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567577c55--7x44x-eth0", GenerateName:"calico-apiserver-567577c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"74437c90-fa40-4d77-94d3-7c329ae02598", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567577c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567577c55-7x44x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali055c1e01aec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.159 [INFO][4818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.159 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali055c1e01aec ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.162 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.162 [INFO][4818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567577c55--7x44x-eth0", GenerateName:"calico-apiserver-567577c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"74437c90-fa40-4d77-94d3-7c329ae02598", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567577c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e", Pod:"calico-apiserver-567577c55-7x44x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali055c1e01aec", MAC:"ea:ed:31:d9:d8:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:37.183041 containerd[1519]: 2025-07-06 23:33:37.171 [INFO][4818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Namespace="calico-apiserver" Pod="calico-apiserver-567577c55-7x44x" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:37.184360 containerd[1519]: time="2025-07-06T23:33:37.184321109Z" level=info msg="connecting to shim 5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400" address="unix:///run/containerd/s/9f76a9e16c105b734e273904239292c5a15f04a2ed660c5cd5708b743e7df5f3" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:37.216787 kubelet[2620]: E0706 23:33:37.216716 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:37.249520 kubelet[2620]: I0706 23:33:37.248507 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567577c55-p8k4w" podStartSLOduration=26.843822192 podStartE2EDuration="28.248267834s" podCreationTimestamp="2025-07-06 23:33:09 +0000 UTC" firstStartedPulling="2025-07-06 23:33:35.211295646 +0000 UTC m=+43.438185543" lastFinishedPulling="2025-07-06 23:33:36.615741288 +0000 UTC m=+44.842631185" observedRunningTime="2025-07-06 23:33:37.247660807 +0000 UTC m=+45.474550704" watchObservedRunningTime="2025-07-06 23:33:37.248267834 +0000 UTC m=+45.475157731" Jul 6 23:33:37.252599 kubelet[2620]: I0706 23:33:37.252445 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d55b7c5-m6wtx" podStartSLOduration=26.36784219 podStartE2EDuration="28.252425175s" podCreationTimestamp="2025-07-06 23:33:09 +0000 UTC" firstStartedPulling="2025-07-06 23:33:34.480520745 +0000 UTC m=+42.707410642" lastFinishedPulling="2025-07-06 23:33:36.36510373 +0000 UTC m=+44.591993627" observedRunningTime="2025-07-06 23:33:37.229328376 +0000 UTC m=+45.456218273" watchObservedRunningTime="2025-07-06 
23:33:37.252425175 +0000 UTC m=+45.479315072" Jul 6 23:33:37.262998 containerd[1519]: time="2025-07-06T23:33:37.262894815Z" level=info msg="connecting to shim abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" address="unix:///run/containerd/s/0e24ea6c2c2bc1177622ed0470b22ae5651f58d1916b4ffdd2c081fc83e3a251" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:37.265914 systemd[1]: Started cri-containerd-5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400.scope - libcontainer container 5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400. Jul 6 23:33:37.281814 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:33:37.296935 systemd[1]: Started cri-containerd-abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e.scope - libcontainer container abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e. Jul 6 23:33:37.315426 containerd[1519]: time="2025-07-06T23:33:37.315376349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c4d7674c-qx6wn,Uid:bfd30fe9-b080-4b45-a070-84043baaf2eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400\"" Jul 6 23:33:37.336982 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:33:37.359515 containerd[1519]: time="2025-07-06T23:33:37.359476555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567577c55-7x44x,Uid:74437c90-fa40-4d77-94d3-7c329ae02598,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\"" Jul 6 23:33:37.364017 containerd[1519]: time="2025-07-06T23:33:37.363968332Z" level=info msg="CreateContainer within sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:33:37.375519 containerd[1519]: time="2025-07-06T23:33:37.375475127Z" level=info msg="Container 7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:37.385987 containerd[1519]: time="2025-07-06T23:33:37.385921884Z" level=info msg="CreateContainer within sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\"" Jul 6 23:33:37.386639 containerd[1519]: time="2025-07-06T23:33:37.386547714Z" level=info msg="StartContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\"" Jul 6 23:33:37.388917 containerd[1519]: time="2025-07-06T23:33:37.388884933Z" level=info msg="connecting to shim 7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1" address="unix:///run/containerd/s/0e24ea6c2c2bc1177622ed0470b22ae5651f58d1916b4ffdd2c081fc83e3a251" protocol=ttrpc version=3 Jul 6 23:33:37.411950 systemd[1]: Started cri-containerd-7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1.scope - libcontainer container 7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1. 
Jul 6 23:33:37.451926 containerd[1519]: time="2025-07-06T23:33:37.451888513Z" level=info msg="StartContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" returns successfully" Jul 6 23:33:37.609441 containerd[1519]: time="2025-07-06T23:33:37.609368280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:37.610770 containerd[1519]: time="2025-07-06T23:33:37.610445199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:33:37.611203 containerd[1519]: time="2025-07-06T23:33:37.611172080Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:37.615549 containerd[1519]: time="2025-07-06T23:33:37.614908853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:37.614955 systemd-networkd[1436]: cali6870b2b2e2f: Gained IPv6LL Jul 6 23:33:37.616667 containerd[1519]: time="2025-07-06T23:33:37.616609442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 999.872081ms" Jul 6 23:33:37.616667 containerd[1519]: time="2025-07-06T23:33:37.616648126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:33:37.617714 containerd[1519]: time="2025-07-06T23:33:37.617685761Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:33:37.619198 containerd[1519]: time="2025-07-06T23:33:37.619159004Z" level=info msg="CreateContainer within sandbox \"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:33:37.630774 containerd[1519]: time="2025-07-06T23:33:37.630579590Z" level=info msg="Container beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:37.639774 containerd[1519]: time="2025-07-06T23:33:37.639294315Z" level=info msg="CreateContainer within sandbox \"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e\"" Jul 6 23:33:37.640244 containerd[1519]: time="2025-07-06T23:33:37.640210977Z" level=info msg="StartContainer for \"beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e\"" Jul 6 23:33:37.641976 containerd[1519]: time="2025-07-06T23:33:37.641946169Z" level=info msg="connecting to shim beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e" address="unix:///run/containerd/s/5bc4181dbabe6b2e63a9d971dc134894ed1f0414ece01845eae1f11bc5b60612" protocol=ttrpc version=3 Jul 6 23:33:37.661942 systemd[1]: Started cri-containerd-beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e.scope - libcontainer container beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e. 
Jul 6 23:33:37.716843 containerd[1519]: time="2025-07-06T23:33:37.715600249Z" level=info msg="StartContainer for \"beb560ab0d6dd58e7fe08b59fe7595b2b697557aea85d864d5190c023e6a4c9e\" returns successfully" Jul 6 23:33:37.867653 kubelet[2620]: E0706 23:33:37.867621 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:37.868428 containerd[1519]: time="2025-07-06T23:33:37.868385256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vpq69,Uid:b3ba864b-287e-42da-848e-5b46fe245109,Namespace:kube-system,Attempt:0,}" Jul 6 23:33:38.036629 systemd-networkd[1436]: cali383cfef2b59: Link UP Jul 6 23:33:38.037797 systemd-networkd[1436]: cali383cfef2b59: Gained carrier Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.931 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vpq69-eth0 coredns-668d6bf9bc- kube-system b3ba864b-287e-42da-848e-5b46fe245109 819 0 2025-07-06 23:32:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vpq69 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali383cfef2b59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.931 [INFO][5041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.975 [INFO][5057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" HandleID="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Workload="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.975 [INFO][5057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" HandleID="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Workload="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vpq69", "timestamp":"2025-07-06 23:33:37.975617136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.976 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.976 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.976 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.990 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:37.995 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.003 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.007 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.011 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.011 [INFO][5057] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.013 [INFO][5057] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.018 [INFO][5057] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.028 [INFO][5057] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.029 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" host="localhost" Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.029 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:38.061382 containerd[1519]: 2025-07-06 23:33:38.029 [INFO][5057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" HandleID="k8s-pod-network.731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Workload="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.031 [INFO][5041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vpq69-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3ba864b-287e-42da-848e-5b46fe245109", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vpq69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali383cfef2b59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.031 [INFO][5041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.031 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali383cfef2b59 ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.038 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.039 [INFO][5041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vpq69-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3ba864b-287e-42da-848e-5b46fe245109", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b", Pod:"coredns-668d6bf9bc-vpq69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali383cfef2b59", MAC:"5a:48:c0:10:ff:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:38.062261 containerd[1519]: 2025-07-06 23:33:38.055 [INFO][5041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-vpq69" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vpq69-eth0" Jul 6 23:33:38.088504 containerd[1519]: time="2025-07-06T23:33:38.088401331Z" level=info msg="connecting to shim 731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b" address="unix:///run/containerd/s/2c0a424d7ad5567a25266846aff8fa4df752f3c2c0869f24d4024139d9a2c624" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:38.120085 systemd[1]: Started cri-containerd-731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b.scope - libcontainer container 731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b. 
Jul 6 23:33:38.135476 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:33:38.171288 containerd[1519]: time="2025-07-06T23:33:38.171243382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vpq69,Uid:b3ba864b-287e-42da-848e-5b46fe245109,Namespace:kube-system,Attempt:0,} returns sandbox id \"731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b\"" Jul 6 23:33:38.172732 kubelet[2620]: E0706 23:33:38.172513 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:38.175159 containerd[1519]: time="2025-07-06T23:33:38.175115201Z" level=info msg="CreateContainer within sandbox \"731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:33:38.209861 containerd[1519]: time="2025-07-06T23:33:38.209817998Z" level=info msg="Container 2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:38.227338 containerd[1519]: time="2025-07-06T23:33:38.227288730Z" level=info msg="CreateContainer within sandbox \"731ee79d743be52aab34f5322cfeeba0a3fe7d01e683c4d0cd7e3d4d74780c1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863\"" Jul 6 23:33:38.228103 containerd[1519]: time="2025-07-06T23:33:38.228025130Z" level=info msg="StartContainer for \"2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863\"" Jul 6 23:33:38.230705 containerd[1519]: time="2025-07-06T23:33:38.230459474Z" level=info msg="connecting to shim 2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863" address="unix:///run/containerd/s/2c0a424d7ad5567a25266846aff8fa4df752f3c2c0869f24d4024139d9a2c624" protocol=ttrpc version=3 Jul 6 
23:33:38.235587 kubelet[2620]: E0706 23:33:38.235542 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:38.247752 kubelet[2620]: I0706 23:33:38.247217 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567577c55-7x44x" podStartSLOduration=29.247198966 podStartE2EDuration="29.247198966s" podCreationTimestamp="2025-07-06 23:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:38.246567498 +0000 UTC m=+46.473457395" watchObservedRunningTime="2025-07-06 23:33:38.247198966 +0000 UTC m=+46.474088863" Jul 6 23:33:38.272193 systemd[1]: Started cri-containerd-2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863.scope - libcontainer container 2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863. Jul 6 23:33:38.335230 containerd[1519]: time="2025-07-06T23:33:38.334543504Z" level=info msg="StartContainer for \"2b7bc2d7770789fa3cbdf3bf0c8b3a1039931be5b287936a11073d9ed90ab863\" returns successfully" Jul 6 23:33:38.766936 systemd-networkd[1436]: calid958f82011c: Gained IPv6LL Jul 6 23:33:38.767975 systemd-networkd[1436]: cali055c1e01aec: Gained IPv6LL Jul 6 23:33:38.772078 systemd[1]: Created slice kubepods-besteffort-pode1589e5e_bbf7_4760_9925_c8ac70f91619.slice - libcontainer container kubepods-besteffort-pode1589e5e_bbf7_4760_9925_c8ac70f91619.slice. 
Jul 6 23:33:38.910614 kubelet[2620]: I0706 23:33:38.910534 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1589e5e-bbf7-4760-9925-c8ac70f91619-calico-apiserver-certs\") pod \"calico-apiserver-8d55b7c5-r4vwz\" (UID: \"e1589e5e-bbf7-4760-9925-c8ac70f91619\") " pod="calico-apiserver/calico-apiserver-8d55b7c5-r4vwz" Jul 6 23:33:38.910614 kubelet[2620]: I0706 23:33:38.910586 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhxr\" (UniqueName: \"kubernetes.io/projected/e1589e5e-bbf7-4760-9925-c8ac70f91619-kube-api-access-gvhxr\") pod \"calico-apiserver-8d55b7c5-r4vwz\" (UID: \"e1589e5e-bbf7-4760-9925-c8ac70f91619\") " pod="calico-apiserver/calico-apiserver-8d55b7c5-r4vwz" Jul 6 23:33:39.076391 containerd[1519]: time="2025-07-06T23:33:39.075976371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-r4vwz,Uid:e1589e5e-bbf7-4760-9925-c8ac70f91619,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:33:39.234101 systemd-networkd[1436]: cali4eb059c0ab3: Link UP Jul 6 23:33:39.235535 systemd-networkd[1436]: cali4eb059c0ab3: Gained carrier Jul 6 23:33:39.248372 kubelet[2620]: E0706 23:33:39.248333 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:39.249600 kubelet[2620]: I0706 23:33:39.248889 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.125 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0 calico-apiserver-8d55b7c5- calico-apiserver e1589e5e-bbf7-4760-9925-c8ac70f91619 1096 0 2025-07-06 23:33:38 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d55b7c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d55b7c5-r4vwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4eb059c0ab3 [] [] }} ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.125 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.162 [INFO][5179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" HandleID="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Workload="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.162 [INFO][5179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" HandleID="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Workload="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d55b7c5-r4vwz", "timestamp":"2025-07-06 23:33:39.16224559 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.162 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.162 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.162 [INFO][5179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.174 [INFO][5179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.179 [INFO][5179] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.185 [INFO][5179] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.188 [INFO][5179] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.192 [INFO][5179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.193 [INFO][5179] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.195 [INFO][5179] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c 
Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.208 [INFO][5179] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.223 [INFO][5179] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.223 [INFO][5179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" host="localhost" Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.223 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:33:39.262829 containerd[1519]: 2025-07-06 23:33:39.223 [INFO][5179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" HandleID="k8s-pod-network.d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Workload="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.227 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0", GenerateName:"calico-apiserver-8d55b7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1589e5e-bbf7-4760-9925-c8ac70f91619", 
ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d55b7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d55b7c5-r4vwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4eb059c0ab3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.228 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.228 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4eb059c0ab3 ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.236 [INFO][5163] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.237 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0", GenerateName:"calico-apiserver-8d55b7c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1589e5e-bbf7-4760-9925-c8ac70f91619", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 33, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d55b7c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c", Pod:"calico-apiserver-8d55b7c5-r4vwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4eb059c0ab3", MAC:"f6:c6:59:fd:8b:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:33:39.264571 containerd[1519]: 2025-07-06 23:33:39.252 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" Namespace="calico-apiserver" Pod="calico-apiserver-8d55b7c5-r4vwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d55b7c5--r4vwz-eth0" Jul 6 23:33:39.272635 kubelet[2620]: I0706 23:33:39.272561 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vpq69" podStartSLOduration=40.272530832 podStartE2EDuration="40.272530832s" podCreationTimestamp="2025-07-06 23:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:39.267361605 +0000 UTC m=+47.494251502" watchObservedRunningTime="2025-07-06 23:33:39.272530832 +0000 UTC m=+47.499420729" Jul 6 23:33:39.341858 containerd[1519]: time="2025-07-06T23:33:39.340910636Z" level=info msg="connecting to shim d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c" address="unix:///run/containerd/s/b2fe10ddf3d1c78fe34f68b1b20b23a53b6ce634fc6e44a20da5d069abc261ef" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:33:39.375502 systemd[1]: Started cri-containerd-d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c.scope - libcontainer container d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c. Jul 6 23:33:39.407848 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:34244.service - OpenSSH per-connection server daemon (10.0.0.1:34244). 
Jul 6 23:33:39.425852 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:33:39.506091 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:39.515788 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:39.525580 containerd[1519]: time="2025-07-06T23:33:39.525173275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d55b7c5-r4vwz,Uid:e1589e5e-bbf7-4760-9925-c8ac70f91619,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c\""
Jul 6 23:33:39.527562 systemd-logind[1497]: New session 9 of user core.
Jul 6 23:33:39.529403 containerd[1519]: time="2025-07-06T23:33:39.529363239Z" level=info msg="CreateContainer within sandbox \"d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:33:39.534186 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:33:39.561077 containerd[1519]: time="2025-07-06T23:33:39.561034514Z" level=info msg="Container 2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:33:39.582595 containerd[1519]: time="2025-07-06T23:33:39.582482546Z" level=info msg="CreateContainer within sandbox \"d7569e60420826faba61f00b3090c8b83df606a62bf94c17ff765b70f987727c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71\""
Jul 6 23:33:39.583308 containerd[1519]: time="2025-07-06T23:33:39.583259388Z" level=info msg="StartContainer for \"2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71\""
Jul 6 23:33:39.585156 containerd[1519]: time="2025-07-06T23:33:39.585114425Z" level=info msg="connecting to shim 2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71" address="unix:///run/containerd/s/b2fe10ddf3d1c78fe34f68b1b20b23a53b6ce634fc6e44a20da5d069abc261ef" protocol=ttrpc version=3
Jul 6 23:33:39.652002 systemd[1]: Started cri-containerd-2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71.scope - libcontainer container 2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71.
Jul 6 23:33:39.726893 systemd-networkd[1436]: cali383cfef2b59: Gained IPv6LL
Jul 6 23:33:39.886064 containerd[1519]: time="2025-07-06T23:33:39.884483378Z" level=info msg="StartContainer for \"2d5e7951e1224a0260cdd169b99274b24a496b22f7d857b3d7f78431b7b42d71\" returns successfully"
Jul 6 23:33:39.885295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779657367.mount: Deactivated successfully.
Jul 6 23:33:40.055188 sshd[5258]: Connection closed by 10.0.0.1 port 34244
Jul 6 23:33:40.055630 sshd-session[5243]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:40.059438 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:34244.service: Deactivated successfully.
Jul 6 23:33:40.061551 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:33:40.065679 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:33:40.067725 systemd-logind[1497]: Removed session 9. Jul 6 23:33:40.225753 containerd[1519]: time="2025-07-06T23:33:40.225698430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:40.233934 containerd[1519]: time="2025-07-06T23:33:40.233880959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:33:40.236525 containerd[1519]: time="2025-07-06T23:33:40.236451185Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:40.238835 containerd[1519]: time="2025-07-06T23:33:40.238795508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:40.239180 containerd[1519]: time="2025-07-06T23:33:40.239149545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.62134145s" Jul 6 23:33:40.239253 containerd[1519]: time="2025-07-06T23:33:40.239182429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:33:40.241601 containerd[1519]: time="2025-07-06T23:33:40.241567716Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:33:40.257157 kubelet[2620]: I0706 23:33:40.257108 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:33:40.258407 kubelet[2620]: E0706 23:33:40.258382 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:33:40.266908 containerd[1519]: time="2025-07-06T23:33:40.266859180Z" level=info msg="CreateContainer within sandbox \"4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:33:40.276798 containerd[1519]: time="2025-07-06T23:33:40.275939721Z" level=info msg="Container 8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:33:40.284380 kubelet[2620]: I0706 23:33:40.283815 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d55b7c5-r4vwz" podStartSLOduration=2.283796616 podStartE2EDuration="2.283796616s" podCreationTimestamp="2025-07-06 23:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:40.283655362 +0000 UTC m=+48.510545259" watchObservedRunningTime="2025-07-06 23:33:40.283796616 +0000 UTC m=+48.510686513" Jul 6 23:33:40.287837 containerd[1519]: time="2025-07-06T23:33:40.287522403Z" level=info msg="CreateContainer within sandbox \"4260e67292d85bcad8137bfa54f449ce80bfa89d9865f8e59727b4b6955edebe\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\"" Jul 6 23:33:40.288838 containerd[1519]: time="2025-07-06T23:33:40.288803416Z" level=info msg="StartContainer for \"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\"" Jul 6 
23:33:40.290548 containerd[1519]: time="2025-07-06T23:33:40.290516513Z" level=info msg="connecting to shim 8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66" address="unix:///run/containerd/s/f82bf4c4234c06ab80aae00f086af03afcc09afc8cc4f9fb17a2d807f4799cb9" protocol=ttrpc version=3 Jul 6 23:33:40.319188 containerd[1519]: time="2025-07-06T23:33:40.319081076Z" level=info msg="StopContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" with timeout 30 (s)" Jul 6 23:33:40.320154 containerd[1519]: time="2025-07-06T23:33:40.319621372Z" level=info msg="Stop container \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" with signal terminated" Jul 6 23:33:40.341955 systemd[1]: Started cri-containerd-8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66.scope - libcontainer container 8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66. Jul 6 23:33:40.342174 systemd[1]: cri-containerd-7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1.scope: Deactivated successfully. Jul 6 23:33:40.342450 systemd[1]: cri-containerd-7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1.scope: Consumed 1.402s CPU time, 26.6M memory peak. 
Jul 6 23:33:40.359905 containerd[1519]: time="2025-07-06T23:33:40.359860146Z" level=info msg="received exit event container_id:\"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" id:\"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" pid:4984 exit_status:1 exited_at:{seconds:1751844820 nanos:357509663}"
Jul 6 23:33:40.360095 containerd[1519]: time="2025-07-06T23:33:40.360046766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" id:\"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" pid:4984 exit_status:1 exited_at:{seconds:1751844820 nanos:357509663}"
Jul 6 23:33:40.402029 containerd[1519]: time="2025-07-06T23:33:40.401739891Z" level=info msg="StartContainer for \"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\" returns successfully"
Jul 6 23:33:40.413787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1-rootfs.mount: Deactivated successfully.
Jul 6 23:33:40.568964 containerd[1519]: time="2025-07-06T23:33:40.568907671Z" level=info msg="StopContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" returns successfully"
Jul 6 23:33:40.571945 containerd[1519]: time="2025-07-06T23:33:40.571484058Z" level=info msg="StopPodSandbox for \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\""
Jul 6 23:33:40.573784 containerd[1519]: time="2025-07-06T23:33:40.573737972Z" level=info msg="Container to stop \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:33:40.583291 systemd[1]: cri-containerd-abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e.scope: Deactivated successfully.
Jul 6 23:33:40.585584 containerd[1519]: time="2025-07-06T23:33:40.585543836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" id:\"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" pid:4953 exit_status:137 exited_at:{seconds:1751844820 nanos:585195000}" Jul 6 23:33:40.613944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e-rootfs.mount: Deactivated successfully. Jul 6 23:33:40.616355 containerd[1519]: time="2025-07-06T23:33:40.616284385Z" level=info msg="shim disconnected" id=abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e namespace=k8s.io Jul 6 23:33:40.628987 containerd[1519]: time="2025-07-06T23:33:40.616351552Z" level=warning msg="cleaning up after shim disconnected" id=abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e namespace=k8s.io Jul 6 23:33:40.628987 containerd[1519]: time="2025-07-06T23:33:40.628979022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:33:40.658845 containerd[1519]: time="2025-07-06T23:33:40.658300663Z" level=info msg="received exit event sandbox_id:\"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" exit_status:137 exited_at:{seconds:1751844820 nanos:585195000}" Jul 6 23:33:40.687010 systemd-networkd[1436]: cali4eb059c0ab3: Gained IPv6LL Jul 6 23:33:40.730569 systemd-networkd[1436]: cali055c1e01aec: Link DOWN Jul 6 23:33:40.730576 systemd-networkd[1436]: cali055c1e01aec: Lost carrier Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.729 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.729 [INFO][5422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" iface="eth0" netns="/var/run/netns/cni-f8822227-69a7-2d19-123c-2ddb57a5c519" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.729 [INFO][5422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" iface="eth0" netns="/var/run/netns/cni-f8822227-69a7-2d19-123c-2ddb57a5c519" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.738 [INFO][5422] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" after=9.192634ms iface="eth0" netns="/var/run/netns/cni-f8822227-69a7-2d19-123c-2ddb57a5c519" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.738 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.738 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.768 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0" Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.769 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.769 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.816 [INFO][5433] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.817 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.820 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:33:40.826077 containerd[1519]: 2025-07-06 23:33:40.823 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:40.827164 containerd[1519]: time="2025-07-06T23:33:40.826316451Z" level=info msg="TearDown network for sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" successfully"
Jul 6 23:33:40.827164 containerd[1519]: time="2025-07-06T23:33:40.826459906Z" level=info msg="StopPodSandbox for \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" returns successfully"
Jul 6 23:33:40.880133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e-shm.mount: Deactivated successfully.
Jul 6 23:33:40.880561 systemd[1]: run-netns-cni\x2df8822227\x2d69a7\x2d2d19\x2d123c\x2d2ddb57a5c519.mount: Deactivated successfully.
Jul 6 23:33:40.927354 kubelet[2620]: I0706 23:33:40.927318 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74437c90-fa40-4d77-94d3-7c329ae02598-calico-apiserver-certs\") pod \"74437c90-fa40-4d77-94d3-7c329ae02598\" (UID: \"74437c90-fa40-4d77-94d3-7c329ae02598\") " Jul 6 23:33:40.927354 kubelet[2620]: I0706 23:33:40.927362 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzkjw\" (UniqueName: \"kubernetes.io/projected/74437c90-fa40-4d77-94d3-7c329ae02598-kube-api-access-pzkjw\") pod \"74437c90-fa40-4d77-94d3-7c329ae02598\" (UID: \"74437c90-fa40-4d77-94d3-7c329ae02598\") " Jul 6 23:33:40.931243 kubelet[2620]: I0706 23:33:40.931185 2620 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74437c90-fa40-4d77-94d3-7c329ae02598-kube-api-access-pzkjw" (OuterVolumeSpecName: "kube-api-access-pzkjw") pod "74437c90-fa40-4d77-94d3-7c329ae02598" (UID: "74437c90-fa40-4d77-94d3-7c329ae02598"). InnerVolumeSpecName "kube-api-access-pzkjw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:33:40.933287 systemd[1]: var-lib-kubelet-pods-74437c90\x2dfa40\x2d4d77\x2d94d3\x2d7c329ae02598-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpzkjw.mount: Deactivated successfully. Jul 6 23:33:40.933395 systemd[1]: var-lib-kubelet-pods-74437c90\x2dfa40\x2d4d77\x2d94d3\x2d7c329ae02598-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 6 23:33:40.934066 kubelet[2620]: I0706 23:33:40.934012 2620 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74437c90-fa40-4d77-94d3-7c329ae02598-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "74437c90-fa40-4d77-94d3-7c329ae02598" (UID: "74437c90-fa40-4d77-94d3-7c329ae02598"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 6 23:33:41.028537 kubelet[2620]: I0706 23:33:41.028483 2620 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74437c90-fa40-4d77-94d3-7c329ae02598-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 6 23:33:41.028762 kubelet[2620]: I0706 23:33:41.028614 2620 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzkjw\" (UniqueName: \"kubernetes.io/projected/74437c90-fa40-4d77-94d3-7c329ae02598-kube-api-access-pzkjw\") on node \"localhost\" DevicePath \"\""
Jul 6 23:33:41.275457 kubelet[2620]: I0706 23:33:41.275384 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:33:41.277258 kubelet[2620]: E0706 23:33:41.276439 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:33:41.279787 kubelet[2620]: I0706 23:33:41.277919 2620 scope.go:117] "RemoveContainer" containerID="7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1"
Jul 6 23:33:41.281774 kubelet[2620]: I0706 23:33:41.281352 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lw88t" podStartSLOduration=23.254178627 podStartE2EDuration="27.281315151s" podCreationTimestamp="2025-07-06 23:33:14 +0000 UTC" firstStartedPulling="2025-07-06 23:33:36.214077515 +0000 UTC m=+44.440967372" lastFinishedPulling="2025-07-06 23:33:40.241213999 +0000 UTC m=+48.468103896" observedRunningTime="2025-07-06 23:33:41.277220214 +0000 UTC m=+49.504110071" watchObservedRunningTime="2025-07-06 23:33:41.281315151 +0000 UTC m=+49.508205368"
Jul 6 23:33:41.285999 containerd[1519]: time="2025-07-06T23:33:41.285939501Z" level=info msg="RemoveContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\""
Jul 6 23:33:41.289809 systemd[1]: Removed slice kubepods-besteffort-pod74437c90_fa40_4d77_94d3_7c329ae02598.slice - libcontainer container kubepods-besteffort-pod74437c90_fa40_4d77_94d3_7c329ae02598.slice.
Jul 6 23:33:41.289931 systemd[1]: kubepods-besteffort-pod74437c90_fa40_4d77_94d3_7c329ae02598.slice: Consumed 1.420s CPU time, 26.8M memory peak.
Jul 6 23:33:41.299324 containerd[1519]: time="2025-07-06T23:33:41.299259135Z" level=info msg="RemoveContainer for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" returns successfully"
Jul 6 23:33:41.302149 kubelet[2620]: I0706 23:33:41.301899 2620 scope.go:117] "RemoveContainer" containerID="7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1"
Jul 6 23:33:41.302294 containerd[1519]: time="2025-07-06T23:33:41.302235398Z" level=error msg="ContainerStatus for \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\": not found"
Jul 6 23:33:41.302463 kubelet[2620]: E0706 23:33:41.302417 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\": not found" containerID="7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1"
Jul 6 23:33:41.313805 kubelet[2620]: I0706 23:33:41.313645 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1"} err="failed to get container status \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d978f78f5ae342063e73d3008aca0ddc6f45cc973fe24f4570a7830ced871a1\": not found"
Jul 6 23:33:41.441271 containerd[1519]: time="2025-07-06T23:33:41.441234569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\" id:\"050037095009aaa944f9e08d66326b4406962e1140a68e24de6954df6343a9b0\" pid:5462 exit_status:1 exited_at:{seconds:1751844821 nanos:440830448}"
Jul 6 23:33:41.871459 kubelet[2620]: I0706 23:33:41.871403 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74437c90-fa40-4d77-94d3-7c329ae02598" path="/var/lib/kubelet/pods/74437c90-fa40-4d77-94d3-7c329ae02598/volumes"
Jul 6 23:33:42.372121 containerd[1519]: time="2025-07-06T23:33:42.372083004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\" id:\"c1baa314d0408ece1e0f80ce65e944cb7a603005a05a420aa0f0ab18bd108c0f\" pid:5490 exit_status:1 exited_at:{seconds:1751844822 nanos:371806096}"
Jul 6 23:33:42.495190 containerd[1519]: time="2025-07-06T23:33:42.495126194Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1751844820 nanos:585195000}"
Jul 6 23:33:43.494917 containerd[1519]: time="2025-07-06T23:33:43.494866919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:43.495827 containerd[1519]: time="2025-07-06T23:33:43.495794210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 6 23:33:43.497475 containerd[1519]: time="2025-07-06T23:33:43.497238792Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:43.499436 containerd[1519]: time="2025-07-06T23:33:43.499406764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:43.500047 containerd[1519]: time="2025-07-06T23:33:43.500020944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.258413024s"
Jul 6 23:33:43.500115 containerd[1519]: time="2025-07-06T23:33:43.500053987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 6 23:33:43.501218 containerd[1519]: time="2025-07-06T23:33:43.501188298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 6 23:33:43.518580 containerd[1519]: time="2025-07-06T23:33:43.518539637Z" level=info msg="CreateContainer within sandbox \"5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 6 23:33:43.532499 containerd[1519]: time="2025-07-06T23:33:43.531871222Z" level=info msg="Container 03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:33:43.688489 containerd[1519]: time="2025-07-06T23:33:43.688447873Z" level=info msg="CreateContainer within sandbox \"5a5e0cb22649b9b4f0bbadd8085ca15618a4ab38d5150b25409a01a150289400\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\""
Jul 6 23:33:43.689284 containerd[1519]: time="2025-07-06T23:33:43.689125540Z" level=info msg="StartContainer for \"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\""
Jul 6 23:33:43.690442 containerd[1519]: time="2025-07-06T23:33:43.690371542Z" level=info msg="connecting to shim 03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71" address="unix:///run/containerd/s/9f76a9e16c105b734e273904239292c5a15f04a2ed660c5cd5708b743e7df5f3" protocol=ttrpc version=3
Jul 6 23:33:43.723990 systemd[1]: Started cri-containerd-03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71.scope - libcontainer container 03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71.
Jul 6 23:33:43.795906 containerd[1519]: time="2025-07-06T23:33:43.795384744Z" level=info msg="StartContainer for \"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\" returns successfully"
Jul 6 23:33:44.102208 kubelet[2620]: I0706 23:33:44.102092 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:33:44.211507 containerd[1519]: time="2025-07-06T23:33:44.211471085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\" id:\"68c1cfe4408aec539419450bb100b287feaeb23814736f4931093fc84ed6a627\" pid:5564 exited_at:{seconds:1751844824 nanos:211164776}"
Jul 6 23:33:44.311577 kubelet[2620]: I0706 23:33:44.311505 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66c4d7674c-qx6wn" podStartSLOduration=24.127663096 podStartE2EDuration="30.311476587s" podCreationTimestamp="2025-07-06 23:33:14 +0000 UTC" firstStartedPulling="2025-07-06 23:33:37.317205111 +0000 UTC m=+45.544095008" lastFinishedPulling="2025-07-06 23:33:43.501018602 +0000 UTC m=+51.727908499" observedRunningTime="2025-07-06 23:33:44.309917517 +0000 UTC m=+52.536807414" watchObservedRunningTime="2025-07-06 23:33:44.311476587 +0000 UTC m=+52.538366484"
Jul 6 23:33:44.336256 containerd[1519]: time="2025-07-06T23:33:44.336137560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\" id:\"86123571df52bd19e5bd692639770e8ecb291e343e67c785c741df1ed455bb58\" pid:5617 exited_at:{seconds:1751844824 nanos:335329482}"
Jul 6 23:33:44.351121 containerd[1519]: time="2025-07-06T23:33:44.351077997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\" id:\"af95d0a0681a20668dacef60bcab9c9cedc7f2089b506c5651a04b1ec16b0613\" pid:5593 exit_status:1 exited_at:{seconds:1751844824 nanos:350725323}"
Jul 6 23:33:45.074679 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:36424.service - OpenSSH per-connection server daemon (10.0.0.1:36424).
Jul 6 23:33:45.167466 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 36424 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:45.169429 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:45.181862 systemd-logind[1497]: New session 10 of user core.
Jul 6 23:33:45.191010 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:33:45.390452 containerd[1519]: time="2025-07-06T23:33:45.390333363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:45.391602 containerd[1519]: time="2025-07-06T23:33:45.391349739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 6 23:33:45.392534 containerd[1519]: time="2025-07-06T23:33:45.392498968Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:45.395034 containerd[1519]: time="2025-07-06T23:33:45.395002645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:45.395759 containerd[1519]: time="2025-07-06T23:33:45.395730794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.894509772s"
Jul 6 23:33:45.395942 containerd[1519]: time="2025-07-06T23:33:45.395883408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 6 23:33:45.399182 containerd[1519]: time="2025-07-06T23:33:45.399144757Z" level=info msg="CreateContainer within sandbox \"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 6 23:33:45.413993 containerd[1519]: time="2025-07-06T23:33:45.413950638Z" level=info msg="Container ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:33:45.416613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490866521.mount: Deactivated successfully.
Jul 6 23:33:45.426034 containerd[1519]: time="2025-07-06T23:33:45.425986136Z" level=info msg="CreateContainer within sandbox \"8c7ef8b32fd44aeb881d02f2a89364477bb2ed2a63d6fcce630cddd860f16e7b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3\""
Jul 6 23:33:45.426523 containerd[1519]: time="2025-07-06T23:33:45.426493624Z" level=info msg="StartContainer for \"ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3\""
Jul 6 23:33:45.428498 containerd[1519]: time="2025-07-06T23:33:45.428458290Z" level=info msg="connecting to shim ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3" address="unix:///run/containerd/s/5bc4181dbabe6b2e63a9d971dc134894ed1f0414ece01845eae1f11bc5b60612" protocol=ttrpc version=3
Jul 6 23:33:45.440006 sshd[5641]: Connection closed by 10.0.0.1 port 36424
Jul 6 23:33:45.440001 sshd-session[5637]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:45.448350 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:36424.service: Deactivated successfully.
Jul 6 23:33:45.451014 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:33:45.452253 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:33:45.471042 systemd[1]: Started cri-containerd-ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3.scope - libcontainer container ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3.
Jul 6 23:33:45.473063 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438).
Jul 6 23:33:45.473883 systemd-logind[1497]: Removed session 10.
Jul 6 23:33:45.528762 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:45.529435 containerd[1519]: time="2025-07-06T23:33:45.529309112Z" level=info msg="StartContainer for \"ef66b31885abc3920e0088dfeb03a7608be3ae63645f7fbf0db196f35e12cbc3\" returns successfully"
Jul 6 23:33:45.530456 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:45.536044 systemd-logind[1497]: New session 11 of user core.
Jul 6 23:33:45.543078 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:33:45.799071 sshd[5692]: Connection closed by 10.0.0.1 port 36438
Jul 6 23:33:45.799264 sshd-session[5673]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:45.813560 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:36438.service: Deactivated successfully.
Jul 6 23:33:45.821631 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:33:45.823064 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:33:45.828445 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:36446.service - OpenSSH per-connection server daemon (10.0.0.1:36446).
Jul 6 23:33:45.834614 systemd-logind[1497]: Removed session 11.
Jul 6 23:33:45.899609 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 36446 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:45.901308 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:45.907572 systemd-logind[1497]: New session 12 of user core.
Jul 6 23:33:45.916047 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:33:46.001353 kubelet[2620]: I0706 23:33:46.001303 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 6 23:33:46.014038 kubelet[2620]: I0706 23:33:46.013987 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 6 23:33:46.122077 sshd[5711]: Connection closed by 10.0.0.1 port 36446
Jul 6 23:33:46.122582 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:46.126425 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:36446.service: Deactivated successfully.
Jul 6 23:33:46.129034 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:33:46.130600 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:33:46.132217 systemd-logind[1497]: Removed session 12.
Jul 6 23:33:46.373869 kubelet[2620]: I0706 23:33:46.373315 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9775h" podStartSLOduration=22.244161916 podStartE2EDuration="32.373297249s" podCreationTimestamp="2025-07-06 23:33:14 +0000 UTC" firstStartedPulling="2025-07-06 23:33:35.267586474 +0000 UTC m=+43.494476371" lastFinishedPulling="2025-07-06 23:33:45.396721807 +0000 UTC m=+53.623611704" observedRunningTime="2025-07-06 23:33:46.372992061 +0000 UTC m=+54.599881958" watchObservedRunningTime="2025-07-06 23:33:46.373297249 +0000 UTC m=+54.600187146"
Jul 6 23:33:51.139828 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462).
Jul 6 23:33:51.217730 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:51.220374 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:51.225329 systemd-logind[1497]: New session 13 of user core.
Jul 6 23:33:51.243629 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:33:51.394485 sshd[5728]: Connection closed by 10.0.0.1 port 36462
Jul 6 23:33:51.395052 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:51.406648 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:36462.service: Deactivated successfully.
Jul 6 23:33:51.409472 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:33:51.410448 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:33:51.412501 systemd-logind[1497]: Removed session 13.
Jul 6 23:33:51.414212 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:36466.service - OpenSSH per-connection server daemon (10.0.0.1:36466).
Jul 6 23:33:51.473415 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 36466 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:51.475077 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:51.479623 systemd-logind[1497]: New session 14 of user core.
Jul 6 23:33:51.489950 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:33:51.746988 sshd[5744]: Connection closed by 10.0.0.1 port 36466
Jul 6 23:33:51.747606 sshd-session[5741]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:51.761391 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:36466.service: Deactivated successfully.
Jul 6 23:33:51.763146 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:33:51.763968 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:33:51.767477 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:36468.service - OpenSSH per-connection server daemon (10.0.0.1:36468).
Jul 6 23:33:51.768204 systemd-logind[1497]: Removed session 14.
Jul 6 23:33:51.825726 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 36468 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:51.827287 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:51.833863 systemd-logind[1497]: New session 15 of user core.
Jul 6 23:33:51.842945 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:33:51.844986 containerd[1519]: time="2025-07-06T23:33:51.844353282Z" level=info msg="StopPodSandbox for \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\""
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.908 [WARNING][5769] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.908 [INFO][5769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.909 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" iface="eth0" netns=""
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.909 [INFO][5769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.909 [INFO][5769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.937 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.938 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.938 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.950 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.950 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.955 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:33:51.959759 containerd[1519]: 2025-07-06 23:33:51.958 [INFO][5769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:51.960452 containerd[1519]: time="2025-07-06T23:33:51.959804039Z" level=info msg="TearDown network for sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" successfully"
Jul 6 23:33:51.960452 containerd[1519]: time="2025-07-06T23:33:51.959825721Z" level=info msg="StopPodSandbox for \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" returns successfully"
Jul 6 23:33:51.960452 containerd[1519]: time="2025-07-06T23:33:51.960337405Z" level=info msg="RemovePodSandbox for \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\""
Jul 6 23:33:51.960452 containerd[1519]: time="2025-07-06T23:33:51.960364688Z" level=info msg="Forcibly stopping sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\""
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:51.996 [WARNING][5804] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" WorkloadEndpoint="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:51.997 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:51.997 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" iface="eth0" netns=""
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:51.997 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:51.997 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.018 [INFO][5813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.018 [INFO][5813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.018 [INFO][5813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.030 [WARNING][5813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.030 [INFO][5813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" HandleID="k8s-pod-network.abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e" Workload="localhost-k8s-calico--apiserver--567577c55--7x44x-eth0"
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.032 [INFO][5813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:33:52.036660 containerd[1519]: 2025-07-06 23:33:52.034 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e"
Jul 6 23:33:52.036660 containerd[1519]: time="2025-07-06T23:33:52.036399782Z" level=info msg="TearDown network for sandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" successfully"
Jul 6 23:33:52.039341 containerd[1519]: time="2025-07-06T23:33:52.039308632Z" level=info msg="Ensure that sandbox abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e in task-service has been cleanup successfully"
Jul 6 23:33:52.070746 containerd[1519]: time="2025-07-06T23:33:52.070242890Z" level=info msg="RemovePodSandbox \"abe2d54df74ecf273e89600db9290dcce8ad84a3d0431f005135a8108671083e\" returns successfully"
Jul 6 23:33:52.709462 sshd[5759]: Connection closed by 10.0.0.1 port 36468
Jul 6 23:33:52.712048 sshd-session[5756]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:52.722049 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:36468.service: Deactivated successfully.
Jul 6 23:33:52.727529 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:33:52.729044 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:33:52.738361 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606).
Jul 6 23:33:52.740810 systemd-logind[1497]: Removed session 15.
Jul 6 23:33:52.796434 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:52.798128 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:52.803258 systemd-logind[1497]: New session 16 of user core.
Jul 6 23:33:52.812988 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:33:53.187544 sshd[5835]: Connection closed by 10.0.0.1 port 36606
Jul 6 23:33:53.188122 sshd-session[5831]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:53.203302 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:36606.service: Deactivated successfully.
Jul 6 23:33:53.206750 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:33:53.207581 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:33:53.213188 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:36614.service - OpenSSH per-connection server daemon (10.0.0.1:36614).
Jul 6 23:33:53.215085 systemd-logind[1497]: Removed session 16.
Jul 6 23:33:53.278631 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:53.280483 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:53.285389 systemd-logind[1497]: New session 17 of user core.
Jul 6 23:33:53.293016 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:33:53.450877 sshd[5849]: Connection closed by 10.0.0.1 port 36614
Jul 6 23:33:53.451177 sshd-session[5847]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:53.454798 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:36614.service: Deactivated successfully.
Jul 6 23:33:53.458915 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:33:53.459949 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:33:53.461744 systemd-logind[1497]: Removed session 17.
Jul 6 23:33:58.468278 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:36630.service - OpenSSH per-connection server daemon (10.0.0.1:36630).
Jul 6 23:33:58.536252 sshd[5868]: Accepted publickey for core from 10.0.0.1 port 36630 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:33:58.538406 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:33:58.550670 systemd-logind[1497]: New session 18 of user core.
Jul 6 23:33:58.559067 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:33:58.747557 sshd[5872]: Connection closed by 10.0.0.1 port 36630
Jul 6 23:33:58.747910 sshd-session[5868]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:58.752456 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:33:58.752595 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:36630.service: Deactivated successfully.
Jul 6 23:33:58.755393 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:33:58.757096 systemd-logind[1497]: Removed session 18.
Jul 6 23:34:02.867463 kubelet[2620]: E0706 23:34:02.867368 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:34:03.764131 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:40288.service - OpenSSH per-connection server daemon (10.0.0.1:40288).
Jul 6 23:34:03.829054 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 40288 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:34:03.830744 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:34:03.835228 systemd-logind[1497]: New session 19 of user core.
Jul 6 23:34:03.850965 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:34:03.977643 sshd[5894]: Connection closed by 10.0.0.1 port 40288
Jul 6 23:34:03.978102 sshd-session[5892]: pam_unix(sshd:session): session closed for user core
Jul 6 23:34:03.983947 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:40288.service: Deactivated successfully.
Jul 6 23:34:03.986083 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:34:03.988714 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:34:03.990325 systemd-logind[1497]: Removed session 19.
Jul 6 23:34:04.867387 kubelet[2620]: E0706 23:34:04.867350 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:34:07.867261 kubelet[2620]: E0706 23:34:07.867201 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:34:08.994328 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:40300.service - OpenSSH per-connection server daemon (10.0.0.1:40300).
Jul 6 23:34:09.068083 sshd[5910]: Accepted publickey for core from 10.0.0.1 port 40300 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:34:09.069602 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:34:09.074071 systemd-logind[1497]: New session 20 of user core.
Jul 6 23:34:09.083003 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:34:09.245219 sshd[5912]: Connection closed by 10.0.0.1 port 40300
Jul 6 23:34:09.245279 sshd-session[5910]: pam_unix(sshd:session): session closed for user core
Jul 6 23:34:09.250316 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:40300.service: Deactivated successfully.
Jul 6 23:34:09.254189 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:34:09.255084 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:34:09.257432 systemd-logind[1497]: Removed session 20.
Jul 6 23:34:12.362965 containerd[1519]: time="2025-07-06T23:34:12.362784848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d9904a0a5de4577144561efea24f91338b5049cc2ec48df098fb3b756a05c66\" id:\"60eb48252e6531cef401876f92954bd959dab41a2712d66280460b8b8c6820ee\" pid:5937 exited_at:{seconds:1751844852 nanos:362464942}"
Jul 6 23:34:13.868199 kubelet[2620]: E0706 23:34:13.868074 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:34:14.255112 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324).
Jul 6 23:34:14.305413 containerd[1519]: time="2025-07-06T23:34:14.305343094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dfe2a76723eab5900263321796f5f958ab6d603b274de58fc84224ca729875b\" id:\"d1abfef6bfcfd9e53024beab3709f716389b4e7e48cd5bdcd9e9c2cd7333f88a\" pid:5963 exited_at:{seconds:1751844854 nanos:304366411}"
Jul 6 23:34:14.318474 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:ofwWcTGfluC09fJifL7FeMrGUPIWpa/IRyJG3wm1av4
Jul 6 23:34:14.320011 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:34:14.325971 systemd-logind[1497]: New session 21 of user core.
Jul 6 23:34:14.335998 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:34:14.345003 containerd[1519]: time="2025-07-06T23:34:14.344854412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\" id:\"a0c0420e125faa7fa23f695a70ba68e44a5c6bdabcd32d4095588565ad4cf331\" pid:5990 exited_at:{seconds:1751844854 nanos:344610221}"
Jul 6 23:34:14.479448 sshd[5996]: Connection closed by 10.0.0.1 port 60324
Jul 6 23:34:14.479789 sshd-session[5975]: pam_unix(sshd:session): session closed for user core
Jul 6 23:34:14.483421 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:60324.service: Deactivated successfully.
Jul 6 23:34:14.485244 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:34:14.485987 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:34:14.487107 systemd-logind[1497]: Removed session 21.
Jul 6 23:34:14.828025 containerd[1519]: time="2025-07-06T23:34:14.827986759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03bcf0f007cb55968d7cbb381cfd4eb3202352590a2179b874c438be2aa26b71\" id:\"8864b4005036c38f0d77a091067311bdcf64b7feece6f467a6faee35e7c88cf3\" pid:6031 exited_at:{seconds:1751844854 nanos:827664052}"