Jul 15 04:32:33.803117 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 04:32:33.803138 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025
Jul 15 04:32:33.803148 kernel: KASLR enabled
Jul 15 04:32:33.803153 kernel: efi: EFI v2.7 by EDK II
Jul 15 04:32:33.803159 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 15 04:32:33.803164 kernel: random: crng init done
Jul 15 04:32:33.803171 kernel: secureboot: Secure boot disabled
Jul 15 04:32:33.803176 kernel: ACPI: Early table checksum verification disabled
Jul 15 04:32:33.803182 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 15 04:32:33.803190 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 04:32:33.803196 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803202 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803207 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803213 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803220 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803227 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803234 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803239 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803245 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 04:32:33.803251 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 04:32:33.803257 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 04:32:33.803263 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:32:33.803269 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 15 04:32:33.803275 kernel: Zone ranges:
Jul 15 04:32:33.803281 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:32:33.803288 kernel: DMA32 empty
Jul 15 04:32:33.803294 kernel: Normal empty
Jul 15 04:32:33.803300 kernel: Device empty
Jul 15 04:32:33.803306 kernel: Movable zone start for each node
Jul 15 04:32:33.803312 kernel: Early memory node ranges
Jul 15 04:32:33.803318 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 15 04:32:33.803324 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 15 04:32:33.803330 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 15 04:32:33.803336 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 15 04:32:33.803342 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 15 04:32:33.803348 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 15 04:32:33.803354 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 15 04:32:33.803361 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 15 04:32:33.803367 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 15 04:32:33.803373 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 15 04:32:33.803382 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 15 04:32:33.803388 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 15 04:32:33.803394 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 04:32:33.803402 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 04:32:33.803408 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 04:32:33.803415 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 15 04:32:33.803421 kernel: psci: probing for conduit method from ACPI.
Jul 15 04:32:33.803427 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 04:32:33.803434 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 04:32:33.803440 kernel: psci: Trusted OS migration not required
Jul 15 04:32:33.803454 kernel: psci: SMC Calling Convention v1.1
Jul 15 04:32:33.803471 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 04:32:33.803478 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 04:32:33.803487 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 04:32:33.803494 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 04:32:33.803500 kernel: Detected PIPT I-cache on CPU0
Jul 15 04:32:33.803506 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 04:32:33.803513 kernel: CPU features: detected: Spectre-v4
Jul 15 04:32:33.803519 kernel: CPU features: detected: Spectre-BHB
Jul 15 04:32:33.803525 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 04:32:33.803532 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 04:32:33.803538 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 04:32:33.803545 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 04:32:33.803551 kernel: alternatives: applying boot alternatives
Jul 15 04:32:33.803558 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:32:33.803566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 04:32:33.803573 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 04:32:33.803580 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 04:32:33.803586 kernel: Fallback order for Node 0: 0
Jul 15 04:32:33.803593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 04:32:33.803599 kernel: Policy zone: DMA
Jul 15 04:32:33.803606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 04:32:33.803612 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 04:32:33.803618 kernel: software IO TLB: area num 4.
Jul 15 04:32:33.803625 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 04:32:33.803631 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 15 04:32:33.803639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 04:32:33.803646 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 04:32:33.803653 kernel: rcu: RCU event tracing is enabled.
Jul 15 04:32:33.803659 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 04:32:33.803666 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 04:32:33.803672 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 04:32:33.803679 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 04:32:33.803685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 04:32:33.803692 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 04:32:33.803698 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 04:32:33.803705 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 04:32:33.803712 kernel: GICv3: 256 SPIs implemented
Jul 15 04:32:33.803719 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 04:32:33.803725 kernel: Root IRQ handler: gic_handle_irq
Jul 15 04:32:33.803732 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 04:32:33.803738 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 04:32:33.803744 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 04:32:33.803751 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 04:32:33.803757 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 04:32:33.803764 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 04:32:33.803770 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 04:32:33.803777 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 04:32:33.803783 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 04:32:33.803791 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:32:33.803797 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 04:32:33.803804 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 04:32:33.803811 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 04:32:33.803817 kernel: arm-pv: using stolen time PV
Jul 15 04:32:33.803824 kernel: Console: colour dummy device 80x25
Jul 15 04:32:33.803830 kernel: ACPI: Core revision 20240827
Jul 15 04:32:33.803837 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 04:32:33.803844 kernel: pid_max: default: 32768 minimum: 301
Jul 15 04:32:33.803850 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 04:32:33.803858 kernel: landlock: Up and running.
Jul 15 04:32:33.803864 kernel: SELinux: Initializing.
Jul 15 04:32:33.803871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:32:33.803878 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:32:33.803884 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 04:32:33.803891 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 04:32:33.803898 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 04:32:33.803904 kernel: Remapping and enabling EFI services.
Jul 15 04:32:33.803911 kernel: smp: Bringing up secondary CPUs ...
Jul 15 04:32:33.803923 kernel: Detected PIPT I-cache on CPU1
Jul 15 04:32:33.803930 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 04:32:33.803937 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 04:32:33.803945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:32:33.803952 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 04:32:33.803959 kernel: Detected PIPT I-cache on CPU2
Jul 15 04:32:33.803966 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 04:32:33.803973 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 04:32:33.803981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:32:33.803988 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 04:32:33.803995 kernel: Detected PIPT I-cache on CPU3
Jul 15 04:32:33.804002 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 04:32:33.804009 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 04:32:33.804016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 04:32:33.804023 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 04:32:33.804030 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 04:32:33.804037 kernel: SMP: Total of 4 processors activated.
Jul 15 04:32:33.804045 kernel: CPU: All CPU(s) started at EL1
Jul 15 04:32:33.804052 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 04:32:33.804059 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 04:32:33.804066 kernel: CPU features: detected: Common not Private translations
Jul 15 04:32:33.804072 kernel: CPU features: detected: CRC32 instructions
Jul 15 04:32:33.804079 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 04:32:33.804086 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 04:32:33.804093 kernel: CPU features: detected: LSE atomic instructions
Jul 15 04:32:33.804100 kernel: CPU features: detected: Privileged Access Never
Jul 15 04:32:33.804107 kernel: CPU features: detected: RAS Extension Support
Jul 15 04:32:33.804115 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 04:32:33.804122 kernel: alternatives: applying system-wide alternatives
Jul 15 04:32:33.804129 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 04:32:33.804136 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 15 04:32:33.804143 kernel: devtmpfs: initialized
Jul 15 04:32:33.804150 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 04:32:33.804157 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 04:32:33.804164 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 04:32:33.804173 kernel: 0 pages in range for non-PLT usage
Jul 15 04:32:33.804180 kernel: 508448 pages in range for PLT usage
Jul 15 04:32:33.804186 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 04:32:33.804193 kernel: SMBIOS 3.0.0 present.
Jul 15 04:32:33.804200 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 04:32:33.804207 kernel: DMI: Memory slots populated: 1/1
Jul 15 04:32:33.804214 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 04:32:33.804221 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 04:32:33.804228 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 04:32:33.804236 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 04:32:33.804243 kernel: audit: initializing netlink subsys (disabled)
Jul 15 04:32:33.804250 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 15 04:32:33.804257 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 04:32:33.804264 kernel: cpuidle: using governor menu
Jul 15 04:32:33.804271 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 04:32:33.804278 kernel: ASID allocator initialised with 32768 entries
Jul 15 04:32:33.804285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 04:32:33.804292 kernel: Serial: AMBA PL011 UART driver
Jul 15 04:32:33.804300 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 04:32:33.804307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 04:32:33.804314 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 04:32:33.804321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 04:32:33.804327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 04:32:33.804334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 04:32:33.804341 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 04:32:33.804348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 04:32:33.804355 kernel: ACPI: Added _OSI(Module Device)
Jul 15 04:32:33.804363 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 04:32:33.804370 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 04:32:33.804377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 04:32:33.804384 kernel: ACPI: Interpreter enabled
Jul 15 04:32:33.804391 kernel: ACPI: Using GIC for interrupt routing
Jul 15 04:32:33.804398 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 04:32:33.804405 kernel: ACPI: CPU0 has been hot-added
Jul 15 04:32:33.804411 kernel: ACPI: CPU1 has been hot-added
Jul 15 04:32:33.804418 kernel: ACPI: CPU2 has been hot-added
Jul 15 04:32:33.804425 kernel: ACPI: CPU3 has been hot-added
Jul 15 04:32:33.804434 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 04:32:33.804441 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 04:32:33.804452 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 04:32:33.804641 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 04:32:33.804709 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 04:32:33.804769 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 04:32:33.804828 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 04:32:33.804888 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 04:32:33.804898 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 04:32:33.804905 kernel: PCI host bridge to bus 0000:00
Jul 15 04:32:33.804973 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 04:32:33.805029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 04:32:33.805083 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 04:32:33.805136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 04:32:33.805213 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 04:32:33.805284 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 04:32:33.805345 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 04:32:33.805405 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 04:32:33.805490 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 04:32:33.805553 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 04:32:33.805614 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 04:32:33.805677 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 04:32:33.805732 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 04:32:33.805785 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 04:32:33.805838 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 04:32:33.805847 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 04:32:33.805855 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 04:32:33.805862 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 04:32:33.805871 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 04:32:33.805878 kernel: iommu: Default domain type: Translated
Jul 15 04:32:33.805885 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 04:32:33.805892 kernel: efivars: Registered efivars operations
Jul 15 04:32:33.805899 kernel: vgaarb: loaded
Jul 15 04:32:33.805905 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 04:32:33.805912 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 04:32:33.805919 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 04:32:33.805926 kernel: pnp: PnP ACPI init
Jul 15 04:32:33.805999 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 04:32:33.806009 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 04:32:33.806016 kernel: NET: Registered PF_INET protocol family
Jul 15 04:32:33.806023 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 04:32:33.806030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 04:32:33.806037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 04:32:33.806044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 04:32:33.806051 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 04:32:33.806060 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 04:32:33.806067 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:32:33.806074 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:32:33.806081 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 04:32:33.806088 kernel: PCI: CLS 0 bytes, default 64
Jul 15 04:32:33.806095 kernel: kvm [1]: HYP mode not available
Jul 15 04:32:33.806102 kernel: Initialise system trusted keyrings
Jul 15 04:32:33.806109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 04:32:33.806116 kernel: Key type asymmetric registered
Jul 15 04:32:33.806124 kernel: Asymmetric key parser 'x509' registered
Jul 15 04:32:33.806131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 04:32:33.806138 kernel: io scheduler mq-deadline registered
Jul 15 04:32:33.806145 kernel: io scheduler kyber registered
Jul 15 04:32:33.806152 kernel: io scheduler bfq registered
Jul 15 04:32:33.806159 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 04:32:33.806166 kernel: ACPI: button: Power Button [PWRB]
Jul 15 04:32:33.806174 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 04:32:33.806236 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 04:32:33.806247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 04:32:33.806254 kernel: thunder_xcv, ver 1.0
Jul 15 04:32:33.806262 kernel: thunder_bgx, ver 1.0
Jul 15 04:32:33.806269 kernel: nicpf, ver 1.0
Jul 15 04:32:33.806276 kernel: nicvf, ver 1.0
Jul 15 04:32:33.806352 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 04:32:33.806415 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:32:33 UTC (1752553953)
Jul 15 04:32:33.806428 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 04:32:33.806439 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 04:32:33.806452 kernel: NET: Registered PF_INET6 protocol family
Jul 15 04:32:33.806473 kernel: watchdog: NMI not fully supported
Jul 15 04:32:33.806481 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 04:32:33.806488 kernel: Segment Routing with IPv6
Jul 15 04:32:33.806495 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 04:32:33.806502 kernel: NET: Registered PF_PACKET protocol family
Jul 15 04:32:33.806508 kernel: Key type dns_resolver registered
Jul 15 04:32:33.806515 kernel: registered taskstats version 1
Jul 15 04:32:33.806522 kernel: Loading compiled-in X.509 certificates
Jul 15 04:32:33.806534 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2'
Jul 15 04:32:33.806541 kernel: Demotion targets for Node 0: null
Jul 15 04:32:33.806548 kernel: Key type .fscrypt registered
Jul 15 04:32:33.806559 kernel: Key type fscrypt-provisioning registered
Jul 15 04:32:33.806579 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 04:32:33.806588 kernel: ima: Allocated hash algorithm: sha1
Jul 15 04:32:33.806596 kernel: ima: No architecture policies found
Jul 15 04:32:33.806603 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 04:32:33.806612 kernel: clk: Disabling unused clocks
Jul 15 04:32:33.806619 kernel: PM: genpd: Disabling unused power domains
Jul 15 04:32:33.806632 kernel: Warning: unable to open an initial console.
Jul 15 04:32:33.806639 kernel: Freeing unused kernel memory: 39424K
Jul 15 04:32:33.806646 kernel: Run /init as init process
Jul 15 04:32:33.806653 kernel: with arguments:
Jul 15 04:32:33.806660 kernel: /init
Jul 15 04:32:33.806667 kernel: with environment:
Jul 15 04:32:33.806674 kernel: HOME=/
Jul 15 04:32:33.806681 kernel: TERM=linux
Jul 15 04:32:33.806689 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 04:32:33.806697 systemd[1]: Successfully made /usr/ read-only.
Jul 15 04:32:33.806707 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:32:33.806715 systemd[1]: Detected virtualization kvm.
Jul 15 04:32:33.806722 systemd[1]: Detected architecture arm64.
Jul 15 04:32:33.806729 systemd[1]: Running in initrd.
Jul 15 04:32:33.806736 systemd[1]: No hostname configured, using default hostname.
Jul 15 04:32:33.806746 systemd[1]: Hostname set to .
Jul 15 04:32:33.806753 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 04:32:33.806761 systemd[1]: Queued start job for default target initrd.target.
Jul 15 04:32:33.806768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:32:33.806776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:32:33.806784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 04:32:33.806792 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:32:33.806800 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 04:32:33.806810 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 04:32:33.806818 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 04:32:33.806826 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 04:32:33.806834 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:32:33.806841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:32:33.806849 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:32:33.806857 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:32:33.806866 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:32:33.806873 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:32:33.806881 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:32:33.806889 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:32:33.806897 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 04:32:33.806905 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 04:32:33.806913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:32:33.806921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:32:33.806930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:32:33.806938 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:32:33.806945 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 04:32:33.806953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:32:33.806961 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 04:32:33.806969 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 04:32:33.806977 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 04:32:33.806984 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:32:33.806992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:32:33.807001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:32:33.807009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:32:33.807017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 04:32:33.807025 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 04:32:33.807034 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:32:33.807058 systemd-journald[242]: Collecting audit messages is disabled.
Jul 15 04:32:33.807077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:32:33.807085 systemd-journald[242]: Journal started
Jul 15 04:32:33.807105 systemd-journald[242]: Runtime Journal (/run/log/journal/bea7e9fadc9342b9b9a1fdf643e0f5dd) is 6M, max 48.5M, 42.4M free.
Jul 15 04:32:33.794489 systemd-modules-load[244]: Inserted module 'overlay'
Jul 15 04:32:33.810097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 04:32:33.812021 systemd-modules-load[244]: Inserted module 'br_netfilter'
Jul 15 04:32:33.813665 kernel: Bridge firewalling registered
Jul 15 04:32:33.813681 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:32:33.814850 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:32:33.816076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:32:33.820073 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 04:32:33.821835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:32:33.823910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:32:33.831947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:32:33.840431 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 04:32:33.840507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:32:33.842072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:32:33.844691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:32:33.847875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:32:33.851161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 04:32:33.853372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:32:33.874220 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:32:33.888018 systemd-resolved[288]: Positive Trust Anchors:
Jul 15 04:32:33.888033 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:32:33.888063 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:32:33.892700 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 15 04:32:33.893595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:32:33.897216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:32:33.944482 kernel: SCSI subsystem initialized
Jul 15 04:32:33.949478 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 04:32:33.958511 kernel: iscsi: registered transport (tcp)
Jul 15 04:32:33.971483 kernel: iscsi: registered transport (qla4xxx)
Jul 15 04:32:33.971514 kernel: QLogic iSCSI HBA Driver
Jul 15 04:32:33.988577 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:32:34.010492 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:32:34.013072 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:32:34.058505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:32:34.060416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 04:32:34.125513 kernel: raid6: neonx8 gen() 15788 MB/s
Jul 15 04:32:34.142482 kernel: raid6: neonx4 gen() 15829 MB/s
Jul 15 04:32:34.159492 kernel: raid6: neonx2 gen() 13204 MB/s
Jul 15 04:32:34.176490 kernel: raid6: neonx1 gen() 10451 MB/s
Jul 15 04:32:34.193485 kernel: raid6: int64x8 gen() 6884 MB/s
Jul 15 04:32:34.210484 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 15 04:32:34.227480 kernel: raid6: int64x2 gen() 6062 MB/s
Jul 15 04:32:34.244568 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 15 04:32:34.244583 kernel: raid6: using algorithm neonx4 gen() 15829 MB/s
Jul 15 04:32:34.262535 kernel: raid6: .... xor() 12339 MB/s, rmw enabled
Jul 15 04:32:34.262553 kernel: raid6: using neon recovery algorithm
Jul 15 04:32:34.267945 kernel: xor: measuring software checksum speed
Jul 15 04:32:34.267966 kernel: 8regs : 21601 MB/sec
Jul 15 04:32:34.268661 kernel: 32regs : 21584 MB/sec
Jul 15 04:32:34.269912 kernel: arm64_neon : 28070 MB/sec
Jul 15 04:32:34.269925 kernel: xor: using function: arm64_neon (28070 MB/sec)
Jul 15 04:32:34.325489 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 04:32:34.331436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:32:34.334064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:32:34.368186 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Jul 15 04:32:34.372307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:32:34.374619 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 04:32:34.405878 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Jul 15 04:32:34.427529 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:32:34.429904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:32:34.476701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:32:34.479009 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 04:32:34.527199 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 15 04:32:34.527387 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 04:32:34.535010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 04:32:34.535057 kernel: GPT:9289727 != 19775487
Jul 15 04:32:34.535068 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 04:32:34.535128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:32:34.538322 kernel: GPT:9289727 != 19775487
Jul 15 04:32:34.538348 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 04:32:34.538357 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:32:34.535258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:32:34.540625 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:32:34.542893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:32:34.559428 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 04:32:34.569367 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:32:34.570786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:32:34.579828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 04:32:34.592807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 04:32:34.599138 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 04:32:34.600393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 04:32:34.602798 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:32:34.605647 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:32:34.607858 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:32:34.610578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 04:32:34.612359 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 04:32:34.640269 disk-uuid[590]: Primary Header is updated.
Jul 15 04:32:34.640269 disk-uuid[590]: Secondary Entries is updated.
Jul 15 04:32:34.640269 disk-uuid[590]: Secondary Header is updated.
Jul 15 04:32:34.644488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:32:34.646748 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:32:35.657498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 04:32:35.659478 disk-uuid[593]: The operation has completed successfully.
Jul 15 04:32:35.683532 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 04:32:35.683625 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 04:32:35.707057 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 04:32:35.725901 sh[610]: Success
Jul 15 04:32:35.739906 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 04:32:35.739961 kernel: device-mapper: uevent: version 1.0.3
Jul 15 04:32:35.743494 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 04:32:35.751479 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 04:32:35.775411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 04:32:35.778057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 04:32:35.793279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 04:32:35.799916 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 04:32:35.801473 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622)
Jul 15 04:32:35.803655 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692
Jul 15 04:32:35.803683 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:32:35.803693 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 04:32:35.809266 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 04:32:35.810572 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:32:35.811993 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 04:32:35.812788 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 04:32:35.814292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 04:32:35.835631 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655)
Jul 15 04:32:35.835675 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:32:35.835685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:32:35.837128 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:32:35.842485 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:32:35.843756 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 04:32:35.846044 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 04:32:35.906228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:32:35.910602 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:32:35.948282 systemd-networkd[796]: lo: Link UP
Jul 15 04:32:35.948291 systemd-networkd[796]: lo: Gained carrier
Jul 15 04:32:35.949018 systemd-networkd[796]: Enumeration completed
Jul 15 04:32:35.949107 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:32:35.949711 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:32:35.949715 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:32:35.950637 systemd[1]: Reached target network.target - Network.
Jul 15 04:32:35.950676 systemd-networkd[796]: eth0: Link UP
Jul 15 04:32:35.950679 systemd-networkd[796]: eth0: Gained carrier
Jul 15 04:32:35.950687 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:32:35.979514 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 04:32:35.995806 ignition[702]: Ignition 2.21.0
Jul 15 04:32:35.995818 ignition[702]: Stage: fetch-offline
Jul 15 04:32:35.995848 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:35.995856 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:35.996033 ignition[702]: parsed url from cmdline: ""
Jul 15 04:32:35.996036 ignition[702]: no config URL provided
Jul 15 04:32:35.996040 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:32:35.996046 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:32:35.996069 ignition[702]: op(1): [started] loading QEMU firmware config module
Jul 15 04:32:35.996073 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 04:32:36.004642 ignition[702]: op(1): [finished] loading QEMU firmware config module
Jul 15 04:32:36.004663 ignition[702]: QEMU firmware config was not found. Ignoring...
Jul 15 04:32:36.043409 ignition[702]: parsing config with SHA512: e8e29a2941ffb2b79f47cf0fe908314561be8ec41898a8874352f77384a7dcdeaafb532f7893dcf2eaa4015f0c1bfec36f33aeb326de436b4f95565395e12f04
Jul 15 04:32:36.047395 unknown[702]: fetched base config from "system"
Jul 15 04:32:36.047410 unknown[702]: fetched user config from "qemu"
Jul 15 04:32:36.047851 ignition[702]: fetch-offline: fetch-offline passed
Jul 15 04:32:36.049936 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:32:36.047906 ignition[702]: Ignition finished successfully
Jul 15 04:32:36.051180 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 04:32:36.051963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 04:32:36.077220 ignition[810]: Ignition 2.21.0
Jul 15 04:32:36.077238 ignition[810]: Stage: kargs
Jul 15 04:32:36.077371 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:36.077380 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:36.079846 ignition[810]: kargs: kargs passed
Jul 15 04:32:36.079914 ignition[810]: Ignition finished successfully
Jul 15 04:32:36.081869 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 04:32:36.083862 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 04:32:36.103946 ignition[818]: Ignition 2.21.0
Jul 15 04:32:36.103964 ignition[818]: Stage: disks
Jul 15 04:32:36.104103 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:36.104112 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:36.105328 ignition[818]: disks: disks passed
Jul 15 04:32:36.108011 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 04:32:36.105390 ignition[818]: Ignition finished successfully
Jul 15 04:32:36.109234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 04:32:36.111520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 04:32:36.113217 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:32:36.115012 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:32:36.116925 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:32:36.119493 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 04:32:36.143778 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 04:32:36.148482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 04:32:36.152377 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 04:32:36.224483 kernel: EXT4-fs (vda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none.
Jul 15 04:32:36.224697 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 04:32:36.225886 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:32:36.228176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:32:36.229707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 04:32:36.230625 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 04:32:36.230664 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 04:32:36.230685 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:32:36.250968 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 04:32:36.253004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 04:32:36.260073 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (837)
Jul 15 04:32:36.260109 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:32:36.261193 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:32:36.261971 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:32:36.265164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:32:36.296759 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 04:32:36.299585 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Jul 15 04:32:36.302430 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 04:32:36.305728 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 04:32:36.370514 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 04:32:36.372429 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 04:32:36.373950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 04:32:36.388472 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:32:36.405490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 04:32:36.407374 ignition[950]: INFO : Ignition 2.21.0
Jul 15 04:32:36.407374 ignition[950]: INFO : Stage: mount
Jul 15 04:32:36.409607 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:36.409607 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:36.409607 ignition[950]: INFO : mount: mount passed
Jul 15 04:32:36.409607 ignition[950]: INFO : Ignition finished successfully
Jul 15 04:32:36.410193 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 04:32:36.412223 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 04:32:36.798483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 04:32:36.799948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:32:36.817267 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963)
Jul 15 04:32:36.817304 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:32:36.817314 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:32:36.818222 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 04:32:36.821277 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:32:36.851591 ignition[980]: INFO : Ignition 2.21.0
Jul 15 04:32:36.852577 ignition[980]: INFO : Stage: files
Jul 15 04:32:36.852577 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:36.852577 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:36.855462 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 04:32:36.855462 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 04:32:36.855462 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 04:32:36.859155 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 04:32:36.859155 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 04:32:36.859155 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 04:32:36.858648 unknown[980]: wrote ssh authorized keys file for user: core
Jul 15 04:32:36.864165 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 04:32:36.864165 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 15 04:32:36.901863 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 04:32:37.176599 systemd-networkd[796]: eth0: Gained IPv6LL
Jul 15 04:32:37.729771 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:32:37.731766 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:32:37.746254 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 15 04:32:38.117826 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 04:32:38.452215 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:32:38.452215 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 15 04:32:38.455849 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 04:32:38.471685 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 04:32:38.475592 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:32:38.477026 ignition[980]: INFO : files: files passed
Jul 15 04:32:38.477026 ignition[980]: INFO : Ignition finished successfully
Jul 15 04:32:38.478770 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 04:32:38.483564 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 04:32:38.486600 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 04:32:38.500079 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 04:32:38.500168 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 04:32:38.503099 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 04:32:38.504384 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:32:38.504384 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:32:38.507598 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:32:38.506680 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:32:38.508810 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 04:32:38.513295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 04:32:38.542268 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 04:32:38.542395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 04:32:38.544593 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 04:32:38.546374 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 04:32:38.548164 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 04:32:38.548916 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 04:32:38.562013 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:32:38.564284 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 04:32:38.584691 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:32:38.586942 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:32:38.588150 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 04:32:38.589894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 04:32:38.590003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:32:38.592399 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 04:32:38.594393 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 04:32:38.596033 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 04:32:38.597671 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:32:38.599573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 04:32:38.601538 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:32:38.603513 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 04:32:38.605336 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:32:38.607255 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 04:32:38.609161 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 04:32:38.610842 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 04:32:38.612293 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 04:32:38.612402 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:32:38.614713 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:32:38.616592 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:32:38.618548 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 04:32:38.619535 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:32:38.621551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 04:32:38.621658 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:32:38.624435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 04:32:38.624559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:32:38.626549 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 04:32:38.628225 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 04:32:38.628330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:32:38.630451 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 04:32:38.632062 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 04:32:38.633824 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 04:32:38.633906 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:32:38.636084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 04:32:38.636158 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:32:38.637797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 04:32:38.637908 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:32:38.639716 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 04:32:38.639811 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 04:32:38.642153 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 04:32:38.644499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 04:32:38.645515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 04:32:38.645630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:32:38.647739 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 04:32:38.647834 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:32:38.653342 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 04:32:38.654609 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 04:32:38.663042 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 04:32:38.666894 ignition[1035]: INFO : Ignition 2.21.0
Jul 15 04:32:38.666894 ignition[1035]: INFO : Stage: umount
Jul 15 04:32:38.668676 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:32:38.668676 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 04:32:38.668676 ignition[1035]: INFO : umount: umount passed
Jul 15 04:32:38.668676 ignition[1035]: INFO : Ignition finished successfully
Jul 15 04:32:38.670211 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 04:32:38.670339 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 04:32:38.672120 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 04:32:38.672221 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 04:32:38.674779 systemd[1]: Stopped target network.target - Network.
Jul 15 04:32:38.676216 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 04:32:38.676282 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 04:32:38.679384 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 04:32:38.679441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 04:32:38.680508 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 04:32:38.680561 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 04:32:38.682289 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 04:32:38.682328 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 04:32:38.684102 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 04:32:38.684150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 04:32:38.686019 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 04:32:38.687701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 04:32:38.690980 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 04:32:38.691085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 04:32:38.694092 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 04:32:38.694315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 04:32:38.694350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:32:38.698032 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:32:38.698978 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 04:32:38.699070 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 04:32:38.701972 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 04:32:38.702096 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 04:32:38.703333 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 04:32:38.703364 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:32:38.705798 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 04:32:38.706868 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 04:32:38.706935 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:32:38.710218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 04:32:38.710260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:32:38.714160 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 04:32:38.714204 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:32:38.716343 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:32:38.720002 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:32:38.727408 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:32:38.727528 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:32:38.732654 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:32:38.732775 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:32:38.734970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:32:38.735029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:32:38.736885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:32:38.736915 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:32:38.738945 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:32:38.738994 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:32:38.741662 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:32:38.741707 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:32:38.744579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:32:38.744625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:32:38.747604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:32:38.748959 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jul 15 04:32:38.749013 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:32:38.752019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:32:38.752062 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:32:38.755049 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 04:32:38.755088 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:32:38.758066 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:32:38.758107 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:32:38.760289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:32:38.760331 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:32:38.764219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:32:38.764304 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:32:38.766215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:32:38.768778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:32:38.789089 systemd[1]: Switching root. Jul 15 04:32:38.827634 systemd-journald[242]: Journal stopped Jul 15 04:32:39.604807 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). 
Jul 15 04:32:39.604873 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:32:39.604888 kernel: SELinux: policy capability open_perms=1 Jul 15 04:32:39.604898 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:32:39.604906 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:32:39.604918 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:32:39.604932 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:32:39.604942 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:32:39.604955 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:32:39.604964 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:32:39.604974 kernel: audit: type=1403 audit(1752553959.001:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:32:39.604985 systemd[1]: Successfully loaded SELinux policy in 57.355ms. Jul 15 04:32:39.605001 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.258ms. Jul 15 04:32:39.605014 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:32:39.605028 systemd[1]: Detected virtualization kvm. Jul 15 04:32:39.605038 systemd[1]: Detected architecture arm64. Jul 15 04:32:39.605054 systemd[1]: Detected first boot. Jul 15 04:32:39.605067 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:32:39.605077 zram_generator::config[1082]: No configuration found. Jul 15 04:32:39.605088 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:32:39.605098 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:32:39.605111 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 15 04:32:39.605122 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:32:39.605131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:32:39.605142 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:32:39.605153 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:32:39.605163 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:32:39.605173 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:32:39.605184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:32:39.605194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:32:39.605206 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 04:32:39.605217 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:32:39.605227 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:32:39.605237 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:32:39.605248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:32:39.605260 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 04:32:39.605270 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:32:39.605281 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:32:39.605294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:32:39.605304 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 15 04:32:39.605315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:32:39.605325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:32:39.605335 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:32:39.605346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:32:39.605356 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:32:39.605366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:32:39.605378 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:32:39.605388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:32:39.605399 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:32:39.605409 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:32:39.605419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:32:39.605436 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:32:39.605452 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:32:39.605491 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:32:39.605504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:32:39.605517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:32:39.605527 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:32:39.605538 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:32:39.605549 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:32:39.605560 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 15 04:32:39.605570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:32:39.605581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:32:39.605592 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:32:39.605603 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:32:39.605621 systemd[1]: Reached target machines.target - Containers. Jul 15 04:32:39.605634 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:32:39.605645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:32:39.605655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:32:39.605665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:32:39.605676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:32:39.605687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:32:39.605700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:32:39.605713 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:32:39.605723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:32:39.605734 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:32:39.605748 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:32:39.605760 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:32:39.605770 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 15 04:32:39.605786 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:32:39.605798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:32:39.605812 kernel: loop: module loaded Jul 15 04:32:39.605823 kernel: fuse: init (API version 7.41) Jul 15 04:32:39.605834 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:32:39.605845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:32:39.605854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:32:39.605864 kernel: ACPI: bus type drm_connector registered Jul 15 04:32:39.605873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:32:39.605884 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:32:39.605895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:32:39.605906 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:32:39.605916 systemd[1]: Stopped verity-setup.service. Jul 15 04:32:39.605953 systemd-journald[1155]: Collecting audit messages is disabled. Jul 15 04:32:39.605978 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:32:39.605991 systemd-journald[1155]: Journal started Jul 15 04:32:39.606019 systemd-journald[1155]: Runtime Journal (/run/log/journal/bea7e9fadc9342b9b9a1fdf643e0f5dd) is 6M, max 48.5M, 42.4M free. Jul 15 04:32:39.367104 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:32:39.391544 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 04:32:39.391943 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 15 04:32:39.607479 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:32:39.609096 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:32:39.610373 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:32:39.611662 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:32:39.612879 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:32:39.614164 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:32:39.617497 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:32:39.619014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:32:39.620576 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:32:39.620741 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:32:39.622322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:32:39.622502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:32:39.623871 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:32:39.624026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:32:39.625368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:32:39.625537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:32:39.626982 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:32:39.627131 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:32:39.628480 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:32:39.628651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:32:39.630009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 15 04:32:39.632727 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:32:39.634240 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:32:39.635803 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:32:39.646119 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:32:39.648471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:32:39.650579 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:32:39.651748 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:32:39.651791 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:32:39.653680 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:32:39.662644 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:32:39.664079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:32:39.665525 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:32:39.667847 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:32:39.669247 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:32:39.671656 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:32:39.673218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 04:32:39.675619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:32:39.681158 systemd-journald[1155]: Time spent on flushing to /var/log/journal/bea7e9fadc9342b9b9a1fdf643e0f5dd is 17.368ms for 883 entries. Jul 15 04:32:39.681158 systemd-journald[1155]: System Journal (/var/log/journal/bea7e9fadc9342b9b9a1fdf643e0f5dd) is 8M, max 195.6M, 187.6M free. Jul 15 04:32:39.709372 systemd-journald[1155]: Received client request to flush runtime journal. Jul 15 04:32:39.709418 kernel: loop0: detected capacity change from 0 to 105936 Jul 15 04:32:39.683812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:32:39.688606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:32:39.692063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:32:39.696585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:32:39.705490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:32:39.712493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:32:39.716921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:32:39.720359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:32:39.729094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:32:39.731931 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:32:39.734536 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:32:39.735485 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jul 15 04:32:39.735501 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. 
Jul 15 04:32:39.743670 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:32:39.749642 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:32:39.751632 kernel: loop1: detected capacity change from 0 to 134232 Jul 15 04:32:39.760697 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:32:39.777382 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:32:39.779896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:32:39.782497 kernel: loop2: detected capacity change from 0 to 203944 Jul 15 04:32:39.799669 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 15 04:32:39.799685 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 15 04:32:39.804269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:32:39.809471 kernel: loop3: detected capacity change from 0 to 105936 Jul 15 04:32:39.817516 kernel: loop4: detected capacity change from 0 to 134232 Jul 15 04:32:39.826509 kernel: loop5: detected capacity change from 0 to 203944 Jul 15 04:32:39.831810 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 04:32:39.832194 (sd-merge)[1225]: Merged extensions into '/usr'. Jul 15 04:32:39.839028 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:32:39.839048 systemd[1]: Reloading... Jul 15 04:32:39.907514 zram_generator::config[1254]: No configuration found. Jul 15 04:32:39.982249 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 15 04:32:39.988851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:32:40.051939 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:32:40.052234 systemd[1]: Reloading finished in 212 ms. Jul 15 04:32:40.085175 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:32:40.088379 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:32:40.104669 systemd[1]: Starting ensure-sysext.service... Jul 15 04:32:40.106401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:32:40.119985 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:32:40.123764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:32:40.126762 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:32:40.126779 systemd[1]: Reloading... Jul 15 04:32:40.127022 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:32:40.127410 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:32:40.127765 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:32:40.128020 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:32:40.128735 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:32:40.129016 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. 
Jul 15 04:32:40.129122 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 15 04:32:40.131379 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:32:40.131487 systemd-tmpfiles[1286]: Skipping /boot Jul 15 04:32:40.137206 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:32:40.137279 systemd-tmpfiles[1286]: Skipping /boot Jul 15 04:32:40.167649 systemd-udevd[1289]: Using default interface naming scheme 'v255'. Jul 15 04:32:40.171511 zram_generator::config[1314]: No configuration found. Jul 15 04:32:40.281592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:32:40.361335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:32:40.362967 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:32:40.363067 systemd[1]: Reloading finished in 236 ms. Jul 15 04:32:40.380792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:32:40.382398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:32:40.414748 systemd[1]: Finished ensure-sysext.service. Jul 15 04:32:40.417397 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:32:40.419620 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 04:32:40.420996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:32:40.438199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:32:40.442599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 15 04:32:40.444623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:32:40.447988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:32:40.449210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:32:40.451619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:32:40.452790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:32:40.454630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:32:40.458223 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:32:40.464587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:32:40.467683 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 04:32:40.469809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:32:40.474035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:32:40.476047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:32:40.479708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:32:40.481251 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:32:40.481389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:32:40.482907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:32:40.483086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 15 04:32:40.486970 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:32:40.487153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:32:40.488865 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:32:40.490569 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:32:40.498256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:32:40.498381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:32:40.499743 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:32:40.502669 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:32:40.512521 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:32:40.513068 augenrules[1441]: No rules Jul 15 04:32:40.514841 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:32:40.515057 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:32:40.516729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:32:40.518560 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:32:40.523383 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:32:40.536783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:32:40.544084 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 15 04:32:40.609710 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 04:32:40.613203 systemd-resolved[1413]: Positive Trust Anchors: Jul 15 04:32:40.613223 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:32:40.613254 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:32:40.613540 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:32:40.619021 systemd-resolved[1413]: Defaulting to hostname 'linux'. Jul 15 04:32:40.619931 systemd-networkd[1412]: lo: Link UP Jul 15 04:32:40.619939 systemd-networkd[1412]: lo: Gained carrier Jul 15 04:32:40.620137 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:32:40.620782 systemd-networkd[1412]: Enumeration completed Jul 15 04:32:40.621297 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:32:40.621308 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:32:40.621396 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 15 04:32:40.621874 systemd-networkd[1412]: eth0: Link UP Jul 15 04:32:40.622048 systemd-networkd[1412]: eth0: Gained carrier Jul 15 04:32:40.622067 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:32:40.622719 systemd[1]: Reached target network.target - Network. Jul 15 04:32:40.623655 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:32:40.624869 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:32:40.626319 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:32:40.627747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:32:40.629249 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:32:40.630510 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:32:40.631812 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:32:40.633073 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:32:40.633107 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:32:40.634135 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:32:40.635959 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:32:40.638644 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:32:40.641410 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:32:40.643057 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jul 15 04:32:40.644530 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 04:32:40.646526 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 04:32:40.647364 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 15 04:32:40.648128 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 04:32:40.648178 systemd-timesyncd[1415]: Initial clock synchronization to Tue 2025-07-15 04:32:40.589145 UTC.
Jul 15 04:32:40.648471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 04:32:40.650053 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 04:32:40.652564 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 04:32:40.654770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 04:32:40.656569 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 04:32:40.657844 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:32:40.658912 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:32:40.659939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 04:32:40.659967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 04:32:40.660877 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 04:32:40.662919 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 04:32:40.665410 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 04:32:40.667508 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 04:32:40.669518 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 04:32:40.670637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 04:32:40.673622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 04:32:40.675571 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 04:32:40.678170 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 04:32:40.680725 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 04:32:40.684638 jq[1469]: false
Jul 15 04:32:40.685679 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 04:32:40.687472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 04:32:40.687936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 04:32:40.689700 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 04:32:40.691572 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 04:32:40.693715 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 04:32:40.696627 extend-filesystems[1470]: Found /dev/vda6
Jul 15 04:32:40.698983 extend-filesystems[1470]: Found /dev/vda9
Jul 15 04:32:40.700481 extend-filesystems[1470]: Checking size of /dev/vda9
Jul 15 04:32:40.707195 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 04:32:40.709630 jq[1485]: true
Jul 15 04:32:40.709367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 04:32:40.710600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 04:32:40.710934 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 04:32:40.711119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 04:32:40.714162 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 04:32:40.714322 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 04:32:40.716993 extend-filesystems[1470]: Resized partition /dev/vda9
Jul 15 04:32:40.719583 extend-filesystems[1498]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 04:32:40.729538 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 04:32:40.735501 jq[1497]: true
Jul 15 04:32:40.737511 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 04:32:40.767488 tar[1495]: linux-arm64/helm
Jul 15 04:32:40.782807 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 04:32:40.788171 dbus-daemon[1467]: [system] SELinux support is enabled
Jul 15 04:32:40.797738 update_engine[1481]: I20250715 04:32:40.789113 1481 main.cc:92] Flatcar Update Engine starting
Jul 15 04:32:40.788596 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 04:32:40.793051 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 04:32:40.793076 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 04:32:40.795654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 04:32:40.795673 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 04:32:40.798768 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 04:32:40.798768 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 04:32:40.798768 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 04:32:40.804769 extend-filesystems[1470]: Resized filesystem in /dev/vda9
Jul 15 04:32:40.801745 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 04:32:40.801935 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 04:32:40.811983 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 04:32:40.814728 update_engine[1481]: I20250715 04:32:40.814682 1481 update_check_scheduler.cc:74] Next update check in 3m38s
Jul 15 04:32:40.815083 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 15 04:32:40.816047 systemd-logind[1478]: New seat seat0.
Jul 15 04:32:40.816676 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 04:32:40.819274 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 04:32:40.832089 bash[1528]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 04:32:40.841882 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 04:32:40.844772 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 04:32:40.881643 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 04:32:40.957854 containerd[1499]: time="2025-07-15T04:32:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 04:32:40.959780 containerd[1499]: time="2025-07-15T04:32:40.959745680Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 15 04:32:40.970129 containerd[1499]: time="2025-07-15T04:32:40.970085640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.24µs"
Jul 15 04:32:40.970129 containerd[1499]: time="2025-07-15T04:32:40.970120960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 04:32:40.970129 containerd[1499]: time="2025-07-15T04:32:40.970137760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 04:32:40.970309 containerd[1499]: time="2025-07-15T04:32:40.970286680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 04:32:40.970309 containerd[1499]: time="2025-07-15T04:32:40.970306720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 04:32:40.970396 containerd[1499]: time="2025-07-15T04:32:40.970332720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970396 containerd[1499]: time="2025-07-15T04:32:40.970382640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970396 containerd[1499]: time="2025-07-15T04:32:40.970392960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970703 containerd[1499]: time="2025-07-15T04:32:40.970675320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970703 containerd[1499]: time="2025-07-15T04:32:40.970700800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970754 containerd[1499]: time="2025-07-15T04:32:40.970712600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970754 containerd[1499]: time="2025-07-15T04:32:40.970720720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970805 containerd[1499]: time="2025-07-15T04:32:40.970796680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 04:32:40.970987 containerd[1499]: time="2025-07-15T04:32:40.970965440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 04:32:40.971016 containerd[1499]: time="2025-07-15T04:32:40.970997320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 04:32:40.971016 containerd[1499]: time="2025-07-15T04:32:40.971009160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 04:32:40.971079 containerd[1499]: time="2025-07-15T04:32:40.971037880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 04:32:40.971244 containerd[1499]: time="2025-07-15T04:32:40.971224800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 04:32:40.971303 containerd[1499]: time="2025-07-15T04:32:40.971287800Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 04:32:40.974556 containerd[1499]: time="2025-07-15T04:32:40.974521240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 04:32:40.974624 containerd[1499]: time="2025-07-15T04:32:40.974587320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 04:32:40.974624 containerd[1499]: time="2025-07-15T04:32:40.974607080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 04:32:40.974624 containerd[1499]: time="2025-07-15T04:32:40.974618840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974671400Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974685880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974702200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974716560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974728120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974738840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974747680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974762440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974893400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974912880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974926720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974938000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 04:32:40.974945 containerd[1499]: time="2025-07-15T04:32:40.974948360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.974958200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.974969040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.974978360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.974988760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.975000000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 04:32:40.975160 containerd[1499]: time="2025-07-15T04:32:40.975013800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 04:32:40.975568 containerd[1499]: time="2025-07-15T04:32:40.975191960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 04:32:40.975568 containerd[1499]: time="2025-07-15T04:32:40.975206520Z" level=info msg="Start snapshots syncer"
Jul 15 04:32:40.975568 containerd[1499]: time="2025-07-15T04:32:40.975241640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 04:32:40.975638 containerd[1499]: time="2025-07-15T04:32:40.975477960Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 04:32:40.975638 containerd[1499]: time="2025-07-15T04:32:40.975529000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 04:32:40.976216 containerd[1499]: time="2025-07-15T04:32:40.976187200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 04:32:40.976374 containerd[1499]: time="2025-07-15T04:32:40.976346040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 04:32:40.976405 containerd[1499]: time="2025-07-15T04:32:40.976377280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 04:32:40.976405 containerd[1499]: time="2025-07-15T04:32:40.976389280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 04:32:40.976405 containerd[1499]: time="2025-07-15T04:32:40.976403840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976417120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976437480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976449840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976501960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976520280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 04:32:40.976474 containerd[1499]: time="2025-07-15T04:32:40.976531440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 04:32:40.977223 containerd[1499]: time="2025-07-15T04:32:40.977198000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 04:32:40.977315 containerd[1499]: time="2025-07-15T04:32:40.977227000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 04:32:40.977315 containerd[1499]: time="2025-07-15T04:32:40.977308440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 04:32:40.977409 containerd[1499]: time="2025-07-15T04:32:40.977321320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 04:32:40.977409 containerd[1499]: time="2025-07-15T04:32:40.977329720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 04:32:40.977409 containerd[1499]: time="2025-07-15T04:32:40.977339400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 04:32:40.977409 containerd[1499]: time="2025-07-15T04:32:40.977350680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 04:32:40.977557 containerd[1499]: time="2025-07-15T04:32:40.977434080Z" level=info msg="runtime interface created"
Jul 15 04:32:40.977557 containerd[1499]: time="2025-07-15T04:32:40.977441480Z" level=info msg="created NRI interface"
Jul 15 04:32:40.977557 containerd[1499]: time="2025-07-15T04:32:40.977450400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 04:32:40.977557 containerd[1499]: time="2025-07-15T04:32:40.977479520Z" level=info msg="Connect containerd service"
Jul 15 04:32:40.977557 containerd[1499]: time="2025-07-15T04:32:40.977521240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 04:32:40.978672 containerd[1499]: time="2025-07-15T04:32:40.978640480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 04:32:41.066100 tar[1495]: linux-arm64/LICENSE
Jul 15 04:32:41.066100 tar[1495]: linux-arm64/README.md
Jul 15 04:32:41.084179 containerd[1499]: time="2025-07-15T04:32:41.084084135Z" level=info msg="Start subscribing containerd event"
Jul 15 04:32:41.084320 containerd[1499]: time="2025-07-15T04:32:41.084306753Z" level=info msg="Start recovering state"
Jul 15 04:32:41.084427 containerd[1499]: time="2025-07-15T04:32:41.084395305Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 04:32:41.084504 containerd[1499]: time="2025-07-15T04:32:41.084446795Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 04:32:41.085549 containerd[1499]: time="2025-07-15T04:32:41.085522061Z" level=info msg="Start event monitor"
Jul 15 04:32:41.085549 containerd[1499]: time="2025-07-15T04:32:41.085553425Z" level=info msg="Start cni network conf syncer for default"
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085567334Z" level=info msg="Start streaming server"
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085576899Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085585427Z" level=info msg="runtime interface starting up..."
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085591525Z" level=info msg="starting plugins..."
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085606310Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 04:32:41.085779 containerd[1499]: time="2025-07-15T04:32:41.085723278Z" level=info msg="containerd successfully booted in 0.128235s"
Jul 15 04:32:41.086073 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 04:32:41.089180 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 04:32:41.848600 systemd-networkd[1412]: eth0: Gained IPv6LL
Jul 15 04:32:41.851410 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 04:32:41.853331 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 04:32:41.856131 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 15 04:32:41.858705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:32:41.870701 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 04:32:41.897528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 04:32:41.899251 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 15 04:32:41.899431 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 15 04:32:41.901327 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 04:32:41.941279 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 04:32:41.961839 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 04:32:41.964932 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 04:32:41.984232 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 04:32:41.984489 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 04:32:41.987597 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 04:32:42.013182 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 04:32:42.015960 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 04:32:42.018141 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 15 04:32:42.019751 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 04:32:42.408986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:32:42.410792 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 04:32:42.412330 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:32:42.412561 systemd[1]: Startup finished in 2.054s (kernel) + 5.374s (initrd) + 3.470s (userspace) = 10.899s.
Jul 15 04:32:42.839304 kubelet[1602]: E0715 04:32:42.839178 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:32:42.841558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:32:42.841691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:32:42.842006 systemd[1]: kubelet.service: Consumed 841ms CPU time, 256.5M memory peak.
Jul 15 04:32:46.983036 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 04:32:46.984199 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:48026.service - OpenSSH per-connection server daemon (10.0.0.1:48026).
Jul 15 04:32:47.065485 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 48026 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.067388 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:47.073398 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 04:32:47.074398 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 04:32:47.080023 systemd-logind[1478]: New session 1 of user core.
Jul 15 04:32:47.106557 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 04:32:47.109329 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 04:32:47.128776 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 04:32:47.131040 systemd-logind[1478]: New session c1 of user core.
Jul 15 04:32:47.246317 systemd[1621]: Queued start job for default target default.target.
Jul 15 04:32:47.264567 systemd[1621]: Created slice app.slice - User Application Slice.
Jul 15 04:32:47.264598 systemd[1621]: Reached target paths.target - Paths.
Jul 15 04:32:47.264635 systemd[1621]: Reached target timers.target - Timers.
Jul 15 04:32:47.265922 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 04:32:47.275842 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 04:32:47.275913 systemd[1621]: Reached target sockets.target - Sockets.
Jul 15 04:32:47.275954 systemd[1621]: Reached target basic.target - Basic System.
Jul 15 04:32:47.275985 systemd[1621]: Reached target default.target - Main User Target.
Jul 15 04:32:47.276010 systemd[1621]: Startup finished in 138ms.
Jul 15 04:32:47.276230 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 04:32:47.277669 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 04:32:47.341108 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:48042.service - OpenSSH per-connection server daemon (10.0.0.1:48042).
Jul 15 04:32:47.407427 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 48042 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.408798 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:47.413538 systemd-logind[1478]: New session 2 of user core.
Jul 15 04:32:47.422660 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 04:32:47.473503 sshd[1635]: Connection closed by 10.0.0.1 port 48042
Jul 15 04:32:47.473639 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Jul 15 04:32:47.488025 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:48042.service: Deactivated successfully.
Jul 15 04:32:47.490971 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 04:32:47.492157 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit.
Jul 15 04:32:47.496059 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:48044.service - OpenSSH per-connection server daemon (10.0.0.1:48044).
Jul 15 04:32:47.497221 systemd-logind[1478]: Removed session 2.
Jul 15 04:32:47.554213 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 48044 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.555584 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:47.559413 systemd-logind[1478]: New session 3 of user core.
Jul 15 04:32:47.569630 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 04:32:47.618288 sshd[1644]: Connection closed by 10.0.0.1 port 48044
Jul 15 04:32:47.618952 sshd-session[1641]: pam_unix(sshd:session): session closed for user core
Jul 15 04:32:47.628586 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:48044.service: Deactivated successfully.
Jul 15 04:32:47.631912 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 04:32:47.632553 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit.
Jul 15 04:32:47.635015 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:48054.service - OpenSSH per-connection server daemon (10.0.0.1:48054).
Jul 15 04:32:47.635434 systemd-logind[1478]: Removed session 3.
Jul 15 04:32:47.688341 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 48054 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.689706 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:47.694275 systemd-logind[1478]: New session 4 of user core.
Jul 15 04:32:47.700634 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 04:32:47.753682 sshd[1653]: Connection closed by 10.0.0.1 port 48054
Jul 15 04:32:47.754097 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jul 15 04:32:47.764886 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:48054.service: Deactivated successfully.
Jul 15 04:32:47.767916 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 04:32:47.768777 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit.
Jul 15 04:32:47.770847 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056).
Jul 15 04:32:47.771296 systemd-logind[1478]: Removed session 4.
Jul 15 04:32:47.823618 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.824887 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:47.829542 systemd-logind[1478]: New session 5 of user core.
Jul 15 04:32:47.836645 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 04:32:47.903020 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 04:32:47.903326 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:32:47.917498 sudo[1663]: pam_unix(sudo:session): session closed for user root
Jul 15 04:32:47.919120 sshd[1662]: Connection closed by 10.0.0.1 port 48056
Jul 15 04:32:47.919711 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Jul 15 04:32:47.933658 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:48056.service: Deactivated successfully.
Jul 15 04:32:47.935901 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 04:32:47.937124 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit.
Jul 15 04:32:47.939094 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:48070.service - OpenSSH per-connection server daemon (10.0.0.1:48070).
Jul 15 04:32:47.940098 systemd-logind[1478]: Removed session 5.
Jul 15 04:32:47.996967 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 48070 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:47.998372 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:48.002137 systemd-logind[1478]: New session 6 of user core.
Jul 15 04:32:48.021660 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 04:32:48.074292 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 04:32:48.074934 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:32:48.152075 sudo[1674]: pam_unix(sudo:session): session closed for user root
Jul 15 04:32:48.157028 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 04:32:48.157284 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:32:48.165560 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:32:48.204280 augenrules[1696]: No rules
Jul 15 04:32:48.205374 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:32:48.205664 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:32:48.206583 sudo[1673]: pam_unix(sudo:session): session closed for user root
Jul 15 04:32:48.208245 sshd[1672]: Connection closed by 10.0.0.1 port 48070
Jul 15 04:32:48.208123 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Jul 15 04:32:48.218596 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:48070.service: Deactivated successfully.
Jul 15 04:32:48.220903 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 04:32:48.223006 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit.
Jul 15 04:32:48.224863 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:48084.service - OpenSSH per-connection server daemon (10.0.0.1:48084).
Jul 15 04:32:48.225666 systemd-logind[1478]: Removed session 6.
Jul 15 04:32:48.278276 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo
Jul 15 04:32:48.279544 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:32:48.283521 systemd-logind[1478]: New session 7 of user core.
Jul 15 04:32:48.290631 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 04:32:48.340253 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 04:32:48.340904 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:32:48.675194 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 04:32:48.693874 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 04:32:48.941590 dockerd[1729]: time="2025-07-15T04:32:48.940424820Z" level=info msg="Starting up"
Jul 15 04:32:48.943612 dockerd[1729]: time="2025-07-15T04:32:48.943573503Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 04:32:48.954327 dockerd[1729]: time="2025-07-15T04:32:48.954281069Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 04:32:48.969607 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1596058822-merged.mount: Deactivated successfully.
Jul 15 04:32:48.986735 dockerd[1729]: time="2025-07-15T04:32:48.986685944Z" level=info msg="Loading containers: start."
Jul 15 04:32:48.996488 kernel: Initializing XFRM netlink socket
Jul 15 04:32:49.199799 systemd-networkd[1412]: docker0: Link UP
Jul 15 04:32:49.205814 dockerd[1729]: time="2025-07-15T04:32:49.205769005Z" level=info msg="Loading containers: done."
Jul 15 04:32:49.221378 dockerd[1729]: time="2025-07-15T04:32:49.221327504Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 04:32:49.221539 dockerd[1729]: time="2025-07-15T04:32:49.221426646Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 04:32:49.221572 dockerd[1729]: time="2025-07-15T04:32:49.221555363Z" level=info msg="Initializing buildkit"
Jul 15 04:32:49.246261 dockerd[1729]: time="2025-07-15T04:32:49.246215939Z" level=info msg="Completed buildkit initialization"
Jul 15 04:32:49.251204 dockerd[1729]: time="2025-07-15T04:32:49.251150473Z" level=info msg="Daemon has completed initialization"
Jul 15 04:32:49.251350 dockerd[1729]: time="2025-07-15T04:32:49.251232093Z" level=info msg="API listen on /run/docker.sock"
Jul 15 04:32:49.251499 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 04:32:50.089357 containerd[1499]: time="2025-07-15T04:32:50.089304615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 04:32:50.671140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495505108.mount: Deactivated successfully.
Jul 15 04:32:51.511835 containerd[1499]: time="2025-07-15T04:32:51.511781390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:51.512314 containerd[1499]: time="2025-07-15T04:32:51.512272320Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 15 04:32:51.513122 containerd[1499]: time="2025-07-15T04:32:51.513080956Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:51.515221 containerd[1499]: time="2025-07-15T04:32:51.515184526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:51.516093 containerd[1499]: time="2025-07-15T04:32:51.516066858Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.426718853s"
Jul 15 04:32:51.516146 containerd[1499]: time="2025-07-15T04:32:51.516097319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 15 04:32:51.519164 containerd[1499]: time="2025-07-15T04:32:51.519136639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 04:32:52.443514 containerd[1499]: time="2025-07-15T04:32:52.443451705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:52.444380 containerd[1499]: time="2025-07-15T04:32:52.444347281Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 15 04:32:52.445770 containerd[1499]: time="2025-07-15T04:32:52.445434150Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:52.448193 containerd[1499]: time="2025-07-15T04:32:52.448145113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:52.449925 containerd[1499]: time="2025-07-15T04:32:52.449692706Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 930.431228ms"
Jul 15 04:32:52.449925 containerd[1499]: time="2025-07-15T04:32:52.449730557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 15 04:32:52.450218 containerd[1499]: time="2025-07-15T04:32:52.450183815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 04:32:53.092018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 04:32:53.093486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:32:53.263894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:32:53.268818 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:32:53.311906 kubelet[2015]: E0715 04:32:53.311857 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:32:53.315059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:32:53.315190 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:32:53.315494 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.1M memory peak.
Jul 15 04:32:53.578045 containerd[1499]: time="2025-07-15T04:32:53.577910630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:53.578718 containerd[1499]: time="2025-07-15T04:32:53.578550023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 15 04:32:53.579278 containerd[1499]: time="2025-07-15T04:32:53.579251151Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:53.581500 containerd[1499]: time="2025-07-15T04:32:53.581471017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:53.582541 containerd[1499]: time="2025-07-15T04:32:53.582510051Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.132293294s"
Jul 15 04:32:53.582576 containerd[1499]: time="2025-07-15T04:32:53.582544512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 15 04:32:53.583186 containerd[1499]: time="2025-07-15T04:32:53.583001016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 04:32:54.461846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405845707.mount: Deactivated successfully.
Jul 15 04:32:54.679229 containerd[1499]: time="2025-07-15T04:32:54.679176578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:54.679763 containerd[1499]: time="2025-07-15T04:32:54.679723786Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 15 04:32:54.680306 containerd[1499]: time="2025-07-15T04:32:54.680257575Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:54.682050 containerd[1499]: time="2025-07-15T04:32:54.682005669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:54.682703 containerd[1499]: time="2025-07-15T04:32:54.682447485Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.099410649s"
Jul 15 04:32:54.682703 containerd[1499]: time="2025-07-15T04:32:54.682495408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 15 04:32:54.683291 containerd[1499]: time="2025-07-15T04:32:54.683103919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 04:32:55.191990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208913101.mount: Deactivated successfully.
Jul 15 04:32:55.991718 containerd[1499]: time="2025-07-15T04:32:55.991655955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:55.992896 containerd[1499]: time="2025-07-15T04:32:55.992597828Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 15 04:32:55.993773 containerd[1499]: time="2025-07-15T04:32:55.993725343Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:55.996496 containerd[1499]: time="2025-07-15T04:32:55.996441446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:55.997591 containerd[1499]: time="2025-07-15T04:32:55.997558137Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.314424545s"
Jul 15 04:32:55.997635 containerd[1499]: time="2025-07-15T04:32:55.997598916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 04:32:55.998087 containerd[1499]: time="2025-07-15T04:32:55.998055674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 04:32:56.497354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219185875.mount: Deactivated successfully.
Jul 15 04:32:56.502153 containerd[1499]: time="2025-07-15T04:32:56.501876495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:32:56.502932 containerd[1499]: time="2025-07-15T04:32:56.502839426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 15 04:32:56.503937 containerd[1499]: time="2025-07-15T04:32:56.503899781Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:32:56.506424 containerd[1499]: time="2025-07-15T04:32:56.506383543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:32:56.506854 containerd[1499]: time="2025-07-15T04:32:56.506834272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 508.747204ms"
Jul 15 04:32:56.506888 containerd[1499]: time="2025-07-15T04:32:56.506860355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 04:32:56.507572 containerd[1499]: time="2025-07-15T04:32:56.507553305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 04:32:57.022353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994955414.mount: Deactivated successfully.
Jul 15 04:32:59.001247 containerd[1499]: time="2025-07-15T04:32:59.000962077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:59.001887 containerd[1499]: time="2025-07-15T04:32:59.001858323Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 15 04:32:59.002897 containerd[1499]: time="2025-07-15T04:32:59.002863443Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:59.006453 containerd[1499]: time="2025-07-15T04:32:59.006400762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:32:59.008113 containerd[1499]: time="2025-07-15T04:32:59.008060288Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.500480059s"
Jul 15 04:32:59.008113 containerd[1499]: time="2025-07-15T04:32:59.008095207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 15 04:33:03.091714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:33:03.091863 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.1M memory peak.
Jul 15 04:33:03.093655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:33:03.113503 systemd[1]: Reload requested from client PID 2175 ('systemctl') (unit session-7.scope)...
Jul 15 04:33:03.113517 systemd[1]: Reloading...
Jul 15 04:33:03.172501 zram_generator::config[2217]: No configuration found.
Jul 15 04:33:03.273657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:33:03.359251 systemd[1]: Reloading finished in 245 ms.
Jul 15 04:33:03.431008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 04:33:03.431100 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 04:33:03.431376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:33:03.431427 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95M memory peak.
Jul 15 04:33:03.433099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:33:03.560008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:33:03.564074 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 04:33:03.600958 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:33:03.600958 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 04:33:03.600958 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:33:03.601319 kubelet[2262]: I0715 04:33:03.601010 2262 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 04:33:04.357696 kubelet[2262]: I0715 04:33:04.357645 2262 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 04:33:04.357696 kubelet[2262]: I0715 04:33:04.357685 2262 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 04:33:04.357973 kubelet[2262]: I0715 04:33:04.357944 2262 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 04:33:04.384303 kubelet[2262]: E0715 04:33:04.384265 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:33:04.386405 kubelet[2262]: I0715 04:33:04.386285 2262 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 04:33:04.394739 kubelet[2262]: I0715 04:33:04.394699 2262 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 04:33:04.398275 kubelet[2262]: I0715 04:33:04.398242 2262 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 04:33:04.398883 kubelet[2262]: I0715 04:33:04.398858 2262 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 04:33:04.399027 kubelet[2262]: I0715 04:33:04.398994 2262 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 04:33:04.399181 kubelet[2262]: I0715 04:33:04.399021 2262 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 04:33:04.399392 kubelet[2262]: I0715 04:33:04.399302 2262 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 04:33:04.399392 kubelet[2262]: I0715 04:33:04.399312 2262 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 04:33:04.399587 kubelet[2262]: I0715 04:33:04.399559 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:33:04.401487 kubelet[2262]: I0715 04:33:04.401449 2262 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 04:33:04.401526 kubelet[2262]: I0715 04:33:04.401489 2262 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 04:33:04.401526 kubelet[2262]: I0715 04:33:04.401508 2262 kubelet.go:314] "Adding apiserver pod source"
Jul 15 04:33:04.401526 kubelet[2262]: I0715 04:33:04.401522 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 04:33:04.404487 kubelet[2262]: W0715 04:33:04.403747 2262 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Jul 15 04:33:04.404487 kubelet[2262]: E0715 04:33:04.403807 2262 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:33:04.404675 kubelet[2262]: W0715 04:33:04.404601 2262 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Jul 15 04:33:04.404675 kubelet[2262]: E0715 04:33:04.404647 2262 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:33:04.409417 kubelet[2262]: I0715 04:33:04.409344 2262 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 04:33:04.410222 kubelet[2262]: I0715 04:33:04.410186 2262 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 04:33:04.410276 kubelet[2262]: W0715 04:33:04.410234 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 04:33:04.411214 kubelet[2262]: I0715 04:33:04.411183 2262 server.go:1274] "Started kubelet"
Jul 15 04:33:04.411873 kubelet[2262]: I0715 04:33:04.411441 2262 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 04:33:04.411873 kubelet[2262]: I0715 04:33:04.411592 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 04:33:04.411873 kubelet[2262]: I0715 04:33:04.411851 2262 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 04:33:04.413027 kubelet[2262]: I0715 04:33:04.412987 2262 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 04:33:04.413615 kubelet[2262]: I0715 04:33:04.413591 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 04:33:04.414417 kubelet[2262]: I0715 04:33:04.414382 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 04:33:04.416477 kubelet[2262]: I0715 04:33:04.416430 2262 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 04:33:04.416598 kubelet[2262]: I0715 04:33:04.416578 2262 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 04:33:04.416677 kubelet[2262]: I0715 04:33:04.416660 2262 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 04:33:04.417181 kubelet[2262]: W0715 04:33:04.417127 2262 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Jul 15 04:33:04.417277 kubelet[2262]: E0715 04:33:04.417197 2262 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:33:04.417415 kubelet[2262]: I0715 04:33:04.417393 2262 factory.go:221] Registration of the systemd container factory successfully
Jul 15 04:33:04.417511 kubelet[2262]: I0715 04:33:04.417492 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 04:33:04.419047 kubelet[2262]: E0715 04:33:04.418795 2262 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 04:33:04.419047 kubelet[2262]: E0715 04:33:04.418877 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms"
Jul 15 04:33:04.419131 kubelet[2262]: E0715 04:33:04.419078 2262 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 04:33:04.419228 kubelet[2262]: I0715 04:33:04.419211 2262 factory.go:221] Registration of the containerd container factory successfully
Jul 15 04:33:04.424958 kubelet[2262]: E0715 04:33:04.422938 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185252903a85d54a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 04:33:04.41116193 +0000 UTC m=+0.844193910,LastTimestamp:2025-07-15 04:33:04.41116193 +0000 UTC m=+0.844193910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 04:33:04.436794 kubelet[2262]: I0715 04:33:04.435701 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 04:33:04.436962 kubelet[2262]: I0715 04:33:04.436946 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 04:33:04.437023 kubelet[2262]: I0715 04:33:04.437013 2262 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 04:33:04.437074 kubelet[2262]: I0715 04:33:04.437066 2262 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 04:33:04.437169 kubelet[2262]: E0715 04:33:04.437146 2262 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 04:33:04.438004 kubelet[2262]: W0715 04:33:04.437958 2262 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Jul 15 04:33:04.438117 kubelet[2262]: E0715 04:33:04.438096 2262 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:33:04.441318 kubelet[2262]: I0715 04:33:04.441289 2262 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 04:33:04.441318 kubelet[2262]: I0715 04:33:04.441315 2262 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 04:33:04.441398 kubelet[2262]: I0715 04:33:04.441333 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:33:04.470422 kubelet[2262]: I0715 04:33:04.470400 2262 policy_none.go:49] "None policy: Start"
Jul 15 04:33:04.471037 kubelet[2262]: I0715 04:33:04.471018 2262 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 04:33:04.471086 kubelet[2262]: I0715 04:33:04.471046 2262 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 04:33:04.476219 systemd[1]: Created slice kubepods.slice - libcontainer container
kubepods.slice. Jul 15 04:33:04.491922 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:33:04.494641 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 04:33:04.510187 kubelet[2262]: I0715 04:33:04.510149 2262 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:33:04.510535 kubelet[2262]: I0715 04:33:04.510342 2262 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:33:04.510535 kubelet[2262]: I0715 04:33:04.510358 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:33:04.510609 kubelet[2262]: I0715 04:33:04.510594 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:33:04.512032 kubelet[2262]: E0715 04:33:04.512007 2262 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 04:33:04.544119 systemd[1]: Created slice kubepods-burstable-podb7ed2c7911a007b75d574ff9c074b38d.slice - libcontainer container kubepods-burstable-podb7ed2c7911a007b75d574ff9c074b38d.slice. Jul 15 04:33:04.571063 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 15 04:33:04.574655 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 15 04:33:04.611344 kubelet[2262]: I0715 04:33:04.611258 2262 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:33:04.612367 kubelet[2262]: E0715 04:33:04.612341 2262 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jul 15 04:33:04.620013 kubelet[2262]: E0715 04:33:04.619977 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Jul 15 04:33:04.718445 kubelet[2262]: I0715 04:33:04.718296 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:04.718445 kubelet[2262]: I0715 04:33:04.718332 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:04.718445 kubelet[2262]: I0715 04:33:04.718350 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:04.718445 kubelet[2262]: I0715 04:33:04.718373 2262 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:04.718445 kubelet[2262]: I0715 04:33:04.718391 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:04.718640 kubelet[2262]: I0715 04:33:04.718405 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:04.718640 kubelet[2262]: I0715 04:33:04.718421 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:04.718640 kubelet[2262]: I0715 04:33:04.718481 2262 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:04.718640 kubelet[2262]: I0715 04:33:04.718531 2262 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:33:04.814480 kubelet[2262]: I0715 04:33:04.814430 2262 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:33:04.814819 kubelet[2262]: E0715 04:33:04.814783 2262 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jul 15 04:33:04.869932 kubelet[2262]: E0715 04:33:04.869823 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:04.870638 containerd[1499]: time="2025-07-15T04:33:04.870568782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7ed2c7911a007b75d574ff9c074b38d,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:04.873720 kubelet[2262]: E0715 04:33:04.873688 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:04.874138 containerd[1499]: time="2025-07-15T04:33:04.874018780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:04.876336 kubelet[2262]: E0715 04:33:04.876308 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:04.876829 containerd[1499]: time="2025-07-15T04:33:04.876803974Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:04.892798 containerd[1499]: time="2025-07-15T04:33:04.892761646Z" level=info msg="connecting to shim eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f" address="unix:///run/containerd/s/331168aec639bf3ff5448db0192f664a92292691ded9efbbafa6da9fd4df5ae5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:04.902964 containerd[1499]: time="2025-07-15T04:33:04.902270384Z" level=info msg="connecting to shim 41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443" address="unix:///run/containerd/s/f8f4471112fb2685091f66c5e4fd913c4116a855a6d577051eab01e62438dbd9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:04.913698 containerd[1499]: time="2025-07-15T04:33:04.913658752Z" level=info msg="connecting to shim fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8" address="unix:///run/containerd/s/23642aa67c9bd48692e988d1e3904c4c854a96bba1a768d531898c32d6bce9ad" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:04.923648 systemd[1]: Started cri-containerd-eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f.scope - libcontainer container eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f. Jul 15 04:33:04.926248 systemd[1]: Started cri-containerd-41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443.scope - libcontainer container 41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443. Jul 15 04:33:04.940063 systemd[1]: Started cri-containerd-fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8.scope - libcontainer container fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8. 
Jul 15 04:33:04.965682 containerd[1499]: time="2025-07-15T04:33:04.965035161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7ed2c7911a007b75d574ff9c074b38d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f\"" Jul 15 04:33:04.966492 kubelet[2262]: E0715 04:33:04.966432 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:04.971516 containerd[1499]: time="2025-07-15T04:33:04.969592475Z" level=info msg="CreateContainer within sandbox \"eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:33:04.979645 containerd[1499]: time="2025-07-15T04:33:04.979561668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443\"" Jul 15 04:33:04.979943 containerd[1499]: time="2025-07-15T04:33:04.979913454Z" level=info msg="Container 402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:04.980389 kubelet[2262]: E0715 04:33:04.980368 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:04.982422 containerd[1499]: time="2025-07-15T04:33:04.982083602Z" level=info msg="CreateContainer within sandbox \"41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:33:04.990075 containerd[1499]: time="2025-07-15T04:33:04.990022571Z" level=info msg="CreateContainer within 
sandbox \"eb0b321f71c79e58c27ae81fef9880e8e0dadcacb43d94ac09a0d5d02cbe178f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972\"" Jul 15 04:33:04.990922 containerd[1499]: time="2025-07-15T04:33:04.990888048Z" level=info msg="StartContainer for \"402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972\"" Jul 15 04:33:04.991422 containerd[1499]: time="2025-07-15T04:33:04.991380397Z" level=info msg="Container e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:04.992138 containerd[1499]: time="2025-07-15T04:33:04.992103353Z" level=info msg="connecting to shim 402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972" address="unix:///run/containerd/s/331168aec639bf3ff5448db0192f664a92292691ded9efbbafa6da9fd4df5ae5" protocol=ttrpc version=3 Jul 15 04:33:05.003699 containerd[1499]: time="2025-07-15T04:33:05.003648744Z" level=info msg="CreateContainer within sandbox \"41667a65401d86c17c1339009886b6b3ff2cdfb62b309f7831e6a5dcdb8e7443\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845\"" Jul 15 04:33:05.003897 containerd[1499]: time="2025-07-15T04:33:05.003760936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8\"" Jul 15 04:33:05.004884 containerd[1499]: time="2025-07-15T04:33:05.004849644Z" level=info msg="StartContainer for \"e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845\"" Jul 15 04:33:05.005393 kubelet[2262]: E0715 04:33:05.005366 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:05.006763 containerd[1499]: time="2025-07-15T04:33:05.006570137Z" level=info msg="connecting to shim e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845" address="unix:///run/containerd/s/f8f4471112fb2685091f66c5e4fd913c4116a855a6d577051eab01e62438dbd9" protocol=ttrpc version=3 Jul 15 04:33:05.007494 containerd[1499]: time="2025-07-15T04:33:05.007433181Z" level=info msg="CreateContainer within sandbox \"fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:33:05.015559 containerd[1499]: time="2025-07-15T04:33:05.015512775Z" level=info msg="Container 336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:05.016618 systemd[1]: Started cri-containerd-402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972.scope - libcontainer container 402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972. Jul 15 04:33:05.020903 systemd[1]: Started cri-containerd-e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845.scope - libcontainer container e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845. 
Jul 15 04:33:05.021236 kubelet[2262]: E0715 04:33:05.021188 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Jul 15 04:33:05.023573 containerd[1499]: time="2025-07-15T04:33:05.023498642Z" level=info msg="CreateContainer within sandbox \"fead65e525ba1e6b93681abcb96db8d4ae64a41f75e067c16bbb3ba82a8622c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30\"" Jul 15 04:33:05.024257 containerd[1499]: time="2025-07-15T04:33:05.024229470Z" level=info msg="StartContainer for \"336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30\"" Jul 15 04:33:05.025795 containerd[1499]: time="2025-07-15T04:33:05.025751438Z" level=info msg="connecting to shim 336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30" address="unix:///run/containerd/s/23642aa67c9bd48692e988d1e3904c4c854a96bba1a768d531898c32d6bce9ad" protocol=ttrpc version=3 Jul 15 04:33:05.047695 systemd[1]: Started cri-containerd-336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30.scope - libcontainer container 336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30. 
Jul 15 04:33:05.060750 containerd[1499]: time="2025-07-15T04:33:05.060694678Z" level=info msg="StartContainer for \"402e9b11186a5dd0dc81a1a0420b28ab98d010df0a54da8060f8817f7d094972\" returns successfully" Jul 15 04:33:05.070628 containerd[1499]: time="2025-07-15T04:33:05.070591050Z" level=info msg="StartContainer for \"e19cb6ae324956c7e12641e6bd2b82272164eb044f749c88a447fe71078f0845\" returns successfully" Jul 15 04:33:05.121025 containerd[1499]: time="2025-07-15T04:33:05.120858291Z" level=info msg="StartContainer for \"336f58a183ad4349976ad57fd029ee320c3a03709f22402fdf439a623c3acb30\" returns successfully" Jul 15 04:33:05.216971 kubelet[2262]: I0715 04:33:05.216906 2262 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:33:05.217286 kubelet[2262]: E0715 04:33:05.217257 2262 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Jul 15 04:33:05.447047 kubelet[2262]: E0715 04:33:05.446951 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:05.448752 kubelet[2262]: E0715 04:33:05.447212 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:05.451824 kubelet[2262]: E0715 04:33:05.451785 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:06.018535 kubelet[2262]: I0715 04:33:06.018505 2262 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:33:06.453745 kubelet[2262]: E0715 04:33:06.453432 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:07.930812 kubelet[2262]: E0715 04:33:07.930766 2262 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 04:33:07.971289 kubelet[2262]: I0715 04:33:07.971235 2262 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 04:33:07.971289 kubelet[2262]: E0715 04:33:07.971282 2262 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 04:33:08.404641 kubelet[2262]: I0715 04:33:08.404533 2262 apiserver.go:52] "Watching apiserver" Jul 15 04:33:08.417614 kubelet[2262]: I0715 04:33:08.417575 2262 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:33:09.371372 kubelet[2262]: E0715 04:33:09.371323 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:09.458607 kubelet[2262]: E0715 04:33:09.458580 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:10.032523 systemd[1]: Reload requested from client PID 2532 ('systemctl') (unit session-7.scope)... Jul 15 04:33:10.032542 systemd[1]: Reloading... Jul 15 04:33:10.103485 zram_generator::config[2575]: No configuration found. Jul 15 04:33:10.178144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:33:10.275469 systemd[1]: Reloading finished in 242 ms. 
Jul 15 04:33:10.302692 kubelet[2262]: I0715 04:33:10.302583 2262 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:33:10.302652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:33:10.311311 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:33:10.311576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:33:10.311633 systemd[1]: kubelet.service: Consumed 1.228s CPU time, 128.5M memory peak. Jul 15 04:33:10.313186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:33:10.449372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:33:10.453912 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:33:10.497132 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:33:10.497132 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 04:33:10.497132 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 04:33:10.497521 kubelet[2617]: I0715 04:33:10.497186 2617 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:33:10.502493 kubelet[2617]: I0715 04:33:10.502454 2617 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 04:33:10.502493 kubelet[2617]: I0715 04:33:10.502488 2617 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:33:10.503733 kubelet[2617]: I0715 04:33:10.503703 2617 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 04:33:10.506082 kubelet[2617]: I0715 04:33:10.506046 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 04:33:10.508656 kubelet[2617]: I0715 04:33:10.508589 2617 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:33:10.512424 kubelet[2617]: I0715 04:33:10.512407 2617 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:33:10.514903 kubelet[2617]: I0715 04:33:10.514879 2617 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:33:10.515108 kubelet[2617]: I0715 04:33:10.515090 2617 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 04:33:10.515335 kubelet[2617]: I0715 04:33:10.515305 2617 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:33:10.515602 kubelet[2617]: I0715 04:33:10.515394 2617 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 15 04:33:10.515765 kubelet[2617]: I0715 04:33:10.515750 2617 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:33:10.515829 kubelet[2617]: I0715 04:33:10.515819 2617 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 04:33:10.516005 kubelet[2617]: I0715 04:33:10.515991 2617 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:33:10.516225 kubelet[2617]: I0715 04:33:10.516211 2617 kubelet.go:408] "Attempting to sync node with API server" Jul 15 04:33:10.516355 kubelet[2617]: I0715 04:33:10.516314 2617 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:33:10.516446 kubelet[2617]: I0715 04:33:10.516436 2617 kubelet.go:314] "Adding apiserver pod source" Jul 15 04:33:10.516616 kubelet[2617]: I0715 04:33:10.516511 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:33:10.517437 kubelet[2617]: I0715 04:33:10.517404 2617 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:33:10.518338 kubelet[2617]: I0715 04:33:10.518106 2617 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:33:10.520238 kubelet[2617]: I0715 04:33:10.520214 2617 server.go:1274] "Started kubelet" Jul 15 04:33:10.521695 kubelet[2617]: I0715 04:33:10.521537 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:33:10.521837 kubelet[2617]: I0715 04:33:10.521818 2617 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:33:10.521966 kubelet[2617]: I0715 04:33:10.521940 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:33:10.522277 kubelet[2617]: I0715 04:33:10.522256 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 
04:33:10.523100 kubelet[2617]: I0715 04:33:10.523081 2617 server.go:449] "Adding debug handlers to kubelet server" Jul 15 04:33:10.526575 kubelet[2617]: I0715 04:33:10.526111 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:33:10.526575 kubelet[2617]: I0715 04:33:10.526446 2617 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:33:10.527504 kubelet[2617]: E0715 04:33:10.526693 2617 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:33:10.527504 kubelet[2617]: I0715 04:33:10.527362 2617 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:33:10.528333 kubelet[2617]: I0715 04:33:10.528224 2617 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:33:10.529569 kubelet[2617]: I0715 04:33:10.529532 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:33:10.534633 kubelet[2617]: I0715 04:33:10.534505 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:33:10.538551 kubelet[2617]: I0715 04:33:10.538527 2617 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:33:10.539220 kubelet[2617]: I0715 04:33:10.539184 2617 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:33:10.539565 kubelet[2617]: I0715 04:33:10.539298 2617 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:33:10.539565 kubelet[2617]: I0715 04:33:10.539327 2617 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:33:10.539565 kubelet[2617]: I0715 04:33:10.539339 2617 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:33:10.541761 kubelet[2617]: E0715 04:33:10.541549 2617 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:33:10.544912 kubelet[2617]: E0715 04:33:10.544885 2617 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:33:10.592125 kubelet[2617]: I0715 04:33:10.592030 2617 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:33:10.592125 kubelet[2617]: I0715 04:33:10.592051 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:33:10.592125 kubelet[2617]: I0715 04:33:10.592074 2617 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:33:10.592257 kubelet[2617]: I0715 04:33:10.592225 2617 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:33:10.592257 kubelet[2617]: I0715 04:33:10.592235 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:33:10.592257 kubelet[2617]: I0715 04:33:10.592252 2617 policy_none.go:49] "None policy: Start" Jul 15 04:33:10.593756 kubelet[2617]: I0715 04:33:10.593733 2617 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:33:10.594340 kubelet[2617]: I0715 04:33:10.593844 2617 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:33:10.594340 kubelet[2617]: 
I0715 04:33:10.593992 2617 state_mem.go:75] "Updated machine memory state" Jul 15 04:33:10.597846 kubelet[2617]: I0715 04:33:10.597788 2617 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:33:10.598005 kubelet[2617]: I0715 04:33:10.597962 2617 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:33:10.598005 kubelet[2617]: I0715 04:33:10.597979 2617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:33:10.598187 kubelet[2617]: I0715 04:33:10.598171 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:33:10.649149 kubelet[2617]: E0715 04:33:10.649078 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:10.701793 kubelet[2617]: I0715 04:33:10.701718 2617 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:33:10.708384 kubelet[2617]: I0715 04:33:10.708337 2617 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 04:33:10.708567 kubelet[2617]: I0715 04:33:10.708554 2617 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 04:33:10.830139 kubelet[2617]: I0715 04:33:10.830094 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:10.830139 kubelet[2617]: I0715 04:33:10.830140 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:10.830314 kubelet[2617]: I0715 04:33:10.830165 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:10.830314 kubelet[2617]: I0715 04:33:10.830182 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:10.830314 kubelet[2617]: I0715 04:33:10.830199 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:10.830314 kubelet[2617]: I0715 04:33:10.830215 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:33:10.830314 kubelet[2617]: I0715 04:33:10.830232 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:33:10.830410 kubelet[2617]: I0715 04:33:10.830246 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:10.830410 kubelet[2617]: I0715 04:33:10.830261 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ed2c7911a007b75d574ff9c074b38d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7ed2c7911a007b75d574ff9c074b38d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:10.948017 kubelet[2617]: E0715 04:33:10.947907 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:10.950108 kubelet[2617]: E0715 04:33:10.950023 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:10.950183 kubelet[2617]: E0715 04:33:10.950131 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:11.517403 kubelet[2617]: I0715 04:33:11.517354 2617 apiserver.go:52] "Watching apiserver" Jul 15 04:33:11.528667 kubelet[2617]: I0715 04:33:11.528626 2617 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:33:11.571024 
kubelet[2617]: E0715 04:33:11.570995 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:11.571184 kubelet[2617]: E0715 04:33:11.571134 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:11.575665 kubelet[2617]: E0715 04:33:11.575630 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:33:11.575827 kubelet[2617]: E0715 04:33:11.575792 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:11.602116 kubelet[2617]: I0715 04:33:11.602000 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.601981354 podStartE2EDuration="2.601981354s" podCreationTimestamp="2025-07-15 04:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:11.595174452 +0000 UTC m=+1.137570263" watchObservedRunningTime="2025-07-15 04:33:11.601981354 +0000 UTC m=+1.144377165" Jul 15 04:33:11.613670 kubelet[2617]: I0715 04:33:11.613366 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.613350712 podStartE2EDuration="1.613350712s" podCreationTimestamp="2025-07-15 04:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:11.612895994 +0000 UTC m=+1.155291805" watchObservedRunningTime="2025-07-15 
04:33:11.613350712 +0000 UTC m=+1.155746523" Jul 15 04:33:11.613670 kubelet[2617]: I0715 04:33:11.613451 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6134472610000001 podStartE2EDuration="1.613447261s" podCreationTimestamp="2025-07-15 04:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:11.602608261 +0000 UTC m=+1.145004072" watchObservedRunningTime="2025-07-15 04:33:11.613447261 +0000 UTC m=+1.155843072" Jul 15 04:33:12.572057 kubelet[2617]: E0715 04:33:12.572027 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:13.497247 kubelet[2617]: E0715 04:33:13.497153 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:13.573150 kubelet[2617]: E0715 04:33:13.573063 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:14.699286 kubelet[2617]: I0715 04:33:14.699247 2617 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:33:14.699931 containerd[1499]: time="2025-07-15T04:33:14.699838780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 04:33:14.700247 kubelet[2617]: I0715 04:33:14.699998 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:33:15.585021 systemd[1]: Created slice kubepods-besteffort-pode167a98c_922c_4a8f_a875_74ec3887438f.slice - libcontainer container kubepods-besteffort-pode167a98c_922c_4a8f_a875_74ec3887438f.slice. Jul 15 04:33:15.661615 kubelet[2617]: I0715 04:33:15.661577 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e167a98c-922c-4a8f-a875-74ec3887438f-kube-proxy\") pod \"kube-proxy-g8s2k\" (UID: \"e167a98c-922c-4a8f-a875-74ec3887438f\") " pod="kube-system/kube-proxy-g8s2k" Jul 15 04:33:15.661871 kubelet[2617]: I0715 04:33:15.661816 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e167a98c-922c-4a8f-a875-74ec3887438f-xtables-lock\") pod \"kube-proxy-g8s2k\" (UID: \"e167a98c-922c-4a8f-a875-74ec3887438f\") " pod="kube-system/kube-proxy-g8s2k" Jul 15 04:33:15.661871 kubelet[2617]: I0715 04:33:15.661841 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e167a98c-922c-4a8f-a875-74ec3887438f-lib-modules\") pod \"kube-proxy-g8s2k\" (UID: \"e167a98c-922c-4a8f-a875-74ec3887438f\") " pod="kube-system/kube-proxy-g8s2k" Jul 15 04:33:15.661993 kubelet[2617]: I0715 04:33:15.661972 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qvn\" (UniqueName: \"kubernetes.io/projected/e167a98c-922c-4a8f-a875-74ec3887438f-kube-api-access-57qvn\") pod \"kube-proxy-g8s2k\" (UID: \"e167a98c-922c-4a8f-a875-74ec3887438f\") " pod="kube-system/kube-proxy-g8s2k" Jul 15 04:33:15.810233 systemd[1]: Created slice 
kubepods-besteffort-pod3728368f_ce0e_477c_bdd8_61f1511ce229.slice - libcontainer container kubepods-besteffort-pod3728368f_ce0e_477c_bdd8_61f1511ce229.slice. Jul 15 04:33:15.863251 kubelet[2617]: I0715 04:33:15.863125 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3728368f-ce0e-477c-bdd8-61f1511ce229-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-4665k\" (UID: \"3728368f-ce0e-477c-bdd8-61f1511ce229\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4665k" Jul 15 04:33:15.863251 kubelet[2617]: I0715 04:33:15.863173 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp8gk\" (UniqueName: \"kubernetes.io/projected/3728368f-ce0e-477c-bdd8-61f1511ce229-kube-api-access-bp8gk\") pod \"tigera-operator-5bf8dfcb4-4665k\" (UID: \"3728368f-ce0e-477c-bdd8-61f1511ce229\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4665k" Jul 15 04:33:15.894480 kubelet[2617]: E0715 04:33:15.894431 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:15.895163 containerd[1499]: time="2025-07-15T04:33:15.895131944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8s2k,Uid:e167a98c-922c-4a8f-a875-74ec3887438f,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:15.931263 containerd[1499]: time="2025-07-15T04:33:15.931193980Z" level=info msg="connecting to shim e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4" address="unix:///run/containerd/s/623e4e45007a052a8e477c18a728439527939dba4417010859f7a6bb0292a8ea" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:15.959666 systemd[1]: Started cri-containerd-e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4.scope - libcontainer container 
e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4. Jul 15 04:33:15.992643 containerd[1499]: time="2025-07-15T04:33:15.992604972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8s2k,Uid:e167a98c-922c-4a8f-a875-74ec3887438f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4\"" Jul 15 04:33:15.993526 kubelet[2617]: E0715 04:33:15.993441 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:15.996693 containerd[1499]: time="2025-07-15T04:33:15.996630879Z" level=info msg="CreateContainer within sandbox \"e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:33:16.010114 containerd[1499]: time="2025-07-15T04:33:16.010030944Z" level=info msg="Container 8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:16.016481 containerd[1499]: time="2025-07-15T04:33:16.016415567Z" level=info msg="CreateContainer within sandbox \"e282144d9ab4fd835b2a7503f7d614ca6c341bd510f2cbbd899b515ecba5e9a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6\"" Jul 15 04:33:16.017764 containerd[1499]: time="2025-07-15T04:33:16.017391632Z" level=info msg="StartContainer for \"8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6\"" Jul 15 04:33:16.020284 containerd[1499]: time="2025-07-15T04:33:16.020205149Z" level=info msg="connecting to shim 8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6" address="unix:///run/containerd/s/623e4e45007a052a8e477c18a728439527939dba4417010859f7a6bb0292a8ea" protocol=ttrpc version=3 Jul 15 04:33:16.052687 systemd[1]: Started 
cri-containerd-8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6.scope - libcontainer container 8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6. Jul 15 04:33:16.093362 containerd[1499]: time="2025-07-15T04:33:16.093309617Z" level=info msg="StartContainer for \"8fc671c579d8220f837b739e8a8665e4fb328211a0db72be8ed84a57a0e479d6\" returns successfully" Jul 15 04:33:16.117162 containerd[1499]: time="2025-07-15T04:33:16.117060996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4665k,Uid:3728368f-ce0e-477c-bdd8-61f1511ce229,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:33:16.140167 containerd[1499]: time="2025-07-15T04:33:16.140012524Z" level=info msg="connecting to shim 88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711" address="unix:///run/containerd/s/85e6c7eabddcd43fff4c3d60c290e86894f9a3faee69cebe8b469fc8553de074" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:16.164644 systemd[1]: Started cri-containerd-88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711.scope - libcontainer container 88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711. 
Jul 15 04:33:16.204260 containerd[1499]: time="2025-07-15T04:33:16.204039445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4665k,Uid:3728368f-ce0e-477c-bdd8-61f1511ce229,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711\"" Jul 15 04:33:16.207172 containerd[1499]: time="2025-07-15T04:33:16.205680933Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:33:16.586169 kubelet[2617]: E0715 04:33:16.586086 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:16.595451 kubelet[2617]: I0715 04:33:16.595390 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g8s2k" podStartSLOduration=1.595370531 podStartE2EDuration="1.595370531s" podCreationTimestamp="2025-07-15 04:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:16.594701748 +0000 UTC m=+6.137097559" watchObservedRunningTime="2025-07-15 04:33:16.595370531 +0000 UTC m=+6.137766382" Jul 15 04:33:16.861508 kubelet[2617]: E0715 04:33:16.861349 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:17.533671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507325430.mount: Deactivated successfully. 
Jul 15 04:33:17.588563 kubelet[2617]: E0715 04:33:17.588492 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:17.858328 containerd[1499]: time="2025-07-15T04:33:17.858217999Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:17.859226 containerd[1499]: time="2025-07-15T04:33:17.859063454Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 04:33:17.859914 containerd[1499]: time="2025-07-15T04:33:17.859880839Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:17.861766 containerd[1499]: time="2025-07-15T04:33:17.861736250Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:17.862518 containerd[1499]: time="2025-07-15T04:33:17.862484860Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.656771619s" Jul 15 04:33:17.863044 containerd[1499]: time="2025-07-15T04:33:17.862604737Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:33:17.864526 containerd[1499]: time="2025-07-15T04:33:17.864493455Z" level=info msg="CreateContainer within sandbox 
\"88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:33:17.872518 containerd[1499]: time="2025-07-15T04:33:17.871943688Z" level=info msg="Container 3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:17.874127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133685686.mount: Deactivated successfully. Jul 15 04:33:17.879388 containerd[1499]: time="2025-07-15T04:33:17.879334261Z" level=info msg="CreateContainer within sandbox \"88064f882221d2cd85e3724deb73ffcbff55cf4c72428091697ccd92c4337711\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a\"" Jul 15 04:33:17.879977 containerd[1499]: time="2025-07-15T04:33:17.879784499Z" level=info msg="StartContainer for \"3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a\"" Jul 15 04:33:17.880705 containerd[1499]: time="2025-07-15T04:33:17.880675098Z" level=info msg="connecting to shim 3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a" address="unix:///run/containerd/s/85e6c7eabddcd43fff4c3d60c290e86894f9a3faee69cebe8b469fc8553de074" protocol=ttrpc version=3 Jul 15 04:33:17.903628 systemd[1]: Started cri-containerd-3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a.scope - libcontainer container 3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a. 
Jul 15 04:33:17.931407 containerd[1499]: time="2025-07-15T04:33:17.931364970Z" level=info msg="StartContainer for \"3b843e3024799f63b01e9a817e2327011f0191befa84e00b4b8b8ec937f6c64a\" returns successfully" Jul 15 04:33:22.087722 kubelet[2617]: E0715 04:33:22.087680 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:22.097924 kubelet[2617]: I0715 04:33:22.097862 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-4665k" podStartSLOduration=5.439615321 podStartE2EDuration="7.097847517s" podCreationTimestamp="2025-07-15 04:33:15 +0000 UTC" firstStartedPulling="2025-07-15 04:33:16.205060812 +0000 UTC m=+5.747456583" lastFinishedPulling="2025-07-15 04:33:17.863292968 +0000 UTC m=+7.405688779" observedRunningTime="2025-07-15 04:33:18.599631598 +0000 UTC m=+8.142027409" watchObservedRunningTime="2025-07-15 04:33:22.097847517 +0000 UTC m=+11.640243328" Jul 15 04:33:23.426894 sudo[1709]: pam_unix(sudo:session): session closed for user root Jul 15 04:33:23.428995 sshd[1708]: Connection closed by 10.0.0.1 port 48084 Jul 15 04:33:23.429428 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 15 04:33:23.434735 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:48084.service: Deactivated successfully. Jul 15 04:33:23.437267 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:33:23.437533 systemd[1]: session-7.scope: Consumed 5.779s CPU time, 222M memory peak. Jul 15 04:33:23.438506 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:33:23.439996 systemd-logind[1478]: Removed session 7. 
Jul 15 04:33:23.509797 kubelet[2617]: E0715 04:33:23.509764 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:26.516611 update_engine[1481]: I20250715 04:33:26.516544 1481 update_attempter.cc:509] Updating boot flags... Jul 15 04:33:30.632262 systemd[1]: Created slice kubepods-besteffort-podc994b260_1d8d_4f02_bb41_e20796449803.slice - libcontainer container kubepods-besteffort-podc994b260_1d8d_4f02_bb41_e20796449803.slice. Jul 15 04:33:30.752241 kubelet[2617]: I0715 04:33:30.752136 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c994b260-1d8d-4f02-bb41-e20796449803-tigera-ca-bundle\") pod \"calico-typha-68654cd5c4-kb6mf\" (UID: \"c994b260-1d8d-4f02-bb41-e20796449803\") " pod="calico-system/calico-typha-68654cd5c4-kb6mf" Jul 15 04:33:30.752241 kubelet[2617]: I0715 04:33:30.752178 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qvx\" (UniqueName: \"kubernetes.io/projected/c994b260-1d8d-4f02-bb41-e20796449803-kube-api-access-v4qvx\") pod \"calico-typha-68654cd5c4-kb6mf\" (UID: \"c994b260-1d8d-4f02-bb41-e20796449803\") " pod="calico-system/calico-typha-68654cd5c4-kb6mf" Jul 15 04:33:30.752241 kubelet[2617]: I0715 04:33:30.752198 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c994b260-1d8d-4f02-bb41-e20796449803-typha-certs\") pod \"calico-typha-68654cd5c4-kb6mf\" (UID: \"c994b260-1d8d-4f02-bb41-e20796449803\") " pod="calico-system/calico-typha-68654cd5c4-kb6mf" Jul 15 04:33:30.880989 systemd[1]: Created slice kubepods-besteffort-poda7d48e1b_8962_49de_8883_dcb1fc609f85.slice - libcontainer container 
kubepods-besteffort-poda7d48e1b_8962_49de_8883_dcb1fc609f85.slice. Jul 15 04:33:30.936765 kubelet[2617]: E0715 04:33:30.936642 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:30.940219 containerd[1499]: time="2025-07-15T04:33:30.940184712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68654cd5c4-kb6mf,Uid:c994b260-1d8d-4f02-bb41-e20796449803,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:31.013199 containerd[1499]: time="2025-07-15T04:33:31.013145783Z" level=info msg="connecting to shim 703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de" address="unix:///run/containerd/s/3146aa835d60a1eab2da5e504c069f1cefadb81394e8e703baa8eb35a64f6069" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:31.045216 kubelet[2617]: E0715 04:33:31.043786 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc" Jul 15 04:33:31.054720 kubelet[2617]: I0715 04:33:31.054205 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-cni-net-dir\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.054720 kubelet[2617]: I0715 04:33:31.054242 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8fa57634-f69a-42ff-ae8f-1f9db265d0bc-registration-dir\") pod \"csi-node-driver-959pd\" (UID: \"8fa57634-f69a-42ff-ae8f-1f9db265d0bc\") " 
pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:31.054720 kubelet[2617]: I0715 04:33:31.054259 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2r9l\" (UniqueName: \"kubernetes.io/projected/8fa57634-f69a-42ff-ae8f-1f9db265d0bc-kube-api-access-b2r9l\") pod \"csi-node-driver-959pd\" (UID: \"8fa57634-f69a-42ff-ae8f-1f9db265d0bc\") " pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:31.054720 kubelet[2617]: I0715 04:33:31.054275 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-var-run-calico\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.054720 kubelet[2617]: I0715 04:33:31.054292 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-flexvol-driver-host\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.054928 kubelet[2617]: I0715 04:33:31.054307 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7d48e1b-8962-49de-8883-dcb1fc609f85-tigera-ca-bundle\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.054928 kubelet[2617]: I0715 04:33:31.054322 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fa57634-f69a-42ff-ae8f-1f9db265d0bc-kubelet-dir\") pod \"csi-node-driver-959pd\" (UID: \"8fa57634-f69a-42ff-ae8f-1f9db265d0bc\") " 
pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:31.054928 kubelet[2617]: I0715 04:33:31.054336 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8fa57634-f69a-42ff-ae8f-1f9db265d0bc-varrun\") pod \"csi-node-driver-959pd\" (UID: \"8fa57634-f69a-42ff-ae8f-1f9db265d0bc\") " pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:31.054928 kubelet[2617]: I0715 04:33:31.054350 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-cni-log-dir\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.054928 kubelet[2617]: I0715 04:33:31.054365 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-var-lib-calico\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055030 kubelet[2617]: I0715 04:33:31.054379 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqcg\" (UniqueName: \"kubernetes.io/projected/a7d48e1b-8962-49de-8883-dcb1fc609f85-kube-api-access-zdqcg\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055030 kubelet[2617]: I0715 04:33:31.054394 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-cni-bin-dir\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055030 
kubelet[2617]: I0715 04:33:31.054490 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-lib-modules\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055030 kubelet[2617]: I0715 04:33:31.054531 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-policysync\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055030 kubelet[2617]: I0715 04:33:31.054568 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7d48e1b-8962-49de-8883-dcb1fc609f85-xtables-lock\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055133 kubelet[2617]: I0715 04:33:31.054608 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a7d48e1b-8962-49de-8883-dcb1fc609f85-node-certs\") pod \"calico-node-kj8lm\" (UID: \"a7d48e1b-8962-49de-8883-dcb1fc609f85\") " pod="calico-system/calico-node-kj8lm" Jul 15 04:33:31.055133 kubelet[2617]: I0715 04:33:31.054629 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8fa57634-f69a-42ff-ae8f-1f9db265d0bc-socket-dir\") pod \"csi-node-driver-959pd\" (UID: \"8fa57634-f69a-42ff-ae8f-1f9db265d0bc\") " pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:31.088650 systemd[1]: Started 
cri-containerd-703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de.scope - libcontainer container 703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de.
Jul 15 04:33:31.133512 containerd[1499]: time="2025-07-15T04:33:31.133326101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68654cd5c4-kb6mf,Uid:c994b260-1d8d-4f02-bb41-e20796449803,Namespace:calico-system,Attempt:0,} returns sandbox id \"703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de\""
Jul 15 04:33:31.139223 kubelet[2617]: E0715 04:33:31.139191 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 04:33:31.143478 containerd[1499]: time="2025-07-15T04:33:31.143438544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 04:33:31.157326 kubelet[2617]: E0715 04:33:31.157168 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:33:31.157326 kubelet[2617]: W0715 04:33:31.157307 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:33:31.157550 kubelet[2617]: E0715 04:33:31.157351 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-entry FlexVolume sequence (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats with identical messages through Jul 15 04:33:31.178600; duplicate entries omitted]
Jul 15 04:33:31.185860 containerd[1499]: time="2025-07-15T04:33:31.185814231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kj8lm,Uid:a7d48e1b-8962-49de-8883-dcb1fc609f85,Namespace:calico-system,Attempt:0,}"
Jul 15 04:33:31.210775 containerd[1499]: time="2025-07-15T04:33:31.209828082Z" level=info msg="connecting to shim 37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c" address="unix:///run/containerd/s/4c421bc476dc73a749ae01ee3013a5a7a665ff1759f172c863d2951a807b0dc7" namespace=k8s.io protocol=ttrpc version=3
Jul 15 04:33:31.238624 systemd[1]: Started cri-containerd-37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c.scope - libcontainer container 37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c.
Jul 15 04:33:31.267721 containerd[1499]: time="2025-07-15T04:33:31.267671910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kj8lm,Uid:a7d48e1b-8962-49de-8883-dcb1fc609f85,Namespace:calico-system,Attempt:0,} returns sandbox id \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\""
Jul 15 04:33:32.233103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409990028.mount: Deactivated successfully.
Jul 15 04:33:32.540231 kubelet[2617]: E0715 04:33:32.540102 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc"
Jul 15 04:33:32.726179 containerd[1499]: time="2025-07-15T04:33:32.725652564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:33:32.726592 containerd[1499]: time="2025-07-15T04:33:32.726559680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 15 04:33:32.729665 containerd[1499]: time="2025-07-15T04:33:32.729612102Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:33:32.732678 containerd[1499]: time="2025-07-15T04:33:32.732642767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:33:32.733286 containerd[1499]: time="2025-07-15T04:33:32.733237485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.589708474s"
Jul 15 04:33:32.733358 containerd[1499]: time="2025-07-15T04:33:32.733283239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 15 04:33:32.746194 containerd[1499]: time="2025-07-15T04:33:32.746148277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 04:33:32.782560 containerd[1499]: time="2025-07-15T04:33:32.782505136Z" level=info msg="CreateContainer within sandbox \"703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 04:33:32.805173 containerd[1499]: time="2025-07-15T04:33:32.804977337Z" level=info msg="Container 0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:33:32.812110 containerd[1499]: time="2025-07-15T04:33:32.812058967Z" level=info msg="CreateContainer within sandbox \"703e883f917b53fb8554fc95b8bde042a70c16610936041c0938b761c68612de\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08\""
Jul 15 04:33:32.814866 containerd[1499]: time="2025-07-15T04:33:32.814824988Z" level=info msg="StartContainer for \"0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08\""
Jul 15 04:33:32.816211 containerd[1499]: time="2025-07-15T04:33:32.816180123Z" level=info msg="connecting to shim 0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08" address="unix:///run/containerd/s/3146aa835d60a1eab2da5e504c069f1cefadb81394e8e703baa8eb35a64f6069" protocol=ttrpc version=3
Jul 15 04:33:32.843646 systemd[1]: Started cri-containerd-0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08.scope - libcontainer container 0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08.
Jul 15 04:33:32.891573 containerd[1499]: time="2025-07-15T04:33:32.891533360Z" level=info msg="StartContainer for \"0d4ab0efd65b65031cffd19221b14b4ccba775b689362543ccfa80a81727cf08\" returns successfully"
Jul 15 04:33:33.635301 kubelet[2617]: E0715 04:33:33.635259 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 04:33:33.647863 kubelet[2617]: I0715 04:33:33.647806 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68654cd5c4-kb6mf" podStartSLOduration=2.04374169 podStartE2EDuration="3.647790782s" podCreationTimestamp="2025-07-15 04:33:30 +0000 UTC" firstStartedPulling="2025-07-15 04:33:31.141485549 +0000 UTC m=+20.683881360" lastFinishedPulling="2025-07-15 04:33:32.745534681 +0000 UTC m=+22.287930452" observedRunningTime="2025-07-15 04:33:33.647215736 +0000 UTC m=+23.189611546" watchObservedRunningTime="2025-07-15 04:33:33.647790782 +0000 UTC m=+23.190186593"
Jul 15 04:33:33.672196 kubelet[2617]: E0715 04:33:33.672159 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:33:33.672196 kubelet[2617]: W0715 04:33:33.672192 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:33:33.672364 kubelet[2617]: E0715 04:33:33.672214 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-entry FlexVolume sequence (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats with identical messages through Jul 15 04:33:33.675710; duplicate entries omitted]
Error: unexpected end of JSON input" Jul 15 04:33:33.675847 kubelet[2617]: E0715 04:33:33.675789 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.675847 kubelet[2617]: W0715 04:33:33.675805 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.675954 kubelet[2617]: E0715 04:33:33.675940 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.676109 kubelet[2617]: E0715 04:33:33.676094 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.676109 kubelet[2617]: W0715 04:33:33.676106 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.676194 kubelet[2617]: E0715 04:33:33.676121 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.676281 kubelet[2617]: E0715 04:33:33.676270 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.676281 kubelet[2617]: W0715 04:33:33.676280 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.676351 kubelet[2617]: E0715 04:33:33.676292 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.676515 kubelet[2617]: E0715 04:33:33.676501 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.676515 kubelet[2617]: W0715 04:33:33.676514 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.676559 kubelet[2617]: E0715 04:33:33.676529 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.676696 kubelet[2617]: E0715 04:33:33.676685 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.676728 kubelet[2617]: W0715 04:33:33.676698 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.676749 kubelet[2617]: E0715 04:33:33.676726 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.676876 kubelet[2617]: E0715 04:33:33.676866 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.676876 kubelet[2617]: W0715 04:33:33.676876 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.676926 kubelet[2617]: E0715 04:33:33.676893 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.677001 kubelet[2617]: E0715 04:33:33.676991 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677001 kubelet[2617]: W0715 04:33:33.677001 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677058 kubelet[2617]: E0715 04:33:33.677014 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.677161 kubelet[2617]: E0715 04:33:33.677149 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677197 kubelet[2617]: W0715 04:33:33.677162 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677197 kubelet[2617]: E0715 04:33:33.677176 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.677382 kubelet[2617]: E0715 04:33:33.677370 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677420 kubelet[2617]: W0715 04:33:33.677382 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677420 kubelet[2617]: E0715 04:33:33.677400 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.677581 kubelet[2617]: E0715 04:33:33.677566 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677607 kubelet[2617]: W0715 04:33:33.677581 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677607 kubelet[2617]: E0715 04:33:33.677602 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.677771 kubelet[2617]: E0715 04:33:33.677757 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677771 kubelet[2617]: W0715 04:33:33.677771 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677827 kubelet[2617]: E0715 04:33:33.677785 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:33:33.677936 kubelet[2617]: E0715 04:33:33.677924 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:33:33.677936 kubelet[2617]: W0715 04:33:33.677935 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:33:33.677984 kubelet[2617]: E0715 04:33:33.677943 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:33:33.894858 containerd[1499]: time="2025-07-15T04:33:33.893088638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:33.894858 containerd[1499]: time="2025-07-15T04:33:33.893888176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 04:33:33.895653 containerd[1499]: time="2025-07-15T04:33:33.895615954Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:33.897831 containerd[1499]: time="2025-07-15T04:33:33.897779596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:33.898609 containerd[1499]: time="2025-07-15T04:33:33.898571934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.152382343s" Jul 15 04:33:33.898609 containerd[1499]: time="2025-07-15T04:33:33.898606650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 04:33:33.901326 containerd[1499]: time="2025-07-15T04:33:33.901255149Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 04:33:33.934686 containerd[1499]: time="2025-07-15T04:33:33.934646061Z" level=info msg="Container 18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:33.936323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834673503.mount: Deactivated successfully. Jul 15 04:33:33.942713 containerd[1499]: time="2025-07-15T04:33:33.942621597Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\"" Jul 15 04:33:33.943375 containerd[1499]: time="2025-07-15T04:33:33.943270393Z" level=info msg="StartContainer for \"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\"" Jul 15 04:33:33.944913 containerd[1499]: time="2025-07-15T04:33:33.944888506Z" level=info msg="connecting to shim 18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9" address="unix:///run/containerd/s/4c421bc476dc73a749ae01ee3013a5a7a665ff1759f172c863d2951a807b0dc7" protocol=ttrpc version=3 Jul 15 04:33:33.967628 systemd[1]: Started cri-containerd-18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9.scope - libcontainer container 18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9. Jul 15 04:33:34.002744 containerd[1499]: time="2025-07-15T04:33:34.002637138Z" level=info msg="StartContainer for \"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\" returns successfully" Jul 15 04:33:34.061497 systemd[1]: cri-containerd-18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9.scope: Deactivated successfully. 
Jul 15 04:33:34.081538 containerd[1499]: time="2025-07-15T04:33:34.081487205Z" level=info msg="received exit event container_id:\"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\" id:\"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\" pid:3262 exited_at:{seconds:1752554014 nanos:75873841}" Jul 15 04:33:34.081880 containerd[1499]: time="2025-07-15T04:33:34.081784449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\" id:\"18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9\" pid:3262 exited_at:{seconds:1752554014 nanos:75873841}" Jul 15 04:33:34.119593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d4ad2c5212dbd824ae7af088152aa995cbbe6fac03e6598d859d82e1629fd9-rootfs.mount: Deactivated successfully. Jul 15 04:33:34.540181 kubelet[2617]: E0715 04:33:34.539821 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc" Jul 15 04:33:34.638594 kubelet[2617]: I0715 04:33:34.638558 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:34.639037 kubelet[2617]: E0715 04:33:34.639016 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:34.640031 containerd[1499]: time="2025-07-15T04:33:34.639982881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 04:33:36.541450 kubelet[2617]: E0715 04:33:36.541400 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc" Jul 15 04:33:37.974837 containerd[1499]: time="2025-07-15T04:33:37.974794862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:37.976128 containerd[1499]: time="2025-07-15T04:33:37.975191423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 04:33:37.976128 containerd[1499]: time="2025-07-15T04:33:37.976082695Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:37.978075 containerd[1499]: time="2025-07-15T04:33:37.978049700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:37.978822 containerd[1499]: time="2025-07-15T04:33:37.978623523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.338602086s" Jul 15 04:33:37.978822 containerd[1499]: time="2025-07-15T04:33:37.978648560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 04:33:37.981359 containerd[1499]: time="2025-07-15T04:33:37.981319055Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 04:33:37.989484 containerd[1499]: time="2025-07-15T04:33:37.988930220Z" level=info msg="Container f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:37.995770 containerd[1499]: time="2025-07-15T04:33:37.995727746Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\"" Jul 15 04:33:37.996136 containerd[1499]: time="2025-07-15T04:33:37.996116427Z" level=info msg="StartContainer for \"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\"" Jul 15 04:33:37.997714 containerd[1499]: time="2025-07-15T04:33:37.997673993Z" level=info msg="connecting to shim f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268" address="unix:///run/containerd/s/4c421bc476dc73a749ae01ee3013a5a7a665ff1759f172c863d2951a807b0dc7" protocol=ttrpc version=3 Jul 15 04:33:38.015626 systemd[1]: Started cri-containerd-f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268.scope - libcontainer container f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268. 
Jul 15 04:33:38.053216 containerd[1499]: time="2025-07-15T04:33:38.053156575Z" level=info msg="StartContainer for \"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\" returns successfully" Jul 15 04:33:38.540318 kubelet[2617]: E0715 04:33:38.540222 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc" Jul 15 04:33:38.565648 systemd[1]: cri-containerd-f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268.scope: Deactivated successfully. Jul 15 04:33:38.566241 systemd[1]: cri-containerd-f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268.scope: Consumed 486ms CPU time, 174.8M memory peak, 2.1M read from disk, 165.8M written to disk. Jul 15 04:33:38.567106 containerd[1499]: time="2025-07-15T04:33:38.567050021Z" level=info msg="received exit event container_id:\"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\" id:\"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\" pid:3324 exited_at:{seconds:1752554018 nanos:566354445}" Jul 15 04:33:38.567409 containerd[1499]: time="2025-07-15T04:33:38.567292278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\" id:\"f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268\" pid:3324 exited_at:{seconds:1752554018 nanos:566354445}" Jul 15 04:33:38.587049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f608a48c1f409beb7922b3c3417e932ab04fa656d65b87c7ea210dddeebdc268-rootfs.mount: Deactivated successfully. 
Jul 15 04:33:38.649302 kubelet[2617]: I0715 04:33:38.649164 2617 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 04:33:38.657723 containerd[1499]: time="2025-07-15T04:33:38.657676832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:33:38.714781 systemd[1]: Created slice kubepods-burstable-pod4d2aa392_68c8_4783_b72c_0de907a5114b.slice - libcontainer container kubepods-burstable-pod4d2aa392_68c8_4783_b72c_0de907a5114b.slice. Jul 15 04:33:38.721228 systemd[1]: Created slice kubepods-besteffort-pod5ed815e8_8dff_4f47_9985_135c045643f0.slice - libcontainer container kubepods-besteffort-pod5ed815e8_8dff_4f47_9985_135c045643f0.slice. Jul 15 04:33:38.727284 systemd[1]: Created slice kubepods-besteffort-pod3b87aef8_f00a_4f17_be31_0b279e5b7f35.slice - libcontainer container kubepods-besteffort-pod3b87aef8_f00a_4f17_be31_0b279e5b7f35.slice. Jul 15 04:33:38.735673 systemd[1]: Created slice kubepods-burstable-podf84ceeeb_aacc_4938_944f_2df46d19a16c.slice - libcontainer container kubepods-burstable-podf84ceeeb_aacc_4938_944f_2df46d19a16c.slice. Jul 15 04:33:38.739599 systemd[1]: Created slice kubepods-besteffort-pod23c819d8_fed4_400c_915e_7b92b0eda130.slice - libcontainer container kubepods-besteffort-pod23c819d8_fed4_400c_915e_7b92b0eda130.slice. Jul 15 04:33:38.744381 systemd[1]: Created slice kubepods-besteffort-podde06bca6_0234_4d15_bcad_837748ad6701.slice - libcontainer container kubepods-besteffort-podde06bca6_0234_4d15_bcad_837748ad6701.slice. Jul 15 04:33:38.749027 systemd[1]: Created slice kubepods-besteffort-podad1d9544_62ac_4e6f_8783_fcb5f0ab849d.slice - libcontainer container kubepods-besteffort-podad1d9544_62ac_4e6f_8783_fcb5f0ab849d.slice. 
Jul 15 04:33:38.814220 kubelet[2617]: I0715 04:33:38.813343 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b87aef8-f00a-4f17-be31-0b279e5b7f35-tigera-ca-bundle\") pod \"calico-kube-controllers-b4b45f849-dl72s\" (UID: \"3b87aef8-f00a-4f17-be31-0b279e5b7f35\") " pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" Jul 15 04:33:38.814220 kubelet[2617]: I0715 04:33:38.813403 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86cl\" (UniqueName: \"kubernetes.io/projected/3b87aef8-f00a-4f17-be31-0b279e5b7f35-kube-api-access-q86cl\") pod \"calico-kube-controllers-b4b45f849-dl72s\" (UID: \"3b87aef8-f00a-4f17-be31-0b279e5b7f35\") " pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" Jul 15 04:33:38.814220 kubelet[2617]: I0715 04:33:38.813427 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f84ceeeb-aacc-4938-944f-2df46d19a16c-config-volume\") pod \"coredns-7c65d6cfc9-zcmdf\" (UID: \"f84ceeeb-aacc-4938-944f-2df46d19a16c\") " pod="kube-system/coredns-7c65d6cfc9-zcmdf" Jul 15 04:33:38.814220 kubelet[2617]: I0715 04:33:38.813512 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23c819d8-fed4-400c-915e-7b92b0eda130-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-nfnml\" (UID: \"23c819d8-fed4-400c-915e-7b92b0eda130\") " pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:38.814220 kubelet[2617]: I0715 04:33:38.813821 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29gkq\" (UniqueName: \"kubernetes.io/projected/de06bca6-0234-4d15-bcad-837748ad6701-kube-api-access-29gkq\") pod 
\"whisker-6867b7bbf5-w2zm7\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " pod="calico-system/whisker-6867b7bbf5-w2zm7" Jul 15 04:33:38.814489 kubelet[2617]: I0715 04:33:38.813841 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad1d9544-62ac-4e6f-8783-fcb5f0ab849d-calico-apiserver-certs\") pod \"calico-apiserver-b66bcf85-n8tlt\" (UID: \"ad1d9544-62ac-4e6f-8783-fcb5f0ab849d\") " pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" Jul 15 04:33:38.814489 kubelet[2617]: I0715 04:33:38.813859 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2aa392-68c8-4783-b72c-0de907a5114b-config-volume\") pod \"coredns-7c65d6cfc9-qldj9\" (UID: \"4d2aa392-68c8-4783-b72c-0de907a5114b\") " pod="kube-system/coredns-7c65d6cfc9-qldj9" Jul 15 04:33:38.814489 kubelet[2617]: I0715 04:33:38.813888 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ed815e8-8dff-4f47-9985-135c045643f0-calico-apiserver-certs\") pod \"calico-apiserver-b66bcf85-cbhsv\" (UID: \"5ed815e8-8dff-4f47-9985-135c045643f0\") " pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" Jul 15 04:33:38.814489 kubelet[2617]: I0715 04:33:38.813904 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgsx\" (UniqueName: \"kubernetes.io/projected/f84ceeeb-aacc-4938-944f-2df46d19a16c-kube-api-access-8bgsx\") pod \"coredns-7c65d6cfc9-zcmdf\" (UID: \"f84ceeeb-aacc-4938-944f-2df46d19a16c\") " pod="kube-system/coredns-7c65d6cfc9-zcmdf" Jul 15 04:33:38.814489 kubelet[2617]: I0715 04:33:38.813919 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/de06bca6-0234-4d15-bcad-837748ad6701-whisker-ca-bundle\") pod \"whisker-6867b7bbf5-w2zm7\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " pod="calico-system/whisker-6867b7bbf5-w2zm7" Jul 15 04:33:38.814595 kubelet[2617]: I0715 04:33:38.813936 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c819d8-fed4-400c-915e-7b92b0eda130-config\") pod \"goldmane-58fd7646b9-nfnml\" (UID: \"23c819d8-fed4-400c-915e-7b92b0eda130\") " pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:38.814595 kubelet[2617]: I0715 04:33:38.813963 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de06bca6-0234-4d15-bcad-837748ad6701-whisker-backend-key-pair\") pod \"whisker-6867b7bbf5-w2zm7\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " pod="calico-system/whisker-6867b7bbf5-w2zm7" Jul 15 04:33:38.814595 kubelet[2617]: I0715 04:33:38.813982 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5phm\" (UniqueName: \"kubernetes.io/projected/ad1d9544-62ac-4e6f-8783-fcb5f0ab849d-kube-api-access-g5phm\") pod \"calico-apiserver-b66bcf85-n8tlt\" (UID: \"ad1d9544-62ac-4e6f-8783-fcb5f0ab849d\") " pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" Jul 15 04:33:38.814595 kubelet[2617]: I0715 04:33:38.813996 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkqk\" (UniqueName: \"kubernetes.io/projected/4d2aa392-68c8-4783-b72c-0de907a5114b-kube-api-access-ltkqk\") pod \"coredns-7c65d6cfc9-qldj9\" (UID: \"4d2aa392-68c8-4783-b72c-0de907a5114b\") " pod="kube-system/coredns-7c65d6cfc9-qldj9" Jul 15 04:33:38.814595 kubelet[2617]: I0715 04:33:38.814012 2617 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqz6\" (UniqueName: \"kubernetes.io/projected/5ed815e8-8dff-4f47-9985-135c045643f0-kube-api-access-qgqz6\") pod \"calico-apiserver-b66bcf85-cbhsv\" (UID: \"5ed815e8-8dff-4f47-9985-135c045643f0\") " pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" Jul 15 04:33:38.814701 kubelet[2617]: I0715 04:33:38.814027 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/23c819d8-fed4-400c-915e-7b92b0eda130-goldmane-key-pair\") pod \"goldmane-58fd7646b9-nfnml\" (UID: \"23c819d8-fed4-400c-915e-7b92b0eda130\") " pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:38.814701 kubelet[2617]: I0715 04:33:38.814054 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znw67\" (UniqueName: \"kubernetes.io/projected/23c819d8-fed4-400c-915e-7b92b0eda130-kube-api-access-znw67\") pod \"goldmane-58fd7646b9-nfnml\" (UID: \"23c819d8-fed4-400c-915e-7b92b0eda130\") " pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:39.018577 kubelet[2617]: E0715 04:33:39.018429 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:39.019237 containerd[1499]: time="2025-07-15T04:33:39.019194198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qldj9,Uid:4d2aa392-68c8-4783-b72c-0de907a5114b,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:39.026636 containerd[1499]: time="2025-07-15T04:33:39.026332376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-cbhsv,Uid:5ed815e8-8dff-4f47-9985-135c045643f0,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:33:39.041446 kubelet[2617]: E0715 04:33:39.041403 2617 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:39.050684 containerd[1499]: time="2025-07-15T04:33:39.042226070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4b45f849-dl72s,Uid:3b87aef8-f00a-4f17-be31-0b279e5b7f35,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:39.061258 containerd[1499]: time="2025-07-15T04:33:39.052874661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nfnml,Uid:23c819d8-fed4-400c-915e-7b92b0eda130,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:39.061258 containerd[1499]: time="2025-07-15T04:33:39.053474729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6867b7bbf5-w2zm7,Uid:de06bca6-0234-4d15-bcad-837748ad6701,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:39.062535 containerd[1499]: time="2025-07-15T04:33:39.061814442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zcmdf,Uid:f84ceeeb-aacc-4938-944f-2df46d19a16c,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:39.063052 containerd[1499]: time="2025-07-15T04:33:39.062998259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-n8tlt,Uid:ad1d9544-62ac-4e6f-8783-fcb5f0ab849d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:33:39.579472 containerd[1499]: time="2025-07-15T04:33:39.579345158Z" level=error msg="Failed to destroy network for sandbox \"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.582030 containerd[1499]: time="2025-07-15T04:33:39.581901655Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b66bcf85-cbhsv,Uid:5ed815e8-8dff-4f47-9985-135c045643f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.585208 containerd[1499]: time="2025-07-15T04:33:39.585171330Z" level=error msg="Failed to destroy network for sandbox \"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.585473 kubelet[2617]: E0715 04:33:39.585414 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.585990 kubelet[2617]: E0715 04:33:39.585506 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" Jul 15 04:33:39.585990 kubelet[2617]: E0715 04:33:39.585527 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" Jul 15 04:33:39.585990 kubelet[2617]: E0715 04:33:39.585581 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b66bcf85-cbhsv_calico-apiserver(5ed815e8-8dff-4f47-9985-135c045643f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b66bcf85-cbhsv_calico-apiserver(5ed815e8-8dff-4f47-9985-135c045643f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39cd6557bd715bd225abafff77bcb090d88d223b45a5e00a096a22c9dd82b6e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" podUID="5ed815e8-8dff-4f47-9985-135c045643f0" Jul 15 04:33:39.586362 containerd[1499]: time="2025-07-15T04:33:39.586293352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zcmdf,Uid:f84ceeeb-aacc-4938-944f-2df46d19a16c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.586518 kubelet[2617]: E0715 04:33:39.586487 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.586585 kubelet[2617]: E0715 04:33:39.586528 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zcmdf" Jul 15 04:33:39.586585 kubelet[2617]: E0715 04:33:39.586553 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zcmdf" Jul 15 04:33:39.586655 kubelet[2617]: E0715 04:33:39.586585 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zcmdf_kube-system(f84ceeeb-aacc-4938-944f-2df46d19a16c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zcmdf_kube-system(f84ceeeb-aacc-4938-944f-2df46d19a16c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef0d4ec8ea8ebd01b2643cb267ec60e4ba54f3b869d4e1e6dc1b3f5f6a568869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zcmdf" podUID="f84ceeeb-aacc-4938-944f-2df46d19a16c" Jul 15 04:33:39.594636 containerd[1499]: 
time="2025-07-15T04:33:39.594595428Z" level=error msg="Failed to destroy network for sandbox \"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.597268 containerd[1499]: time="2025-07-15T04:33:39.597153765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nfnml,Uid:23c819d8-fed4-400c-915e-7b92b0eda130,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.597411 kubelet[2617]: E0715 04:33:39.597371 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.597474 kubelet[2617]: E0715 04:33:39.597420 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:39.597474 kubelet[2617]: E0715 04:33:39.597437 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-nfnml" Jul 15 04:33:39.597591 kubelet[2617]: E0715 04:33:39.597562 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-nfnml_calico-system(23c819d8-fed4-400c-915e-7b92b0eda130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-nfnml_calico-system(23c819d8-fed4-400c-915e-7b92b0eda130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c0c34bfbdf217704815fc5544ac74d392f4913ac18af93aa391e864075c39c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-nfnml" podUID="23c819d8-fed4-400c-915e-7b92b0eda130" Jul 15 04:33:39.597736 containerd[1499]: time="2025-07-15T04:33:39.597667160Z" level=error msg="Failed to destroy network for sandbox \"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.598124 containerd[1499]: time="2025-07-15T04:33:39.598097163Z" level=error msg="Failed to destroy network for sandbox \"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.599775 containerd[1499]: 
time="2025-07-15T04:33:39.598580281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qldj9,Uid:4d2aa392-68c8-4783-b72c-0de907a5114b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.599775 containerd[1499]: time="2025-07-15T04:33:39.599239263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6867b7bbf5-w2zm7,Uid:de06bca6-0234-4d15-bcad-837748ad6701,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.600121 kubelet[2617]: E0715 04:33:39.599670 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.600121 kubelet[2617]: E0715 04:33:39.599717 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qldj9" Jul 15 04:33:39.600121 kubelet[2617]: E0715 04:33:39.599734 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qldj9" Jul 15 04:33:39.600228 kubelet[2617]: E0715 04:33:39.599774 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qldj9_kube-system(4d2aa392-68c8-4783-b72c-0de907a5114b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qldj9_kube-system(4d2aa392-68c8-4783-b72c-0de907a5114b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfc2aa6e77128a0caf76bc6f88eead97f98ea757fa4b91f6da7961b9940bb372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qldj9" podUID="4d2aa392-68c8-4783-b72c-0de907a5114b" Jul 15 04:33:39.600228 kubelet[2617]: E0715 04:33:39.599648 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.600228 kubelet[2617]: E0715 04:33:39.599960 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6867b7bbf5-w2zm7" Jul 15 04:33:39.600333 kubelet[2617]: E0715 04:33:39.599975 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6867b7bbf5-w2zm7" Jul 15 04:33:39.600333 kubelet[2617]: E0715 04:33:39.600113 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6867b7bbf5-w2zm7_calico-system(de06bca6-0234-4d15-bcad-837748ad6701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6867b7bbf5-w2zm7_calico-system(de06bca6-0234-4d15-bcad-837748ad6701)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f58b6c38b3bdcb19e5920d206fe82c8b5df92db8c9b733fead3effd817b075e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6867b7bbf5-w2zm7" podUID="de06bca6-0234-4d15-bcad-837748ad6701" Jul 15 04:33:39.600494 containerd[1499]: time="2025-07-15T04:33:39.600455437Z" level=error msg="Failed to destroy network for sandbox \"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 
04:33:39.600865 containerd[1499]: time="2025-07-15T04:33:39.600620863Z" level=error msg="Failed to destroy network for sandbox \"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.602977 containerd[1499]: time="2025-07-15T04:33:39.602857828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4b45f849-dl72s,Uid:3b87aef8-f00a-4f17-be31-0b279e5b7f35,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.603256 kubelet[2617]: E0715 04:33:39.603221 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.603305 kubelet[2617]: E0715 04:33:39.603281 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" Jul 15 04:33:39.603332 kubelet[2617]: E0715 04:33:39.603305 2617 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" Jul 15 04:33:39.603385 kubelet[2617]: E0715 04:33:39.603358 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b4b45f849-dl72s_calico-system(3b87aef8-f00a-4f17-be31-0b279e5b7f35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b4b45f849-dl72s_calico-system(3b87aef8-f00a-4f17-be31-0b279e5b7f35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77f97ea27228f17d410dd612eba2a3269908f566abdc907df15e9fc9e116f98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" podUID="3b87aef8-f00a-4f17-be31-0b279e5b7f35" Jul 15 04:33:39.604006 containerd[1499]: time="2025-07-15T04:33:39.603973450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-n8tlt,Uid:ad1d9544-62ac-4e6f-8783-fcb5f0ab849d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.604349 kubelet[2617]: E0715 04:33:39.604266 2617 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:39.604403 kubelet[2617]: E0715 04:33:39.604369 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" Jul 15 04:33:39.604430 kubelet[2617]: E0715 04:33:39.604388 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" Jul 15 04:33:39.604474 kubelet[2617]: E0715 04:33:39.604441 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b66bcf85-n8tlt_calico-apiserver(ad1d9544-62ac-4e6f-8783-fcb5f0ab849d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b66bcf85-n8tlt_calico-apiserver(ad1d9544-62ac-4e6f-8783-fcb5f0ab849d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d35f72221a8166d14422008fa6b64124f2582b49c7ed382ac8a3debe4a4179dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" podUID="ad1d9544-62ac-4e6f-8783-fcb5f0ab849d" Jul 15 04:33:39.988960 systemd[1]: run-netns-cni\x2db5cd8c73\x2d6b88\x2de9e0\x2d5c2b\x2d934ccc285322.mount: Deactivated successfully. Jul 15 04:33:39.989057 systemd[1]: run-netns-cni\x2def742131\x2de7f1\x2d6a59\x2d7fab\x2d90b8a471a9c0.mount: Deactivated successfully. Jul 15 04:33:39.989101 systemd[1]: run-netns-cni\x2d83004bc2\x2dab73\x2d21a8\x2d9ae4\x2d29e380109dd8.mount: Deactivated successfully. Jul 15 04:33:39.989146 systemd[1]: run-netns-cni\x2d0cd72f3f\x2d0a20\x2d0bf0\x2dc7b2\x2d548bf604dfeb.mount: Deactivated successfully. Jul 15 04:33:40.546326 systemd[1]: Created slice kubepods-besteffort-pod8fa57634_f69a_42ff_ae8f_1f9db265d0bc.slice - libcontainer container kubepods-besteffort-pod8fa57634_f69a_42ff_ae8f_1f9db265d0bc.slice. Jul 15 04:33:40.548566 containerd[1499]: time="2025-07-15T04:33:40.548521041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-959pd,Uid:8fa57634-f69a-42ff-ae8f-1f9db265d0bc,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:40.633146 containerd[1499]: time="2025-07-15T04:33:40.632976937Z" level=error msg="Failed to destroy network for sandbox \"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:40.634981 systemd[1]: run-netns-cni\x2d2e8fc902\x2da29f\x2d06b3\x2d57f3\x2d804b66559a13.mount: Deactivated successfully. 
Jul 15 04:33:40.646715 containerd[1499]: time="2025-07-15T04:33:40.646626822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-959pd,Uid:8fa57634-f69a-42ff-ae8f-1f9db265d0bc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:40.646864 kubelet[2617]: E0715 04:33:40.646836 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:33:40.647131 kubelet[2617]: E0715 04:33:40.646886 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-959pd" Jul 15 04:33:40.647131 kubelet[2617]: E0715 04:33:40.646907 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-959pd" 
Jul 15 04:33:40.647131 kubelet[2617]: E0715 04:33:40.646948 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-959pd_calico-system(8fa57634-f69a-42ff-ae8f-1f9db265d0bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-959pd_calico-system(8fa57634-f69a-42ff-ae8f-1f9db265d0bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30c3f3751eebc197d0d998bb842c7865e2273fa65b6f7e43f39f7fc0706e2436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-959pd" podUID="8fa57634-f69a-42ff-ae8f-1f9db265d0bc" Jul 15 04:33:42.829918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702555728.mount: Deactivated successfully. Jul 15 04:33:43.071673 containerd[1499]: time="2025-07-15T04:33:43.066830191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:33:43.071673 containerd[1499]: time="2025-07-15T04:33:43.070168686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.412442259s" Jul 15 04:33:43.072018 containerd[1499]: time="2025-07-15T04:33:43.071690744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:33:43.079837 containerd[1499]: time="2025-07-15T04:33:43.078965774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 04:33:43.079837 containerd[1499]: time="2025-07-15T04:33:43.079603331Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:43.080227 containerd[1499]: time="2025-07-15T04:33:43.080188292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:43.080771 containerd[1499]: time="2025-07-15T04:33:43.080721336Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:33:43.095929 containerd[1499]: time="2025-07-15T04:33:43.095845357Z" level=info msg="Container 8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:43.122589 containerd[1499]: time="2025-07-15T04:33:43.122449125Z" level=info msg="CreateContainer within sandbox \"37d41390df3cd315873270af0df04ef5085f548f581d2c0f7cb21229f52bc82c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d\"" Jul 15 04:33:43.123013 containerd[1499]: time="2025-07-15T04:33:43.122973450Z" level=info msg="StartContainer for \"8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d\"" Jul 15 04:33:43.124580 containerd[1499]: time="2025-07-15T04:33:43.124551504Z" level=info msg="connecting to shim 8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d" address="unix:///run/containerd/s/4c421bc476dc73a749ae01ee3013a5a7a665ff1759f172c863d2951a807b0dc7" protocol=ttrpc version=3 Jul 15 04:33:43.144635 systemd[1]: Started cri-containerd-8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d.scope - libcontainer 
container 8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d. Jul 15 04:33:43.207852 containerd[1499]: time="2025-07-15T04:33:43.207792177Z" level=info msg="StartContainer for \"8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d\" returns successfully" Jul 15 04:33:43.402532 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:33:43.402994 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 15 04:33:43.562916 kubelet[2617]: I0715 04:33:43.562775 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de06bca6-0234-4d15-bcad-837748ad6701-whisker-backend-key-pair\") pod \"de06bca6-0234-4d15-bcad-837748ad6701\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " Jul 15 04:33:43.563826 kubelet[2617]: I0715 04:33:43.563572 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06bca6-0234-4d15-bcad-837748ad6701-whisker-ca-bundle\") pod \"de06bca6-0234-4d15-bcad-837748ad6701\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " Jul 15 04:33:43.563826 kubelet[2617]: I0715 04:33:43.563650 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29gkq\" (UniqueName: \"kubernetes.io/projected/de06bca6-0234-4d15-bcad-837748ad6701-kube-api-access-29gkq\") pod \"de06bca6-0234-4d15-bcad-837748ad6701\" (UID: \"de06bca6-0234-4d15-bcad-837748ad6701\") " Jul 15 04:33:43.569256 kubelet[2617]: I0715 04:33:43.569213 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06bca6-0234-4d15-bcad-837748ad6701-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "de06bca6-0234-4d15-bcad-837748ad6701" (UID: "de06bca6-0234-4d15-bcad-837748ad6701"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 04:33:43.571302 kubelet[2617]: I0715 04:33:43.571220 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06bca6-0234-4d15-bcad-837748ad6701-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "de06bca6-0234-4d15-bcad-837748ad6701" (UID: "de06bca6-0234-4d15-bcad-837748ad6701"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:33:43.571302 kubelet[2617]: I0715 04:33:43.571220 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de06bca6-0234-4d15-bcad-837748ad6701-kube-api-access-29gkq" (OuterVolumeSpecName: "kube-api-access-29gkq") pod "de06bca6-0234-4d15-bcad-837748ad6701" (UID: "de06bca6-0234-4d15-bcad-837748ad6701"). InnerVolumeSpecName "kube-api-access-29gkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:33:43.664889 kubelet[2617]: I0715 04:33:43.664762 2617 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29gkq\" (UniqueName: \"kubernetes.io/projected/de06bca6-0234-4d15-bcad-837748ad6701-kube-api-access-29gkq\") on node \"localhost\" DevicePath \"\"" Jul 15 04:33:43.664889 kubelet[2617]: I0715 04:33:43.664796 2617 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de06bca6-0234-4d15-bcad-837748ad6701-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 04:33:43.664889 kubelet[2617]: I0715 04:33:43.664807 2617 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06bca6-0234-4d15-bcad-837748ad6701-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 04:33:43.696501 systemd[1]: Removed slice kubepods-besteffort-podde06bca6_0234_4d15_bcad_837748ad6701.slice - libcontainer container 
kubepods-besteffort-podde06bca6_0234_4d15_bcad_837748ad6701.slice. Jul 15 04:33:43.718972 kubelet[2617]: I0715 04:33:43.718903 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kj8lm" podStartSLOduration=1.915498544 podStartE2EDuration="13.718885994s" podCreationTimestamp="2025-07-15 04:33:30 +0000 UTC" firstStartedPulling="2025-07-15 04:33:31.268852737 +0000 UTC m=+20.811248548" lastFinishedPulling="2025-07-15 04:33:43.072240227 +0000 UTC m=+32.614635998" observedRunningTime="2025-07-15 04:33:43.717768429 +0000 UTC m=+33.260164360" watchObservedRunningTime="2025-07-15 04:33:43.718885994 +0000 UTC m=+33.261281805" Jul 15 04:33:43.765685 kubelet[2617]: I0715 04:33:43.765637 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab57830-3bc8-4d13-b158-391109680421-whisker-ca-bundle\") pod \"whisker-658469c645-2dn2l\" (UID: \"2ab57830-3bc8-4d13-b158-391109680421\") " pod="calico-system/whisker-658469c645-2dn2l" Jul 15 04:33:43.765797 kubelet[2617]: I0715 04:33:43.765690 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xmg\" (UniqueName: \"kubernetes.io/projected/2ab57830-3bc8-4d13-b158-391109680421-kube-api-access-c9xmg\") pod \"whisker-658469c645-2dn2l\" (UID: \"2ab57830-3bc8-4d13-b158-391109680421\") " pod="calico-system/whisker-658469c645-2dn2l" Jul 15 04:33:43.765797 kubelet[2617]: I0715 04:33:43.765723 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ab57830-3bc8-4d13-b158-391109680421-whisker-backend-key-pair\") pod \"whisker-658469c645-2dn2l\" (UID: \"2ab57830-3bc8-4d13-b158-391109680421\") " pod="calico-system/whisker-658469c645-2dn2l" Jul 15 04:33:43.769525 systemd[1]: Created slice 
kubepods-besteffort-pod2ab57830_3bc8_4d13_b158_391109680421.slice - libcontainer container kubepods-besteffort-pod2ab57830_3bc8_4d13_b158_391109680421.slice. Jul 15 04:33:43.830441 systemd[1]: var-lib-kubelet-pods-de06bca6\x2d0234\x2d4d15\x2dbcad\x2d837748ad6701-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29gkq.mount: Deactivated successfully. Jul 15 04:33:43.830755 systemd[1]: var-lib-kubelet-pods-de06bca6\x2d0234\x2d4d15\x2dbcad\x2d837748ad6701-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 04:33:44.074954 containerd[1499]: time="2025-07-15T04:33:44.074903950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658469c645-2dn2l,Uid:2ab57830-3bc8-4d13-b158-391109680421,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:44.343974 systemd-networkd[1412]: calic736b330940: Link UP Jul 15 04:33:44.344476 systemd-networkd[1412]: calic736b330940: Gained carrier Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.145 [INFO][3703] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.190 [INFO][3703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--658469c645--2dn2l-eth0 whisker-658469c645- calico-system 2ab57830-3bc8-4d13-b158-391109680421 875 0 2025-07-15 04:33:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:658469c645 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-658469c645-2dn2l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic736b330940 [] [] }} ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-" Jul 15 
04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.190 [INFO][3703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.294 [INFO][3716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" HandleID="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Workload="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.294 [INFO][3716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" HandleID="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Workload="localhost-k8s-whisker--658469c645--2dn2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-658469c645-2dn2l", "timestamp":"2025-07-15 04:33:44.294245501 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.294 [INFO][3716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.294 [INFO][3716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.294 [INFO][3716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.310 [INFO][3716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.315 [INFO][3716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.320 [INFO][3716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.322 [INFO][3716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.324 [INFO][3716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.324 [INFO][3716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.326 [INFO][3716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.329 [INFO][3716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.334 [INFO][3716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.334 [INFO][3716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" host="localhost" Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.334 [INFO][3716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:44.362688 containerd[1499]: 2025-07-15 04:33:44.334 [INFO][3716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" HandleID="k8s-pod-network.9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Workload="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.337 [INFO][3703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658469c645--2dn2l-eth0", GenerateName:"whisker-658469c645-", Namespace:"calico-system", SelfLink:"", UID:"2ab57830-3bc8-4d13-b158-391109680421", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658469c645", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-658469c645-2dn2l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic736b330940", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.337 [INFO][3703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.337 [INFO][3703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic736b330940 ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.344 [INFO][3703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.346 [INFO][3703] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" 
WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658469c645--2dn2l-eth0", GenerateName:"whisker-658469c645-", Namespace:"calico-system", SelfLink:"", UID:"2ab57830-3bc8-4d13-b158-391109680421", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658469c645", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb", Pod:"whisker-658469c645-2dn2l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic736b330940", MAC:"5a:c1:94:cb:58:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:44.363209 containerd[1499]: 2025-07-15 04:33:44.360 [INFO][3703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" Namespace="calico-system" Pod="whisker-658469c645-2dn2l" WorkloadEndpoint="localhost-k8s-whisker--658469c645--2dn2l-eth0" Jul 15 04:33:44.420919 containerd[1499]: time="2025-07-15T04:33:44.420875585Z" level=info msg="connecting to shim 
9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb" address="unix:///run/containerd/s/81c99487a44c9033657fc18cf46bba0dbb049982527b06a960550e11f779f8ff" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:44.454638 systemd[1]: Started cri-containerd-9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb.scope - libcontainer container 9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb. Jul 15 04:33:44.465202 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:44.490147 containerd[1499]: time="2025-07-15T04:33:44.490108733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658469c645-2dn2l,Uid:2ab57830-3bc8-4d13-b158-391109680421,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb\"" Jul 15 04:33:44.491752 containerd[1499]: time="2025-07-15T04:33:44.491724551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:33:44.542783 kubelet[2617]: I0715 04:33:44.542744 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de06bca6-0234-4d15-bcad-837748ad6701" path="/var/lib/kubelet/pods/de06bca6-0234-4d15-bcad-837748ad6701/volumes" Jul 15 04:33:44.695418 kubelet[2617]: I0715 04:33:44.695301 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:45.619546 containerd[1499]: time="2025-07-15T04:33:45.619380897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:45.620764 containerd[1499]: time="2025-07-15T04:33:45.620665347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:33:45.621485 containerd[1499]: time="2025-07-15T04:33:45.621441249Z" level=info msg="ImageCreate event 
name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:45.623508 containerd[1499]: time="2025-07-15T04:33:45.623386325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:45.623930 containerd[1499]: time="2025-07-15T04:33:45.623896713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.131929497s" Jul 15 04:33:45.623930 containerd[1499]: time="2025-07-15T04:33:45.623926712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:33:45.632362 containerd[1499]: time="2025-07-15T04:33:45.632310039Z" level=info msg="CreateContainer within sandbox \"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:33:45.638476 containerd[1499]: time="2025-07-15T04:33:45.637742394Z" level=info msg="Container d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:45.642736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318077712.mount: Deactivated successfully. 
Jul 15 04:33:45.646237 containerd[1499]: time="2025-07-15T04:33:45.646187840Z" level=info msg="CreateContainer within sandbox \"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac\"" Jul 15 04:33:45.646760 containerd[1499]: time="2025-07-15T04:33:45.646729707Z" level=info msg="StartContainer for \"d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac\"" Jul 15 04:33:45.648098 containerd[1499]: time="2025-07-15T04:33:45.648057157Z" level=info msg="connecting to shim d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac" address="unix:///run/containerd/s/81c99487a44c9033657fc18cf46bba0dbb049982527b06a960550e11f779f8ff" protocol=ttrpc version=3 Jul 15 04:33:45.667630 systemd[1]: Started cri-containerd-d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac.scope - libcontainer container d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac. Jul 15 04:33:45.701805 containerd[1499]: time="2025-07-15T04:33:45.701767601Z" level=info msg="StartContainer for \"d516e0b78ed952355a5c4ef826c6cdba9e56b7451ccaae2eb5cc5d436be755ac\" returns successfully" Jul 15 04:33:45.703266 containerd[1499]: time="2025-07-15T04:33:45.703238247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:33:46.168628 systemd-networkd[1412]: calic736b330940: Gained IPv6LL Jul 15 04:33:47.371945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915200390.mount: Deactivated successfully. 
Jul 15 04:33:47.403342 containerd[1499]: time="2025-07-15T04:33:47.403292836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:47.403755 containerd[1499]: time="2025-07-15T04:33:47.403714307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:33:47.407144 containerd[1499]: time="2025-07-15T04:33:47.407105393Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:47.409330 containerd[1499]: time="2025-07-15T04:33:47.409294105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:47.410149 containerd[1499]: time="2025-07-15T04:33:47.410119607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.706847601s" Jul 15 04:33:47.410196 containerd[1499]: time="2025-07-15T04:33:47.410148687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:33:47.412481 containerd[1499]: time="2025-07-15T04:33:47.411995007Z" level=info msg="CreateContainer within sandbox \"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:33:47.417165 
containerd[1499]: time="2025-07-15T04:33:47.417139255Z" level=info msg="Container eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:47.424047 containerd[1499]: time="2025-07-15T04:33:47.424005985Z" level=info msg="CreateContainer within sandbox \"9bf053d31d2aa83cd63b5401c15c8f24814a9e0a73a55cbae1e9d28d14b079fb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a\"" Jul 15 04:33:47.425112 containerd[1499]: time="2025-07-15T04:33:47.424456336Z" level=info msg="StartContainer for \"eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a\"" Jul 15 04:33:47.425422 containerd[1499]: time="2025-07-15T04:33:47.425398675Z" level=info msg="connecting to shim eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a" address="unix:///run/containerd/s/81c99487a44c9033657fc18cf46bba0dbb049982527b06a960550e11f779f8ff" protocol=ttrpc version=3 Jul 15 04:33:47.443635 systemd[1]: Started cri-containerd-eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a.scope - libcontainer container eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a. 
Jul 15 04:33:47.478149 containerd[1499]: time="2025-07-15T04:33:47.478111288Z" level=info msg="StartContainer for \"eaa6610d25a5731d5cfed717764644a25a3ffd24613eb6b5d7f00fd41d03b73a\" returns successfully" Jul 15 04:33:47.717475 kubelet[2617]: I0715 04:33:47.717285 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-658469c645-2dn2l" podStartSLOduration=1.798046458 podStartE2EDuration="4.717264847s" podCreationTimestamp="2025-07-15 04:33:43 +0000 UTC" firstStartedPulling="2025-07-15 04:33:44.491504845 +0000 UTC m=+34.033900656" lastFinishedPulling="2025-07-15 04:33:47.410723234 +0000 UTC m=+36.953119045" observedRunningTime="2025-07-15 04:33:47.716862776 +0000 UTC m=+37.259258587" watchObservedRunningTime="2025-07-15 04:33:47.717264847 +0000 UTC m=+37.259660658" Jul 15 04:33:50.534933 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:54906.service - OpenSSH per-connection server daemon (10.0.0.1:54906). Jul 15 04:33:50.541379 kubelet[2617]: E0715 04:33:50.541101 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:50.542137 containerd[1499]: time="2025-07-15T04:33:50.541842734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nfnml,Uid:23c819d8-fed4-400c-915e-7b92b0eda130,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:50.542580 containerd[1499]: time="2025-07-15T04:33:50.541967012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qldj9,Uid:4d2aa392-68c8-4783-b72c-0de907a5114b,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:50.600502 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 54906 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:33:50.601404 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:33:50.608942 systemd-logind[1478]: New 
session 8 of user core. Jul 15 04:33:50.616699 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:33:50.673898 systemd-networkd[1412]: calif40e368719c: Link UP Jul 15 04:33:50.676429 systemd-networkd[1412]: calif40e368719c: Gained carrier Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.571 [INFO][4087] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.593 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--nfnml-eth0 goldmane-58fd7646b9- calico-system 23c819d8-fed4-400c-915e-7b92b0eda130 815 0 2025-07-15 04:33:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-nfnml eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif40e368719c [] [] }} ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.593 [INFO][4087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.623 [INFO][4118] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" HandleID="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" 
Workload="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.624 [INFO][4118] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" HandleID="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Workload="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-nfnml", "timestamp":"2025-07-15 04:33:50.623827294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.624 [INFO][4118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.624 [INFO][4118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.624 [INFO][4118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.634 [INFO][4118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.641 [INFO][4118] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.645 [INFO][4118] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.647 [INFO][4118] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.650 [INFO][4118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.651 [INFO][4118] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.655 [INFO][4118] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93 Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.658 [INFO][4118] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.667 [INFO][4118] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.667 [INFO][4118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" host="localhost" Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.668 [INFO][4118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:50.691884 containerd[1499]: 2025-07-15 04:33:50.668 [INFO][4118] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" HandleID="k8s-pod-network.c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Workload="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.671 [INFO][4087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nfnml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"23c819d8-fed4-400c-915e-7b92b0eda130", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-nfnml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif40e368719c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.671 [INFO][4087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.671 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif40e368719c ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.675 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.676 [INFO][4087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--nfnml-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"23c819d8-fed4-400c-915e-7b92b0eda130", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93", Pod:"goldmane-58fd7646b9-nfnml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif40e368719c", MAC:"62:1e:65:63:af:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:50.692402 containerd[1499]: 2025-07-15 04:33:50.689 [INFO][4087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" Namespace="calico-system" Pod="goldmane-58fd7646b9-nfnml" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--nfnml-eth0" Jul 15 04:33:50.768794 containerd[1499]: time="2025-07-15T04:33:50.768642837Z" level=info msg="connecting to shim 
c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93" address="unix:///run/containerd/s/6f2959097a18eacc38327aafc0d980d6787034447734cc2a5be5089215210d19" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:50.803061 systemd[1]: Started cri-containerd-c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93.scope - libcontainer container c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93. Jul 15 04:33:50.818335 systemd-networkd[1412]: califc4c61cff19: Link UP Jul 15 04:33:50.819075 systemd-networkd[1412]: califc4c61cff19: Gained carrier Jul 15 04:33:50.826402 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:50.828057 sshd[4129]: Connection closed by 10.0.0.1 port 54906 Jul 15 04:33:50.828325 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Jul 15 04:33:50.834768 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:54906.service: Deactivated successfully. Jul 15 04:33:50.836375 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.578 [INFO][4099] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.593 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0 coredns-7c65d6cfc9- kube-system 4d2aa392-68c8-4783-b72c-0de907a5114b 805 0 2025-07-15 04:33:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qldj9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc4c61cff19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.593 [INFO][4099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.627 [INFO][4119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" HandleID="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Workload="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.627 [INFO][4119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" 
HandleID="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Workload="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qldj9", "timestamp":"2025-07-15 04:33:50.627197346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.627 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.667 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.667 [INFO][4119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.758 [INFO][4119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.764 [INFO][4119] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.771 [INFO][4119] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.773 [INFO][4119] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.779 [INFO][4119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.779 
[INFO][4119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.785 [INFO][4119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.799 [INFO][4119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.807 [INFO][4119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.807 [INFO][4119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" host="localhost" Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.807 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:33:50.836765 containerd[1499]: 2025-07-15 04:33:50.807 [INFO][4119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" HandleID="k8s-pod-network.79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Workload="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.838898 containerd[1499]: 2025-07-15 04:33:50.810 [INFO][4099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4d2aa392-68c8-4783-b72c-0de907a5114b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qldj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc4c61cff19", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:50.838898 containerd[1499]: 2025-07-15 04:33:50.810 [INFO][4099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.838898 containerd[1499]: 2025-07-15 04:33:50.810 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc4c61cff19 ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.838898 containerd[1499]: 2025-07-15 04:33:50.819 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.838898 containerd[1499]: 2025-07-15 04:33:50.820 [INFO][4099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4d2aa392-68c8-4783-b72c-0de907a5114b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee", Pod:"coredns-7c65d6cfc9-qldj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc4c61cff19", MAC:"ba:eb:0d:49:d5:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:50.838512 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. 
Jul 15 04:33:50.839154 containerd[1499]: 2025-07-15 04:33:50.833 [INFO][4099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qldj9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qldj9-eth0" Jul 15 04:33:50.840296 systemd-logind[1478]: Removed session 8. Jul 15 04:33:50.878286 containerd[1499]: time="2025-07-15T04:33:50.878221604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-nfnml,Uid:23c819d8-fed4-400c-915e-7b92b0eda130,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93\"" Jul 15 04:33:50.884157 containerd[1499]: time="2025-07-15T04:33:50.884092727Z" level=info msg="connecting to shim 79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee" address="unix:///run/containerd/s/85741cb602991581d51cb10da908e755897c8b4c252af85d4d2eff23720cf76b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:50.886743 containerd[1499]: time="2025-07-15T04:33:50.886456040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 04:33:50.910642 systemd[1]: Started cri-containerd-79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee.scope - libcontainer container 79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee. 
Jul 15 04:33:50.921181 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:50.942372 containerd[1499]: time="2025-07-15T04:33:50.942324362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qldj9,Uid:4d2aa392-68c8-4783-b72c-0de907a5114b,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee\"" Jul 15 04:33:50.943134 kubelet[2617]: E0715 04:33:50.943113 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:50.947620 containerd[1499]: time="2025-07-15T04:33:50.947581497Z" level=info msg="CreateContainer within sandbox \"79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:33:50.955521 containerd[1499]: time="2025-07-15T04:33:50.955128426Z" level=info msg="Container 6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:50.959935 containerd[1499]: time="2025-07-15T04:33:50.959888970Z" level=info msg="CreateContainer within sandbox \"79b4bec6dbc4e723eb789b43c876247a48e495169b116638944914760547dbee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867\"" Jul 15 04:33:50.960564 containerd[1499]: time="2025-07-15T04:33:50.960529598Z" level=info msg="StartContainer for \"6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867\"" Jul 15 04:33:50.961291 containerd[1499]: time="2025-07-15T04:33:50.961243303Z" level=info msg="connecting to shim 6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867" address="unix:///run/containerd/s/85741cb602991581d51cb10da908e755897c8b4c252af85d4d2eff23720cf76b" protocol=ttrpc version=3 
Jul 15 04:33:50.982641 systemd[1]: Started cri-containerd-6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867.scope - libcontainer container 6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867. Jul 15 04:33:51.008531 containerd[1499]: time="2025-07-15T04:33:51.008499482Z" level=info msg="StartContainer for \"6d7a67df7f2b53787704f1e044431da197ee17472083e78deac557d8ed5a0867\" returns successfully" Jul 15 04:33:51.541789 kubelet[2617]: E0715 04:33:51.541612 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:51.542982 containerd[1499]: time="2025-07-15T04:33:51.542935561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zcmdf,Uid:f84ceeeb-aacc-4938-944f-2df46d19a16c,Namespace:kube-system,Attempt:0,}" Jul 15 04:33:51.544406 containerd[1499]: time="2025-07-15T04:33:51.544375333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-cbhsv,Uid:5ed815e8-8dff-4f47-9985-135c045643f0,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:33:51.645554 systemd-networkd[1412]: calie44c7dcd78b: Link UP Jul 15 04:33:51.646050 systemd-networkd[1412]: calie44c7dcd78b: Gained carrier Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.564 [INFO][4316] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.579 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0 coredns-7c65d6cfc9- kube-system f84ceeeb-aacc-4938-944f-2df46d19a16c 814 0 2025-07-15 04:33:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-7c65d6cfc9-zcmdf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie44c7dcd78b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.579 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.611 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" HandleID="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Workload="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.611 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" HandleID="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Workload="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zcmdf", "timestamp":"2025-07-15 04:33:51.611041155 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.611 [INFO][4346] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.611 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.611 [INFO][4346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.620 [INFO][4346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.624 [INFO][4346] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.628 [INFO][4346] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.629 [INFO][4346] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.631 [INFO][4346] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.631 [INFO][4346] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.633 [INFO][4346] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0 Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.636 [INFO][4346] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 
04:33:51.641 [INFO][4346] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.641 [INFO][4346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" host="localhost" Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.641 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:51.666981 containerd[1499]: 2025-07-15 04:33:51.641 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" HandleID="k8s-pod-network.cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Workload="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.667539 containerd[1499]: 2025-07-15 04:33:51.643 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f84ceeeb-aacc-4938-944f-2df46d19a16c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zcmdf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44c7dcd78b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:51.667539 containerd[1499]: 2025-07-15 04:33:51.644 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.667539 containerd[1499]: 2025-07-15 04:33:51.644 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie44c7dcd78b ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.667539 containerd[1499]: 2025-07-15 04:33:51.645 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.667539 containerd[1499]: 2025-07-15 04:33:51.646 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f84ceeeb-aacc-4938-944f-2df46d19a16c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0", Pod:"coredns-7c65d6cfc9-zcmdf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44c7dcd78b", MAC:"12:a0:44:73:00:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:51.667696 containerd[1499]: 2025-07-15 04:33:51.662 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zcmdf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zcmdf-eth0" Jul 15 04:33:51.721574 kubelet[2617]: E0715 04:33:51.720915 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:51.739510 containerd[1499]: time="2025-07-15T04:33:51.737535614Z" level=info msg="connecting to shim cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0" address="unix:///run/containerd/s/7899fb1b84fb519babf7039c577bbc85c4fa260a2a459dbd88aea90fdf6ac35a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:51.740288 kubelet[2617]: I0715 04:33:51.740184 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qldj9" podStartSLOduration=36.740165682 podStartE2EDuration="36.740165682s" podCreationTimestamp="2025-07-15 04:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:51.739523535 +0000 UTC m=+41.281919346" watchObservedRunningTime="2025-07-15 04:33:51.740165682 +0000 UTC m=+41.282561493" Jul 15 04:33:51.773626 systemd[1]: Started 
cri-containerd-cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0.scope - libcontainer container cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0. Jul 15 04:33:51.781923 systemd-networkd[1412]: calibae28156e6e: Link UP Jul 15 04:33:51.782089 systemd-networkd[1412]: calibae28156e6e: Gained carrier Jul 15 04:33:51.793816 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.573 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.589 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0 calico-apiserver-b66bcf85- calico-apiserver 5ed815e8-8dff-4f47-9985-135c045643f0 809 0 2025-07-15 04:33:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b66bcf85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b66bcf85-cbhsv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibae28156e6e [] [] }} ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.589 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.797706 containerd[1499]: 
2025-07-15 04:33:51.618 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" HandleID="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Workload="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.618 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" HandleID="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Workload="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052aab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b66bcf85-cbhsv", "timestamp":"2025-07-15 04:33:51.618735566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.618 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.642 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.642 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.723 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.733 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.745 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.747 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.758 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.758 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.761 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645 Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.768 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.776 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.776 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" host="localhost" Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.776 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:51.797706 containerd[1499]: 2025-07-15 04:33:51.776 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" HandleID="k8s-pod-network.6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Workload="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.779 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0", GenerateName:"calico-apiserver-b66bcf85-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ed815e8-8dff-4f47-9985-135c045643f0", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66bcf85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b66bcf85-cbhsv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibae28156e6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.779 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.779 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibae28156e6e ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.780 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.781 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0", GenerateName:"calico-apiserver-b66bcf85-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ed815e8-8dff-4f47-9985-135c045643f0", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66bcf85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645", Pod:"calico-apiserver-b66bcf85-cbhsv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibae28156e6e", MAC:"46:0f:ad:bc:16:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:51.798208 containerd[1499]: 2025-07-15 04:33:51.794 [INFO][4328] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-cbhsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--cbhsv-eth0" Jul 15 04:33:51.822194 containerd[1499]: time="2025-07-15T04:33:51.821676616Z" level=info msg="connecting to shim 6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645" address="unix:///run/containerd/s/f7d80123737df3f8bd3c2562af452bfae2d9d4fe1badfb3e8d8363a9fb6928b1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:51.826998 containerd[1499]: time="2025-07-15T04:33:51.826941994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zcmdf,Uid:f84ceeeb-aacc-4938-944f-2df46d19a16c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0\"" Jul 15 04:33:51.827709 kubelet[2617]: E0715 04:33:51.827680 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:51.830250 containerd[1499]: time="2025-07-15T04:33:51.829923016Z" level=info msg="CreateContainer within sandbox \"cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:33:51.837243 containerd[1499]: time="2025-07-15T04:33:51.837205994Z" level=info msg="Container 4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:51.843487 containerd[1499]: time="2025-07-15T04:33:51.843436833Z" level=info msg="CreateContainer within sandbox \"cae0a282cbda251a76dd50a6fd2ff3c7cef3c783a6e44baa9f2103d158865ca0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f\"" Jul 15 04:33:51.844072 containerd[1499]: time="2025-07-15T04:33:51.844039501Z" level=info 
msg="StartContainer for \"4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f\"" Jul 15 04:33:51.845702 containerd[1499]: time="2025-07-15T04:33:51.845676389Z" level=info msg="connecting to shim 4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f" address="unix:///run/containerd/s/7899fb1b84fb519babf7039c577bbc85c4fa260a2a459dbd88aea90fdf6ac35a" protocol=ttrpc version=3 Jul 15 04:33:51.850668 systemd[1]: Started cri-containerd-6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645.scope - libcontainer container 6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645. Jul 15 04:33:51.866625 systemd[1]: Started cri-containerd-4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f.scope - libcontainer container 4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f. Jul 15 04:33:51.870357 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:51.899700 containerd[1499]: time="2025-07-15T04:33:51.899450983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-cbhsv,Uid:5ed815e8-8dff-4f47-9985-135c045643f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645\"" Jul 15 04:33:51.900974 containerd[1499]: time="2025-07-15T04:33:51.900931314Z" level=info msg="StartContainer for \"4157aa2d4254fe43ea0f9b3353c9e6f9a8ef474b29e77c9e8c32dd393039f21f\" returns successfully" Jul 15 04:33:52.313561 systemd-networkd[1412]: calif40e368719c: Gained IPv6LL Jul 15 04:33:52.313823 systemd-networkd[1412]: califc4c61cff19: Gained IPv6LL Jul 15 04:33:52.576929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952868305.mount: Deactivated successfully. 
Jul 15 04:33:52.735340 kubelet[2617]: E0715 04:33:52.735291 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:52.736815 kubelet[2617]: E0715 04:33:52.736790 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:52.750309 kubelet[2617]: I0715 04:33:52.750254 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zcmdf" podStartSLOduration=37.750239221 podStartE2EDuration="37.750239221s" podCreationTimestamp="2025-07-15 04:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:33:52.748832407 +0000 UTC m=+42.291228218" watchObservedRunningTime="2025-07-15 04:33:52.750239221 +0000 UTC m=+42.292634992" Jul 15 04:33:52.946635 containerd[1499]: time="2025-07-15T04:33:52.946067713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 04:33:52.948499 containerd[1499]: time="2025-07-15T04:33:52.948359910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:52.949194 containerd[1499]: time="2025-07-15T04:33:52.949159055Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:52.949964 containerd[1499]: time="2025-07-15T04:33:52.949916080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.063204645s" Jul 15 04:33:52.949964 containerd[1499]: time="2025-07-15T04:33:52.949957359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 04:33:52.950683 containerd[1499]: time="2025-07-15T04:33:52.950646946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:52.951006 containerd[1499]: time="2025-07-15T04:33:52.950982220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:33:52.952842 containerd[1499]: time="2025-07-15T04:33:52.952789466Z" level=info msg="CreateContainer within sandbox \"c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 04:33:52.959252 containerd[1499]: time="2025-07-15T04:33:52.958437679Z" level=info msg="Container ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:52.964516 containerd[1499]: time="2025-07-15T04:33:52.964454205Z" level=info msg="CreateContainer within sandbox \"c4d3ee1a83ce8250650d37f144c93008c161564c16ccbcfa11534a2d8ef6cf93\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\"" Jul 15 04:33:52.965671 containerd[1499]: time="2025-07-15T04:33:52.965594943Z" level=info msg="StartContainer for \"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\"" Jul 15 04:33:52.967071 containerd[1499]: time="2025-07-15T04:33:52.967012756Z" level=info msg="connecting to shim 
ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0" address="unix:///run/containerd/s/6f2959097a18eacc38327aafc0d980d6787034447734cc2a5be5089215210d19" protocol=ttrpc version=3 Jul 15 04:33:52.990630 systemd[1]: Started cri-containerd-ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0.scope - libcontainer container ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0. Jul 15 04:33:53.016685 systemd-networkd[1412]: calibae28156e6e: Gained IPv6LL Jul 15 04:33:53.028344 containerd[1499]: time="2025-07-15T04:33:53.027579143Z" level=info msg="StartContainer for \"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\" returns successfully" Jul 15 04:33:53.540915 containerd[1499]: time="2025-07-15T04:33:53.540799969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-n8tlt,Uid:ad1d9544-62ac-4e6f-8783-fcb5f0ab849d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:33:53.640821 systemd-networkd[1412]: calida187266567: Link UP Jul 15 04:33:53.641030 systemd-networkd[1412]: calida187266567: Gained carrier Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.563 [INFO][4603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.576 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0 calico-apiserver-b66bcf85- calico-apiserver ad1d9544-62ac-4e6f-8783-fcb5f0ab849d 817 0 2025-07-15 04:33:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b66bcf85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b66bcf85-n8tlt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calida187266567 [] [] }} ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.576 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.600 [INFO][4617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" HandleID="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Workload="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.601 [INFO][4617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" HandleID="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Workload="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b66bcf85-n8tlt", "timestamp":"2025-07-15 04:33:53.600892622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.601 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.601 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.601 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.610 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.615 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.619 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.620 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.622 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.623 [INFO][4617] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.624 [INFO][4617] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.629 [INFO][4617] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.635 [INFO][4617] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.636 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" host="localhost" Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.636 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:53.652871 containerd[1499]: 2025-07-15 04:33:53.636 [INFO][4617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" HandleID="k8s-pod-network.70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Workload="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.638 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0", GenerateName:"calico-apiserver-b66bcf85-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1d9544-62ac-4e6f-8783-fcb5f0ab849d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66bcf85", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b66bcf85-n8tlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida187266567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.638 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.638 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida187266567 ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.641 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.642 
[INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0", GenerateName:"calico-apiserver-b66bcf85-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad1d9544-62ac-4e6f-8783-fcb5f0ab849d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66bcf85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b", Pod:"calico-apiserver-b66bcf85-n8tlt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida187266567", MAC:"96:b3:03:65:55:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:53.653628 containerd[1499]: 2025-07-15 04:33:53.650 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" Namespace="calico-apiserver" Pod="calico-apiserver-b66bcf85-n8tlt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b66bcf85--n8tlt-eth0" Jul 15 04:33:53.656616 systemd-networkd[1412]: calie44c7dcd78b: Gained IPv6LL Jul 15 04:33:53.670242 containerd[1499]: time="2025-07-15T04:33:53.670089387Z" level=info msg="connecting to shim 70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b" address="unix:///run/containerd/s/bb4abcfdc4f79b4581356546811958d7e2afad01844aed08ece730966cc68dfd" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:53.698770 systemd[1]: Started cri-containerd-70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b.scope - libcontainer container 70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b. Jul 15 04:33:53.710165 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:53.729827 containerd[1499]: time="2025-07-15T04:33:53.729786208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66bcf85-n8tlt,Uid:ad1d9544-62ac-4e6f-8783-fcb5f0ab849d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b\"" Jul 15 04:33:53.737021 kubelet[2617]: E0715 04:33:53.736673 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:53.737021 kubelet[2617]: E0715 04:33:53.737016 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:53.760207 kubelet[2617]: I0715 04:33:53.760116 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-nfnml" 
podStartSLOduration=20.695463791 podStartE2EDuration="22.760098689s" podCreationTimestamp="2025-07-15 04:33:31 +0000 UTC" firstStartedPulling="2025-07-15 04:33:50.886204965 +0000 UTC m=+40.428600736" lastFinishedPulling="2025-07-15 04:33:52.950839823 +0000 UTC m=+42.493235634" observedRunningTime="2025-07-15 04:33:53.749113572 +0000 UTC m=+43.291509423" watchObservedRunningTime="2025-07-15 04:33:53.760098689 +0000 UTC m=+43.302494500" Jul 15 04:33:54.567417 containerd[1499]: time="2025-07-15T04:33:54.566869628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-959pd,Uid:8fa57634-f69a-42ff-ae8f-1f9db265d0bc,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:54.574260 containerd[1499]: time="2025-07-15T04:33:54.574177537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4b45f849-dl72s,Uid:3b87aef8-f00a-4f17-be31-0b279e5b7f35,Namespace:calico-system,Attempt:0,}" Jul 15 04:33:54.717630 systemd-networkd[1412]: calid3a2cfb378f: Link UP Jul 15 04:33:54.719101 systemd-networkd[1412]: calid3a2cfb378f: Gained carrier Jul 15 04:33:54.739487 kubelet[2617]: I0715 04:33:54.738424 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:54.739487 kubelet[2617]: E0715 04:33:54.739038 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.613 [INFO][4707] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.629 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--959pd-eth0 csi-node-driver- calico-system 8fa57634-f69a-42ff-ae8f-1f9db265d0bc 682 0 2025-07-15 04:33:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-959pd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid3a2cfb378f [] [] }} ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.629 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.662 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" HandleID="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Workload="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.662 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" HandleID="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Workload="localhost-k8s-csi--node--driver--959pd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000518af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-959pd", "timestamp":"2025-07-15 04:33:54.662366956 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.662 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.662 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.662 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.678 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.683 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.689 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.691 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.694 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.694 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.696 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096 Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.701 [INFO][4739] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.709 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.709 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" host="localhost" Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.710 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:33:54.741911 containerd[1499]: 2025-07-15 04:33:54.710 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" HandleID="k8s-pod-network.0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Workload="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.713 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--959pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fa57634-f69a-42ff-ae8f-1f9db265d0bc", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-959pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3a2cfb378f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.713 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.713 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3a2cfb378f ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.719 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" 
Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.720 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--959pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fa57634-f69a-42ff-ae8f-1f9db265d0bc", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096", Pod:"csi-node-driver-959pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3a2cfb378f", MAC:"46:27:85:ef:cb:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:54.742409 containerd[1499]: 2025-07-15 04:33:54.735 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" Namespace="calico-system" Pod="csi-node-driver-959pd" WorkloadEndpoint="localhost-k8s-csi--node--driver--959pd-eth0" Jul 15 04:33:54.767263 containerd[1499]: time="2025-07-15T04:33:54.767215796Z" level=info msg="connecting to shim 0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096" address="unix:///run/containerd/s/0e4e481083d1ad3d3789de89a787c341382627ead69164c96d1332b54c0ed2d8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:54.805695 systemd[1]: Started cri-containerd-0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096.scope - libcontainer container 0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096. Jul 15 04:33:54.821379 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:54.837142 systemd-networkd[1412]: cali8c40c7466ab: Link UP Jul 15 04:33:54.839016 systemd-networkd[1412]: cali8c40c7466ab: Gained carrier Jul 15 04:33:54.845952 containerd[1499]: time="2025-07-15T04:33:54.845837547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-959pd,Uid:8fa57634-f69a-42ff-ae8f-1f9db265d0bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096\"" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.624 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.643 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0 calico-kube-controllers-b4b45f849- calico-system 
3b87aef8-f00a-4f17-be31-0b279e5b7f35 813 0 2025-07-15 04:33:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b4b45f849 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b4b45f849-dl72s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c40c7466ab [] [] }} ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.644 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.680 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" HandleID="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Workload="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.680 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" HandleID="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Workload="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000497d30), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b4b45f849-dl72s", "timestamp":"2025-07-15 04:33:54.680202476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.680 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.710 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.710 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.780 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.785 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.798 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.802 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.808 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.808 [INFO][4747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" host="localhost" Jul 15 
04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.812 [INFO][4747] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0 Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.818 [INFO][4747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.830 [INFO][4747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.830 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" host="localhost" Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.830 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:33:54.856724 containerd[1499]: 2025-07-15 04:33:54.830 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" HandleID="k8s-pod-network.43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Workload="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.833 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0", GenerateName:"calico-kube-controllers-b4b45f849-", Namespace:"calico-system", SelfLink:"", UID:"3b87aef8-f00a-4f17-be31-0b279e5b7f35", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4b45f849", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b4b45f849-dl72s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c40c7466ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.833 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.833 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c40c7466ab ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.841 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.843 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0", GenerateName:"calico-kube-controllers-b4b45f849-", Namespace:"calico-system", SelfLink:"", UID:"3b87aef8-f00a-4f17-be31-0b279e5b7f35", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4b45f849", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0", Pod:"calico-kube-controllers-b4b45f849-dl72s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c40c7466ab", MAC:"1e:7b:74:8b:5a:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:33:54.857272 containerd[1499]: 2025-07-15 04:33:54.854 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" Namespace="calico-system" Pod="calico-kube-controllers-b4b45f849-dl72s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4b45f849--dl72s-eth0" Jul 15 04:33:54.879495 containerd[1499]: time="2025-07-15T04:33:54.879339067Z" level=info msg="connecting to shim 
43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0" address="unix:///run/containerd/s/df21fb14d01de86db854ba722c933972640512127221fd34000cec617ef61a73" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:33:54.906712 systemd[1]: Started cri-containerd-43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0.scope - libcontainer container 43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0. Jul 15 04:33:54.920353 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:33:54.942714 containerd[1499]: time="2025-07-15T04:33:54.942672691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4b45f849-dl72s,Uid:3b87aef8-f00a-4f17-be31-0b279e5b7f35,Namespace:calico-system,Attempt:0,} returns sandbox id \"43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0\"" Jul 15 04:33:55.064662 systemd-networkd[1412]: calida187266567: Gained IPv6LL Jul 15 04:33:55.211670 containerd[1499]: time="2025-07-15T04:33:55.211554972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:55.212791 containerd[1499]: time="2025-07-15T04:33:55.212760711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:33:55.213763 containerd[1499]: time="2025-07-15T04:33:55.213698935Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:55.215916 containerd[1499]: time="2025-07-15T04:33:55.215858537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:55.216571 containerd[1499]: 
time="2025-07-15T04:33:55.216387608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.265370108s" Jul 15 04:33:55.216571 containerd[1499]: time="2025-07-15T04:33:55.216427207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:33:55.218511 containerd[1499]: time="2025-07-15T04:33:55.217945461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:33:55.222762 containerd[1499]: time="2025-07-15T04:33:55.222713698Z" level=info msg="CreateContainer within sandbox \"6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:33:55.228137 containerd[1499]: time="2025-07-15T04:33:55.228016565Z" level=info msg="Container fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:55.242192 containerd[1499]: time="2025-07-15T04:33:55.242130319Z" level=info msg="CreateContainer within sandbox \"6802c77d8f7ecd06cb93429aedad5f51356e446969718137356b0d9af25d0645\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4\"" Jul 15 04:33:55.242718 containerd[1499]: time="2025-07-15T04:33:55.242676229Z" level=info msg="StartContainer for \"fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4\"" Jul 15 04:33:55.244152 containerd[1499]: time="2025-07-15T04:33:55.244122044Z" level=info msg="connecting to shim 
fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4" address="unix:///run/containerd/s/f7d80123737df3f8bd3c2562af452bfae2d9d4fe1badfb3e8d8363a9fb6928b1" protocol=ttrpc version=3 Jul 15 04:33:55.275681 systemd[1]: Started cri-containerd-fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4.scope - libcontainer container fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4. Jul 15 04:33:55.314066 containerd[1499]: time="2025-07-15T04:33:55.314022905Z" level=info msg="StartContainer for \"fe540ba4b46bc84c19b45d20d77513770e69d157a70190ab1fddf53f21f474f4\" returns successfully" Jul 15 04:33:55.594716 containerd[1499]: time="2025-07-15T04:33:55.594665009Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:55.595399 containerd[1499]: time="2025-07-15T04:33:55.595367197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 04:33:55.596976 containerd[1499]: time="2025-07-15T04:33:55.596941809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 378.960589ms" Jul 15 04:33:55.597019 containerd[1499]: time="2025-07-15T04:33:55.596988328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:33:55.598628 containerd[1499]: time="2025-07-15T04:33:55.598605060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:33:55.600795 containerd[1499]: time="2025-07-15T04:33:55.600732903Z" level=info msg="CreateContainer within 
sandbox \"70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:33:55.610533 containerd[1499]: time="2025-07-15T04:33:55.610485013Z" level=info msg="Container 4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:55.618025 containerd[1499]: time="2025-07-15T04:33:55.617981682Z" level=info msg="CreateContainer within sandbox \"70e5759cc470fc15b5ee8a082971cd8c6a768e6e61211a3552553df00849f66b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683\"" Jul 15 04:33:55.619560 containerd[1499]: time="2025-07-15T04:33:55.619529135Z" level=info msg="StartContainer for \"4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683\"" Jul 15 04:33:55.620868 containerd[1499]: time="2025-07-15T04:33:55.620834672Z" level=info msg="connecting to shim 4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683" address="unix:///run/containerd/s/bb4abcfdc4f79b4581356546811958d7e2afad01844aed08ece730966cc68dfd" protocol=ttrpc version=3 Jul 15 04:33:55.646648 systemd[1]: Started cri-containerd-4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683.scope - libcontainer container 4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683. 
Jul 15 04:33:55.694775 containerd[1499]: time="2025-07-15T04:33:55.694736863Z" level=info msg="StartContainer for \"4fa44c815019152e8f4ab7985dbbdbfd213a05b15875fb215a3f37cb5ec7b683\" returns successfully" Jul 15 04:33:55.754572 kubelet[2617]: E0715 04:33:55.754135 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:55.765728 kubelet[2617]: I0715 04:33:55.765651 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b66bcf85-cbhsv" podStartSLOduration=27.449509983 podStartE2EDuration="30.765631826s" podCreationTimestamp="2025-07-15 04:33:25 +0000 UTC" firstStartedPulling="2025-07-15 04:33:51.901094311 +0000 UTC m=+41.443490122" lastFinishedPulling="2025-07-15 04:33:55.217216194 +0000 UTC m=+44.759611965" observedRunningTime="2025-07-15 04:33:55.764064174 +0000 UTC m=+45.306459985" watchObservedRunningTime="2025-07-15 04:33:55.765631826 +0000 UTC m=+45.308027637" Jul 15 04:33:55.812321 kubelet[2617]: I0715 04:33:55.812256 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b66bcf85-n8tlt" podStartSLOduration=28.945122369 podStartE2EDuration="30.811769581s" podCreationTimestamp="2025-07-15 04:33:25 +0000 UTC" firstStartedPulling="2025-07-15 04:33:53.731007505 +0000 UTC m=+43.273403316" lastFinishedPulling="2025-07-15 04:33:55.597654717 +0000 UTC m=+45.140050528" observedRunningTime="2025-07-15 04:33:55.808832193 +0000 UTC m=+45.351228044" watchObservedRunningTime="2025-07-15 04:33:55.811769581 +0000 UTC m=+45.354165392" Jul 15 04:33:55.849890 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:35090.service - OpenSSH per-connection server daemon (10.0.0.1:35090). 
Jul 15 04:33:55.916563 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 35090 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:33:55.918765 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:33:55.923236 systemd-logind[1478]: New session 9 of user core. Jul 15 04:33:55.930738 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 04:33:56.152667 systemd-networkd[1412]: calid3a2cfb378f: Gained IPv6LL Jul 15 04:33:56.257036 sshd[4973]: Connection closed by 10.0.0.1 port 35090 Jul 15 04:33:56.257326 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Jul 15 04:33:56.262108 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:35090.service: Deactivated successfully. Jul 15 04:33:56.264213 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:33:56.265346 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:33:56.267094 systemd-logind[1478]: Removed session 9. 
Jul 15 04:33:56.664780 systemd-networkd[1412]: cali8c40c7466ab: Gained IPv6LL Jul 15 04:33:56.758662 kubelet[2617]: I0715 04:33:56.758622 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:56.895426 containerd[1499]: time="2025-07-15T04:33:56.895373094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:56.897496 containerd[1499]: time="2025-07-15T04:33:56.896739311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:33:56.903152 containerd[1499]: time="2025-07-15T04:33:56.902838767Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:56.907194 containerd[1499]: time="2025-07-15T04:33:56.906825739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:56.910203 containerd[1499]: time="2025-07-15T04:33:56.910166363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.311526023s" Jul 15 04:33:56.910203 containerd[1499]: time="2025-07-15T04:33:56.910205042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:33:56.912420 containerd[1499]: time="2025-07-15T04:33:56.912369805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" 
Jul 15 04:33:56.915002 containerd[1499]: time="2025-07-15T04:33:56.914967721Z" level=info msg="CreateContainer within sandbox \"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:33:56.949263 containerd[1499]: time="2025-07-15T04:33:56.949156140Z" level=info msg="Container 70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:56.952770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773038102.mount: Deactivated successfully. Jul 15 04:33:56.974215 containerd[1499]: time="2025-07-15T04:33:56.974156596Z" level=info msg="CreateContainer within sandbox \"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142\"" Jul 15 04:33:56.975218 containerd[1499]: time="2025-07-15T04:33:56.975171139Z" level=info msg="StartContainer for \"70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142\"" Jul 15 04:33:56.976711 containerd[1499]: time="2025-07-15T04:33:56.976680753Z" level=info msg="connecting to shim 70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142" address="unix:///run/containerd/s/0e4e481083d1ad3d3789de89a787c341382627ead69164c96d1332b54c0ed2d8" protocol=ttrpc version=3 Jul 15 04:33:57.007682 systemd[1]: Started cri-containerd-70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142.scope - libcontainer container 70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142. 
Jul 15 04:33:57.062015 containerd[1499]: time="2025-07-15T04:33:57.061973852Z" level=info msg="StartContainer for \"70011da9eef32814801ba221edd56d16c7fcae87c875359524952cfc0a7ff142\" returns successfully" Jul 15 04:33:58.318009 kubelet[2617]: I0715 04:33:58.317942 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:58.319222 kubelet[2617]: E0715 04:33:58.319114 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:58.767983 kubelet[2617]: E0715 04:33:58.767945 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:33:59.535931 containerd[1499]: time="2025-07-15T04:33:59.535791577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:59.536584 containerd[1499]: time="2025-07-15T04:33:59.536343249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:33:59.538537 containerd[1499]: time="2025-07-15T04:33:59.537098277Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:59.540825 containerd[1499]: time="2025-07-15T04:33:59.539482000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:33:59.540825 containerd[1499]: time="2025-07-15T04:33:59.540604542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id 
\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.628181778s" Jul 15 04:33:59.540825 containerd[1499]: time="2025-07-15T04:33:59.540635461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:33:59.542130 containerd[1499]: time="2025-07-15T04:33:59.542105078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:33:59.547895 containerd[1499]: time="2025-07-15T04:33:59.547859268Z" level=info msg="CreateContainer within sandbox \"43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:33:59.555962 containerd[1499]: time="2025-07-15T04:33:59.555921822Z" level=info msg="Container c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:33:59.562757 kubelet[2617]: I0715 04:33:59.562513 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:33:59.563406 containerd[1499]: time="2025-07-15T04:33:59.562786154Z" level=info msg="CreateContainer within sandbox \"43fe1070c4f4be107c07408024b00c7559d9dbf1df1bdb4add85f21b3b5df8f0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4\"" Jul 15 04:33:59.563623 containerd[1499]: time="2025-07-15T04:33:59.563596102Z" level=info msg="StartContainer for \"c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4\"" Jul 15 04:33:59.565835 containerd[1499]: time="2025-07-15T04:33:59.565797267Z" level=info msg="connecting to shim 
c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4" address="unix:///run/containerd/s/df21fb14d01de86db854ba722c933972640512127221fd34000cec617ef61a73" protocol=ttrpc version=3 Jul 15 04:33:59.599247 systemd[1]: Started cri-containerd-c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4.scope - libcontainer container c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4. Jul 15 04:33:59.694320 systemd-networkd[1412]: vxlan.calico: Link UP Jul 15 04:33:59.694329 systemd-networkd[1412]: vxlan.calico: Gained carrier Jul 15 04:33:59.700666 containerd[1499]: time="2025-07-15T04:33:59.700409158Z" level=info msg="StartContainer for \"c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4\" returns successfully" Jul 15 04:33:59.793283 kubelet[2617]: I0715 04:33:59.793149 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b4b45f849-dl72s" podStartSLOduration=24.196191355 podStartE2EDuration="28.793130185s" podCreationTimestamp="2025-07-15 04:33:31 +0000 UTC" firstStartedPulling="2025-07-15 04:33:54.944366501 +0000 UTC m=+44.486762312" lastFinishedPulling="2025-07-15 04:33:59.541305331 +0000 UTC m=+49.083701142" observedRunningTime="2025-07-15 04:33:59.791856125 +0000 UTC m=+49.334251976" watchObservedRunningTime="2025-07-15 04:33:59.793130185 +0000 UTC m=+49.335525996" Jul 15 04:33:59.826520 containerd[1499]: time="2025-07-15T04:33:59.826477222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d\" id:\"b513003b5e3e9ca5db23880614070e8f122a0004da52816bfa6c111076dd1dbb\" pid:5217 exit_status:1 exited_at:{seconds:1752554039 nanos:821290784}" Jul 15 04:33:59.845694 containerd[1499]: time="2025-07-15T04:33:59.845647242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4\" 
id:\"ad33befd62244ea3086d4a8c3619f3d55e7184b1b455b9f93376bb57695d2449\" pid:5265 exited_at:{seconds:1752554039 nanos:835647319}" Jul 15 04:33:59.936396 containerd[1499]: time="2025-07-15T04:33:59.936336781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fd3e2a35548bf412b1386e6945c42f7e98a4adb25b31557cc57b117926c584d\" id:\"2dfffe1d8e7c78d784a6319acf2e97d69b93e6ff40c880388e413b5e133033c2\" pid:5292 exit_status:1 exited_at:{seconds:1752554039 nanos:936052865}" Jul 15 04:34:00.656486 containerd[1499]: time="2025-07-15T04:34:00.656330887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:34:00.658156 containerd[1499]: time="2025-07-15T04:34:00.658121220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 04:34:00.658762 containerd[1499]: time="2025-07-15T04:34:00.658741971Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:34:00.661107 containerd[1499]: time="2025-07-15T04:34:00.661072135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:34:00.661983 containerd[1499]: time="2025-07-15T04:34:00.661950162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.119812284s" Jul 15 04:34:00.661983 
containerd[1499]: time="2025-07-15T04:34:00.661980681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:34:00.664123 containerd[1499]: time="2025-07-15T04:34:00.664096489Z" level=info msg="CreateContainer within sandbox \"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 04:34:00.670943 containerd[1499]: time="2025-07-15T04:34:00.670904185Z" level=info msg="Container df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:34:00.685691 containerd[1499]: time="2025-07-15T04:34:00.685643520Z" level=info msg="CreateContainer within sandbox \"0cfb8a4ea43c9d92a032065ff1985aea6d6a0e891a67b2fe76a10dc10b384096\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b\"" Jul 15 04:34:00.686491 containerd[1499]: time="2025-07-15T04:34:00.686392349Z" level=info msg="StartContainer for \"df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b\"" Jul 15 04:34:00.688664 containerd[1499]: time="2025-07-15T04:34:00.688621795Z" level=info msg="connecting to shim df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b" address="unix:///run/containerd/s/0e4e481083d1ad3d3789de89a787c341382627ead69164c96d1332b54c0ed2d8" protocol=ttrpc version=3 Jul 15 04:34:00.712629 systemd[1]: Started cri-containerd-df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b.scope - libcontainer container df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b. 
Jul 15 04:34:00.748694 containerd[1499]: time="2025-07-15T04:34:00.748637319Z" level=info msg="StartContainer for \"df2caa2ace7e188192f8580c4a9ee13040adaaef2e18fe98a553617fb7b40b8b\" returns successfully" Jul 15 04:34:01.273225 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:35100.service - OpenSSH per-connection server daemon (10.0.0.1:35100). Jul 15 04:34:01.350048 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:01.352679 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:01.360643 systemd-logind[1478]: New session 10 of user core. Jul 15 04:34:01.372687 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 04:34:01.400651 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Jul 15 04:34:01.576508 sshd[5395]: Connection closed by 10.0.0.1 port 35100 Jul 15 04:34:01.577147 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:01.588131 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:35100.service: Deactivated successfully. Jul 15 04:34:01.590304 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:34:01.593203 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:34:01.596629 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:35112.service - OpenSSH per-connection server daemon (10.0.0.1:35112). Jul 15 04:34:01.597791 systemd-logind[1478]: Removed session 10. Jul 15 04:34:01.647352 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 35112 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:01.648663 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:01.654664 systemd-logind[1478]: New session 11 of user core. 
Jul 15 04:34:01.659093 kubelet[2617]: I0715 04:34:01.659050 2617 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 04:34:01.659959 kubelet[2617]: I0715 04:34:01.659101 2617 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 04:34:01.662622 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 04:34:01.892500 sshd[5414]: Connection closed by 10.0.0.1 port 35112 Jul 15 04:34:01.892973 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:01.909019 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:35112.service: Deactivated successfully. Jul 15 04:34:01.910882 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:34:01.911646 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:34:01.914706 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:35118.service - OpenSSH per-connection server daemon (10.0.0.1:35118). Jul 15 04:34:01.916024 systemd-logind[1478]: Removed session 11. Jul 15 04:34:01.975239 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 35118 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:01.976618 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:01.980769 systemd-logind[1478]: New session 12 of user core. Jul 15 04:34:01.993669 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:34:02.125308 sshd[5428]: Connection closed by 10.0.0.1 port 35118 Jul 15 04:34:02.126281 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:02.130275 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:35118.service: Deactivated successfully. Jul 15 04:34:02.132122 systemd[1]: session-12.scope: Deactivated successfully. 
Jul 15 04:34:02.134710 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:34:02.136780 systemd-logind[1478]: Removed session 12. Jul 15 04:34:06.615315 kubelet[2617]: I0715 04:34:06.614910 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:34:06.735403 containerd[1499]: time="2025-07-15T04:34:06.735352892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\" id:\"77dffd6bc1a088113f98f7d0c07f0bf1a3691fa9997bfeeb230ab0cdb810b8ff\" pid:5464 exited_at:{seconds:1752554046 nanos:734615542}" Jul 15 04:34:06.760865 kubelet[2617]: I0715 04:34:06.749588 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-959pd" podStartSLOduration=29.935202579 podStartE2EDuration="35.749570507s" podCreationTimestamp="2025-07-15 04:33:31 +0000 UTC" firstStartedPulling="2025-07-15 04:33:54.848351822 +0000 UTC m=+44.390747633" lastFinishedPulling="2025-07-15 04:34:00.66271975 +0000 UTC m=+50.205115561" observedRunningTime="2025-07-15 04:34:00.805934564 +0000 UTC m=+50.348330375" watchObservedRunningTime="2025-07-15 04:34:06.749570507 +0000 UTC m=+56.291966318" Jul 15 04:34:06.807526 containerd[1499]: time="2025-07-15T04:34:06.807488110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\" id:\"53f23583b8af0efce6411827ab892912c317b3b7c1e8b4fb14aac439f0449dbb\" pid:5489 exited_at:{seconds:1752554046 nanos:807181754}" Jul 15 04:34:07.137346 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:60690.service - OpenSSH per-connection server daemon (10.0.0.1:60690). 
Jul 15 04:34:07.197049 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 60690 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:07.198601 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:07.202983 systemd-logind[1478]: New session 13 of user core. Jul 15 04:34:07.212633 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 04:34:07.388613 sshd[5505]: Connection closed by 10.0.0.1 port 60690 Jul 15 04:34:07.388910 sshd-session[5502]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:07.401076 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:60690.service: Deactivated successfully. Jul 15 04:34:07.403556 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 04:34:07.404557 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:34:07.406487 systemd-logind[1478]: Removed session 13. Jul 15 04:34:07.408834 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Jul 15 04:34:07.465392 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:07.466671 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:07.471127 systemd-logind[1478]: New session 14 of user core. Jul 15 04:34:07.488611 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 04:34:07.693196 sshd[5521]: Connection closed by 10.0.0.1 port 60704 Jul 15 04:34:07.693641 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:07.703388 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:60704.service: Deactivated successfully. Jul 15 04:34:07.706927 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 04:34:07.707595 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. 
Jul 15 04:34:07.711088 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:60706.service - OpenSSH per-connection server daemon (10.0.0.1:60706). Jul 15 04:34:07.712068 systemd-logind[1478]: Removed session 14. Jul 15 04:34:07.766757 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 60706 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:07.768061 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:07.772354 systemd-logind[1478]: New session 15 of user core. Jul 15 04:34:07.782617 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 04:34:09.560067 sshd[5535]: Connection closed by 10.0.0.1 port 60706 Jul 15 04:34:09.561812 sshd-session[5532]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:09.571080 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:60706.service: Deactivated successfully. Jul 15 04:34:09.576847 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 04:34:09.577522 systemd[1]: session-15.scope: Consumed 544ms CPU time, 71.8M memory peak. Jul 15 04:34:09.580312 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Jul 15 04:34:09.586923 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:60712.service - OpenSSH per-connection server daemon (10.0.0.1:60712). Jul 15 04:34:09.591565 systemd-logind[1478]: Removed session 15. Jul 15 04:34:09.657996 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 60712 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:09.659199 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:09.663754 systemd-logind[1478]: New session 16 of user core. Jul 15 04:34:09.674745 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 15 04:34:09.988019 sshd[5560]: Connection closed by 10.0.0.1 port 60712 Jul 15 04:34:09.988924 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:10.000406 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:60712.service: Deactivated successfully. Jul 15 04:34:10.005711 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 04:34:10.008204 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. Jul 15 04:34:10.010517 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:60720.service - OpenSSH per-connection server daemon (10.0.0.1:60720). Jul 15 04:34:10.013272 systemd-logind[1478]: Removed session 16. Jul 15 04:34:10.063277 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 60720 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:10.064284 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:10.070227 systemd-logind[1478]: New session 17 of user core. Jul 15 04:34:10.079895 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 04:34:10.212795 sshd[5582]: Connection closed by 10.0.0.1 port 60720 Jul 15 04:34:10.213320 sshd-session[5577]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:10.216989 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:60720.service: Deactivated successfully. Jul 15 04:34:10.218986 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 04:34:10.221244 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Jul 15 04:34:10.223230 systemd-logind[1478]: Removed session 17. 
Jul 15 04:34:12.421884 kubelet[2617]: I0715 04:34:12.421833 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:34:13.886135 containerd[1499]: time="2025-07-15T04:34:13.885151774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9ae41eb4dd71dc387a1c7acb3d627b0d528c9ad233cc8adfca7373812d242e4\" id:\"1154b00fd40c3f6ebbf3f8515d69c5018d4a523b4ea157f8f477f7b46fb05338\" pid:5611 exited_at:{seconds:1752554053 nanos:883165916}" Jul 15 04:34:15.229331 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). Jul 15 04:34:15.285822 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:15.287172 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:15.291868 systemd-logind[1478]: New session 18 of user core. Jul 15 04:34:15.305715 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 04:34:15.453771 sshd[5626]: Connection closed by 10.0.0.1 port 37956 Jul 15 04:34:15.454100 sshd-session[5623]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:15.459225 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:37956.service: Deactivated successfully. Jul 15 04:34:15.461172 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 04:34:15.462344 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Jul 15 04:34:15.464479 systemd-logind[1478]: Removed session 18. Jul 15 04:34:18.540395 kubelet[2617]: E0715 04:34:18.540358 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 04:34:20.470004 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:37968.service - OpenSSH per-connection server daemon (10.0.0.1:37968). 
Jul 15 04:34:20.528736 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 37968 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:20.529924 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:20.533747 systemd-logind[1478]: New session 19 of user core. Jul 15 04:34:20.544626 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 04:34:20.683316 sshd[5657]: Connection closed by 10.0.0.1 port 37968 Jul 15 04:34:20.683682 sshd-session[5654]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:20.687884 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:37968.service: Deactivated successfully. Jul 15 04:34:20.689747 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 04:34:20.692792 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Jul 15 04:34:20.701593 systemd-logind[1478]: Removed session 19. Jul 15 04:34:25.706167 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:34554.service - OpenSSH per-connection server daemon (10.0.0.1:34554). Jul 15 04:34:25.762829 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 34554 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:34:25.764138 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:34:25.768662 systemd-logind[1478]: New session 20 of user core. Jul 15 04:34:25.782670 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 04:34:25.916594 sshd[5677]: Connection closed by 10.0.0.1 port 34554 Jul 15 04:34:25.918685 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Jul 15 04:34:25.922779 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:34554.service: Deactivated successfully. Jul 15 04:34:25.925040 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 04:34:25.925996 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. 
Jul 15 04:34:25.927245 systemd-logind[1478]: Removed session 20. Jul 15 04:34:26.312308 containerd[1499]: time="2025-07-15T04:34:26.312219009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca28b50eeb534279be7f6e247f26f9ae612c399da4b4288d77e3edf73dba36a0\" id:\"d610e15d1b1b1932b823f0e0d316c82634f59d0ea5c9c7d6fc1b73b0fb34bb08\" pid:5700 exited_at:{seconds:1752554066 nanos:311719653}"