Jun 21 02:30:21.833723 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 21 02:30:21.833744 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat Jun 21 00:00:47 -00 2025
Jun 21 02:30:21.833754 kernel: KASLR enabled
Jun 21 02:30:21.833760 kernel: efi: EFI v2.7 by EDK II
Jun 21 02:30:21.833765 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jun 21 02:30:21.833770 kernel: random: crng init done
Jun 21 02:30:21.833777 kernel: secureboot: Secure boot disabled
Jun 21 02:30:21.833782 kernel: ACPI: Early table checksum verification disabled
Jun 21 02:30:21.833788 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jun 21 02:30:21.833795 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jun 21 02:30:21.833801 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833807 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833813 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833818 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833825 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833833 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833839 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833845 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833851 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:30:21.833857 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jun 21 02:30:21.833863 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jun 21 02:30:21.833869 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:30:21.833875 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jun 21 02:30:21.833882 kernel: Zone ranges:
Jun 21 02:30:21.833888 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:30:21.833895 kernel: DMA32 empty
Jun 21 02:30:21.833901 kernel: Normal empty
Jun 21 02:30:21.833907 kernel: Device empty
Jun 21 02:30:21.833913 kernel: Movable zone start for each node
Jun 21 02:30:21.833919 kernel: Early memory node ranges
Jun 21 02:30:21.833925 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jun 21 02:30:21.833935 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jun 21 02:30:21.833942 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jun 21 02:30:21.833948 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jun 21 02:30:21.833954 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jun 21 02:30:21.833960 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jun 21 02:30:21.833966 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jun 21 02:30:21.833974 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jun 21 02:30:21.833980 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jun 21 02:30:21.833986 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jun 21 02:30:21.833995 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jun 21 02:30:21.834002 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jun 21 02:30:21.834008 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jun 21 02:30:21.834016 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:30:21.834025 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jun 21 02:30:21.834033 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jun 21 02:30:21.834040 kernel: psci: probing for conduit method from ACPI.
Jun 21 02:30:21.834046 kernel: psci: PSCIv1.1 detected in firmware.
Jun 21 02:30:21.834052 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 21 02:30:21.834059 kernel: psci: Trusted OS migration not required
Jun 21 02:30:21.834065 kernel: psci: SMC Calling Convention v1.1
Jun 21 02:30:21.834072 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 21 02:30:21.834078 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jun 21 02:30:21.834086 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jun 21 02:30:21.834093 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jun 21 02:30:21.834099 kernel: Detected PIPT I-cache on CPU0
Jun 21 02:30:21.834106 kernel: CPU features: detected: GIC system register CPU interface
Jun 21 02:30:21.834112 kernel: CPU features: detected: Spectre-v4
Jun 21 02:30:21.834118 kernel: CPU features: detected: Spectre-BHB
Jun 21 02:30:21.834125 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 21 02:30:21.834131 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 21 02:30:21.834138 kernel: CPU features: detected: ARM erratum 1418040
Jun 21 02:30:21.834144 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jun 21 02:30:21.834150 kernel: alternatives: applying boot alternatives
Jun 21 02:30:21.834158 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db
Jun 21 02:30:21.834166 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 02:30:21.834172 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 02:30:21.834179 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 21 02:30:21.834185 kernel: Fallback order for Node 0: 0
Jun 21 02:30:21.834191 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jun 21 02:30:21.834198 kernel: Policy zone: DMA
Jun 21 02:30:21.834204 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 02:30:21.834211 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jun 21 02:30:21.834217 kernel: software IO TLB: area num 4.
Jun 21 02:30:21.834224 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jun 21 02:30:21.834230 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jun 21 02:30:21.834238 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 21 02:30:21.834244 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 02:30:21.834252 kernel: rcu: RCU event tracing is enabled.
Jun 21 02:30:21.834258 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 21 02:30:21.834265 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 02:30:21.834271 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 02:30:21.834278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 02:30:21.834284 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 21 02:30:21.834291 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 02:30:21.834298 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 02:30:21.834304 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 21 02:30:21.834312 kernel: GICv3: 256 SPIs implemented
Jun 21 02:30:21.834318 kernel: GICv3: 0 Extended SPIs implemented
Jun 21 02:30:21.834324 kernel: Root IRQ handler: gic_handle_irq
Jun 21 02:30:21.834331 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 21 02:30:21.834338 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jun 21 02:30:21.834344 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 21 02:30:21.834350 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 21 02:30:21.834357 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jun 21 02:30:21.834364 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jun 21 02:30:21.834370 kernel: GICv3: using LPI property table @0x00000000400f0000
Jun 21 02:30:21.834377 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
Jun 21 02:30:21.834383 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 02:30:21.834391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:30:21.834397 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 21 02:30:21.834404 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 21 02:30:21.834410 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 21 02:30:21.834417 kernel: arm-pv: using stolen time PV
Jun 21 02:30:21.834424 kernel: Console: colour dummy device 80x25
Jun 21 02:30:21.834430 kernel: ACPI: Core revision 20240827
Jun 21 02:30:21.834437 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 21 02:30:21.834443 kernel: pid_max: default: 32768 minimum: 301
Jun 21 02:30:21.834450 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 02:30:21.834458 kernel: landlock: Up and running.
Jun 21 02:30:21.834464 kernel: SELinux: Initializing.
Jun 21 02:30:21.834471 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 02:30:21.834478 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 02:30:21.834484 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 02:30:21.834491 kernel: rcu: Max phase no-delay instances is 400.
Jun 21 02:30:21.834498 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 02:30:21.834504 kernel: Remapping and enabling EFI services.
Jun 21 02:30:21.834511 kernel: smp: Bringing up secondary CPUs ...
Jun 21 02:30:21.834523 kernel: Detected PIPT I-cache on CPU1
Jun 21 02:30:21.834530 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 21 02:30:21.834537 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
Jun 21 02:30:21.834545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:30:21.834552 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 21 02:30:21.834559 kernel: Detected PIPT I-cache on CPU2
Jun 21 02:30:21.834566 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jun 21 02:30:21.834573 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
Jun 21 02:30:21.834581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:30:21.834588 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jun 21 02:30:21.834595 kernel: Detected PIPT I-cache on CPU3
Jun 21 02:30:21.834602 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jun 21 02:30:21.834609 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
Jun 21 02:30:21.834616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:30:21.834658 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jun 21 02:30:21.834718 kernel: smp: Brought up 1 node, 4 CPUs
Jun 21 02:30:21.834728 kernel: SMP: Total of 4 processors activated.
Jun 21 02:30:21.834739 kernel: CPU: All CPU(s) started at EL1
Jun 21 02:30:21.834746 kernel: CPU features: detected: 32-bit EL0 Support
Jun 21 02:30:21.834753 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 21 02:30:21.834762 kernel: CPU features: detected: Common not Private translations
Jun 21 02:30:21.834772 kernel: CPU features: detected: CRC32 instructions
Jun 21 02:30:21.834781 kernel: CPU features: detected: Enhanced Virtualization Traps
Jun 21 02:30:21.834788 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 21 02:30:21.834795 kernel: CPU features: detected: LSE atomic instructions
Jun 21 02:30:21.834802 kernel: CPU features: detected: Privileged Access Never
Jun 21 02:30:21.834811 kernel: CPU features: detected: RAS Extension Support
Jun 21 02:30:21.834819 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 21 02:30:21.834826 kernel: alternatives: applying system-wide alternatives
Jun 21 02:30:21.834835 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jun 21 02:30:21.834842 kernel: Memory: 2424408K/2572288K available (11136K kernel code, 2284K rwdata, 8980K rodata, 39488K init, 1037K bss, 125728K reserved, 16384K cma-reserved)
Jun 21 02:30:21.834850 kernel: devtmpfs: initialized
Jun 21 02:30:21.834859 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 02:30:21.834867 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 21 02:30:21.834876 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 21 02:30:21.834885 kernel: 0 pages in range for non-PLT usage
Jun 21 02:30:21.834893 kernel: 508496 pages in range for PLT usage
Jun 21 02:30:21.834903 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 02:30:21.834913 kernel: SMBIOS 3.0.0 present.
Jun 21 02:30:21.834920 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jun 21 02:30:21.834927 kernel: DMI: Memory slots populated: 1/1
Jun 21 02:30:21.834934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 02:30:21.834941 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 21 02:30:21.834948 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 21 02:30:21.834956 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 21 02:30:21.834963 kernel: audit: initializing netlink subsys (disabled)
Jun 21 02:30:21.834970 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jun 21 02:30:21.834977 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 02:30:21.834984 kernel: cpuidle: using governor menu
Jun 21 02:30:21.834991 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 21 02:30:21.834998 kernel: ASID allocator initialised with 32768 entries
Jun 21 02:30:21.835005 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 02:30:21.835011 kernel: Serial: AMBA PL011 UART driver
Jun 21 02:30:21.835020 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 21 02:30:21.835027 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 21 02:30:21.835034 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 21 02:30:21.835041 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 21 02:30:21.835048 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 02:30:21.835055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 02:30:21.835062 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 21 02:30:21.835069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 21 02:30:21.835076 kernel: ACPI: Added _OSI(Module Device)
Jun 21 02:30:21.835084 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 02:30:21.835091 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 02:30:21.835098 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 02:30:21.835105 kernel: ACPI: Interpreter enabled
Jun 21 02:30:21.835112 kernel: ACPI: Using GIC for interrupt routing
Jun 21 02:30:21.835119 kernel: ACPI: MCFG table detected, 1 entries
Jun 21 02:30:21.835126 kernel: ACPI: CPU0 has been hot-added
Jun 21 02:30:21.835133 kernel: ACPI: CPU1 has been hot-added
Jun 21 02:30:21.835141 kernel: ACPI: CPU2 has been hot-added
Jun 21 02:30:21.835147 kernel: ACPI: CPU3 has been hot-added
Jun 21 02:30:21.835156 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 21 02:30:21.835163 kernel: printk: legacy console [ttyAMA0] enabled
Jun 21 02:30:21.835170 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 21 02:30:21.835305 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 21 02:30:21.835376 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 21 02:30:21.835437 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 21 02:30:21.835570 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 21 02:30:21.835667 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 21 02:30:21.835679 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 21 02:30:21.835687 kernel: PCI host bridge to bus 0000:00
Jun 21 02:30:21.835770 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 21 02:30:21.835831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 21 02:30:21.835889 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 21 02:30:21.835946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 21 02:30:21.836036 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jun 21 02:30:21.836119 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 21 02:30:21.836191 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jun 21 02:30:21.836256 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jun 21 02:30:21.836323 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 21 02:30:21.836399 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jun 21 02:30:21.836466 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jun 21 02:30:21.836534 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jun 21 02:30:21.836594 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 21 02:30:21.836668 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 21 02:30:21.836741 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 21 02:30:21.836751 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 21 02:30:21.836759 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 21 02:30:21.836767 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 21 02:30:21.836777 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 21 02:30:21.836785 kernel: iommu: Default domain type: Translated
Jun 21 02:30:21.836792 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 21 02:30:21.836799 kernel: efivars: Registered efivars operations
Jun 21 02:30:21.836807 kernel: vgaarb: loaded
Jun 21 02:30:21.836814 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 21 02:30:21.836821 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 02:30:21.836829 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 02:30:21.836836 kernel: pnp: PnP ACPI init
Jun 21 02:30:21.836912 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 21 02:30:21.836923 kernel: pnp: PnP ACPI: found 1 devices
Jun 21 02:30:21.836930 kernel: NET: Registered PF_INET protocol family
Jun 21 02:30:21.836938 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 02:30:21.836946 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 21 02:30:21.836954 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 02:30:21.836961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 21 02:30:21.836969 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 21 02:30:21.836978 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 21 02:30:21.836985 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 02:30:21.836993 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 02:30:21.837000 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 02:30:21.837007 kernel: PCI: CLS 0 bytes, default 64
Jun 21 02:30:21.837014 kernel: kvm [1]: HYP mode not available
Jun 21 02:30:21.837022 kernel: Initialise system trusted keyrings
Jun 21 02:30:21.837029 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 21 02:30:21.837036 kernel: Key type asymmetric registered
Jun 21 02:30:21.837045 kernel: Asymmetric key parser 'x509' registered
Jun 21 02:30:21.837052 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jun 21 02:30:21.837060 kernel: io scheduler mq-deadline registered
Jun 21 02:30:21.837067 kernel: io scheduler kyber registered
Jun 21 02:30:21.837074 kernel: io scheduler bfq registered
Jun 21 02:30:21.837082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 21 02:30:21.837089 kernel: ACPI: button: Power Button [PWRB]
Jun 21 02:30:21.837097 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 21 02:30:21.837165 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 21 02:30:21.837176 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 02:30:21.837184 kernel: thunder_xcv, ver 1.0
Jun 21 02:30:21.837191 kernel: thunder_bgx, ver 1.0
Jun 21 02:30:21.837198 kernel: nicpf, ver 1.0
Jun 21 02:30:21.837205 kernel: nicvf, ver 1.0
Jun 21 02:30:21.837285 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 21 02:30:21.837348 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-21T02:30:21 UTC (1750473021)
Jun 21 02:30:21.837358 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 21 02:30:21.837365 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jun 21 02:30:21.837375 kernel: watchdog: NMI not fully supported
Jun 21 02:30:21.837382 kernel: watchdog: Hard watchdog permanently disabled
Jun 21 02:30:21.837389 kernel: NET: Registered PF_INET6 protocol family
Jun 21 02:30:21.837396 kernel: Segment Routing with IPv6
Jun 21 02:30:21.837404 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 02:30:21.837411 kernel: NET: Registered PF_PACKET protocol family
Jun 21 02:30:21.837418 kernel: Key type dns_resolver registered
Jun 21 02:30:21.837425 kernel: registered taskstats version 1
Jun 21 02:30:21.837432 kernel: Loading compiled-in X.509 certificates
Jun 21 02:30:21.837442 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 0d4b619b81572779adc2f9dd5f1325c23c2a41ec'
Jun 21 02:30:21.837449 kernel: Demotion targets for Node 0: null
Jun 21 02:30:21.837456 kernel: Key type .fscrypt registered
Jun 21 02:30:21.837463 kernel: Key type fscrypt-provisioning registered
Jun 21 02:30:21.837471 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 02:30:21.837478 kernel: ima: Allocated hash algorithm: sha1
Jun 21 02:30:21.837486 kernel: ima: No architecture policies found
Jun 21 02:30:21.837493 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 21 02:30:21.837501 kernel: clk: Disabling unused clocks
Jun 21 02:30:21.837509 kernel: PM: genpd: Disabling unused power domains
Jun 21 02:30:21.837516 kernel: Warning: unable to open an initial console.
Jun 21 02:30:21.837524 kernel: Freeing unused kernel memory: 39488K
Jun 21 02:30:21.837531 kernel: Run /init as init process
Jun 21 02:30:21.837538 kernel: with arguments:
Jun 21 02:30:21.837546 kernel: /init
Jun 21 02:30:21.837553 kernel: with environment:
Jun 21 02:30:21.837560 kernel: HOME=/
Jun 21 02:30:21.837567 kernel: TERM=linux
Jun 21 02:30:21.837575 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 02:30:21.837583 systemd[1]: Successfully made /usr/ read-only.
Jun 21 02:30:21.837594 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 02:30:21.837603 systemd[1]: Detected virtualization kvm.
Jun 21 02:30:21.837611 systemd[1]: Detected architecture arm64.
Jun 21 02:30:21.837618 systemd[1]: Running in initrd.
Jun 21 02:30:21.837643 systemd[1]: No hostname configured, using default hostname.
Jun 21 02:30:21.837653 systemd[1]: Hostname set to .
Jun 21 02:30:21.837661 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 02:30:21.837669 systemd[1]: Queued start job for default target initrd.target.
Jun 21 02:30:21.837677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 02:30:21.837684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 02:30:21.837693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 02:30:21.837701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 02:30:21.837716 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 21 02:30:21.837726 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 21 02:30:21.837735 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 21 02:30:21.837743 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 21 02:30:21.837751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 02:30:21.837758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 02:30:21.837766 systemd[1]: Reached target paths.target - Path Units.
Jun 21 02:30:21.837774 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 02:30:21.837784 systemd[1]: Reached target swap.target - Swaps.
Jun 21 02:30:21.837791 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 02:30:21.837799 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 02:30:21.837806 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 02:30:21.837815 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 21 02:30:21.837822 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 21 02:30:21.837830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 02:30:21.837838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 02:30:21.837847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 02:30:21.837855 systemd[1]: Reached target sockets.target - Socket Units.
Jun 21 02:30:21.837863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 21 02:30:21.837871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 02:30:21.837879 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 21 02:30:21.837887 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 21 02:30:21.837895 systemd[1]: Starting systemd-fsck-usr.service...
Jun 21 02:30:21.837902 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 02:30:21.837910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 02:30:21.837919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 02:30:21.837926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 02:30:21.837935 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 21 02:30:21.837943 systemd[1]: Finished systemd-fsck-usr.service.
Jun 21 02:30:21.837953 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 02:30:21.837980 systemd-journald[245]: Collecting audit messages is disabled.
Jun 21 02:30:21.837999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 02:30:21.838007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 21 02:30:21.838018 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 21 02:30:21.838026 systemd-journald[245]: Journal started
Jun 21 02:30:21.838046 systemd-journald[245]: Runtime Journal (/run/log/journal/ede6738ee8f14d2a88b70ca12220f360) is 6M, max 48.5M, 42.4M free.
Jun 21 02:30:21.820582 systemd-modules-load[246]: Inserted module 'overlay'
Jun 21 02:30:21.843178 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 02:30:21.843207 kernel: Bridge firewalling registered
Jun 21 02:30:21.843671 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jun 21 02:30:21.849810 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 02:30:21.851228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 02:30:21.855708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 02:30:21.857337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 02:30:21.867172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 21 02:30:21.868510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 02:30:21.873596 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 21 02:30:21.875143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 02:30:21.877230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 02:30:21.877583 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 21 02:30:21.881999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 02:30:21.886566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 21 02:30:21.899842 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db
Jun 21 02:30:21.919730 systemd-resolved[291]: Positive Trust Anchors:
Jun 21 02:30:21.919747 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 21 02:30:21.919778 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 21 02:30:21.924546 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jun 21 02:30:21.925601 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 21 02:30:21.929496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 21 02:30:21.978655 kernel: SCSI subsystem initialized
Jun 21 02:30:21.983643 kernel: Loading iSCSI transport class v2.0-870.
Jun 21 02:30:21.994668 kernel: iscsi: registered transport (tcp)
Jun 21 02:30:22.011658 kernel: iscsi: registered transport (qla4xxx)
Jun 21 02:30:22.011684 kernel: QLogic iSCSI HBA Driver
Jun 21 02:30:22.033529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 02:30:22.050348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 02:30:22.053066 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 02:30:22.099661 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 21 02:30:22.102047 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 21 02:30:22.175659 kernel: raid6: neonx8 gen() 15786 MB/s Jun 21 02:30:22.192673 kernel: raid6: neonx4 gen() 15812 MB/s Jun 21 02:30:22.209648 kernel: raid6: neonx2 gen() 13196 MB/s Jun 21 02:30:22.226645 kernel: raid6: neonx1 gen() 10425 MB/s Jun 21 02:30:22.243650 kernel: raid6: int64x8 gen() 6902 MB/s Jun 21 02:30:22.260650 kernel: raid6: int64x4 gen() 7347 MB/s Jun 21 02:30:22.277648 kernel: raid6: int64x2 gen() 6102 MB/s Jun 21 02:30:22.294715 kernel: raid6: int64x1 gen() 5053 MB/s Jun 21 02:30:22.294741 kernel: raid6: using algorithm neonx4 gen() 15812 MB/s Jun 21 02:30:22.312694 kernel: raid6: .... xor() 12335 MB/s, rmw enabled Jun 21 02:30:22.312713 kernel: raid6: using neon recovery algorithm Jun 21 02:30:22.318133 kernel: xor: measuring software checksum speed Jun 21 02:30:22.318148 kernel: 8regs : 21545 MB/sec Jun 21 02:30:22.318812 kernel: 32regs : 21664 MB/sec Jun 21 02:30:22.320011 kernel: arm64_neon : 28061 MB/sec Jun 21 02:30:22.320039 kernel: xor: using function: arm64_neon (28061 MB/sec) Jun 21 02:30:22.372655 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 02:30:22.379220 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 02:30:22.381737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:30:22.409179 systemd-udevd[500]: Using default interface naming scheme 'v255'. Jun 21 02:30:22.413290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:30:22.415727 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 02:30:22.437621 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jun 21 02:30:22.460691 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 02:30:22.463028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:30:22.517834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jun 21 02:30:22.521062 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 02:30:22.569698 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 21 02:30:22.570005 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 21 02:30:22.580639 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 02:30:22.580689 kernel: GPT:9289727 != 19775487 Jun 21 02:30:22.580699 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 02:30:22.580716 kernel: GPT:9289727 != 19775487 Jun 21 02:30:22.581173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:30:22.581297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:30:22.585441 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 02:30:22.585460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:30:22.585255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:30:22.587370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:30:22.617026 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 02:30:22.621664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:30:22.622939 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 02:30:22.632332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 02:30:22.646728 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 02:30:22.653833 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 02:30:22.655072 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jun 21 02:30:22.658161 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:30:22.660449 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:30:22.662694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:30:22.665465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 02:30:22.667405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 02:30:22.682450 disk-uuid[593]: Primary Header is updated. Jun 21 02:30:22.682450 disk-uuid[593]: Secondary Entries is updated. Jun 21 02:30:22.682450 disk-uuid[593]: Secondary Header is updated. Jun 21 02:30:22.686650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:30:22.687474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:30:23.696657 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:30:23.697102 disk-uuid[596]: The operation has completed successfully. Jun 21 02:30:23.721580 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 02:30:23.721711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 02:30:23.751603 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 02:30:23.775428 sh[613]: Success Jun 21 02:30:23.788695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 02:30:23.790366 kernel: device-mapper: uevent: version 1.0.3 Jun 21 02:30:23.790400 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 02:30:23.798913 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 21 02:30:23.821848 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 02:30:23.824544 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 21 02:30:23.835243 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 02:30:23.842843 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 02:30:23.842949 kernel: BTRFS: device fsid 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 devid 1 transid 51 /dev/mapper/usr (253:0) scanned by mount (625) Jun 21 02:30:23.844437 kernel: BTRFS info (device dm-0): first mount of filesystem 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 Jun 21 02:30:23.845383 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:30:23.845402 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 02:30:23.849195 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 02:30:23.850420 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:30:23.851762 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 02:30:23.852505 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 02:30:23.854067 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 02:30:23.885638 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (656) Jun 21 02:30:23.885691 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:30:23.885709 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:30:23.887178 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:30:23.892646 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:30:23.893325 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 02:30:23.895428 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 21 02:30:23.962708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:30:23.966601 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 02:30:24.012581 systemd-networkd[798]: lo: Link UP Jun 21 02:30:24.012595 systemd-networkd[798]: lo: Gained carrier Jun 21 02:30:24.013367 systemd-networkd[798]: Enumeration completed Jun 21 02:30:24.013645 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:30:24.013881 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:30:24.013885 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:30:24.014821 systemd-networkd[798]: eth0: Link UP Jun 21 02:30:24.014824 systemd-networkd[798]: eth0: Gained carrier Jun 21 02:30:24.014832 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:30:24.015825 systemd[1]: Reached target network.target - Network. 
Jun 21 02:30:24.033033 ignition[699]: Ignition 2.21.0 Jun 21 02:30:24.033048 ignition[699]: Stage: fetch-offline Jun 21 02:30:24.033082 ignition[699]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:24.033103 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:24.033287 ignition[699]: parsed url from cmdline: "" Jun 21 02:30:24.033290 ignition[699]: no config URL provided Jun 21 02:30:24.033294 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 02:30:24.033300 ignition[699]: no config at "/usr/lib/ignition/user.ign" Jun 21 02:30:24.033320 ignition[699]: op(1): [started] loading QEMU firmware config module Jun 21 02:30:24.033324 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 21 02:30:24.039685 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:30:24.042291 ignition[699]: op(1): [finished] loading QEMU firmware config module Jun 21 02:30:24.080192 ignition[699]: parsing config with SHA512: 58d98b3793495da88de2c7f43b67ffeccf5fadf60b802fdb29637888eb2d7184afa676fea82c58e2b206da9a2ada9020e010946f5c9ee3bdc1f3123a2bfae8cd Jun 21 02:30:24.086103 unknown[699]: fetched base config from "system" Jun 21 02:30:24.086119 unknown[699]: fetched user config from "qemu" Jun 21 02:30:24.086526 ignition[699]: fetch-offline: fetch-offline passed Jun 21 02:30:24.088834 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:30:24.086589 ignition[699]: Ignition finished successfully Jun 21 02:30:24.090413 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 21 02:30:24.091382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 21 02:30:24.119871 ignition[811]: Ignition 2.21.0 Jun 21 02:30:24.119888 ignition[811]: Stage: kargs Jun 21 02:30:24.120032 ignition[811]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:24.120042 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:24.121478 ignition[811]: kargs: kargs passed Jun 21 02:30:24.121551 ignition[811]: Ignition finished successfully Jun 21 02:30:24.127007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 02:30:24.129003 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 02:30:24.150105 ignition[819]: Ignition 2.21.0 Jun 21 02:30:24.150123 ignition[819]: Stage: disks Jun 21 02:30:24.150369 ignition[819]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:24.150385 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:24.154214 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 02:30:24.151745 ignition[819]: disks: disks passed Jun 21 02:30:24.155566 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 02:30:24.151799 ignition[819]: Ignition finished successfully Jun 21 02:30:24.157332 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 02:30:24.158899 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:30:24.160744 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:30:24.162249 systemd[1]: Reached target basic.target - Basic System. Jun 21 02:30:24.165066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 02:30:24.198024 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 02:30:24.202948 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 02:30:24.205859 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jun 21 02:30:24.264652 kernel: EXT4-fs (vda9): mounted filesystem 9ad072e4-7680-4e5b-adc0-72c770c20c86 r/w with ordered data mode. Quota mode: none. Jun 21 02:30:24.264825 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 02:30:24.266121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 02:30:24.268671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 02:30:24.270402 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 02:30:24.271400 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 02:30:24.271443 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 02:30:24.271482 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:30:24.282734 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 02:30:24.286074 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 02:30:24.287661 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (837) Jun 21 02:30:24.291186 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:30:24.291211 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:30:24.291221 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:30:24.294122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 02:30:24.342375 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 02:30:24.345465 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory Jun 21 02:30:24.348472 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 02:30:24.351407 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 02:30:24.425114 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 02:30:24.427303 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 02:30:24.428912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 02:30:24.455701 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:30:24.472459 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 02:30:24.482712 ignition[951]: INFO : Ignition 2.21.0 Jun 21 02:30:24.482712 ignition[951]: INFO : Stage: mount Jun 21 02:30:24.485778 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:24.485778 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:24.485778 ignition[951]: INFO : mount: mount passed Jun 21 02:30:24.485778 ignition[951]: INFO : Ignition finished successfully Jun 21 02:30:24.488020 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 02:30:24.491210 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 02:30:24.841806 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 02:30:24.843356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 21 02:30:24.867641 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (963) Jun 21 02:30:24.870000 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:30:24.870028 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:30:24.870038 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:30:24.873895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 02:30:24.913158 ignition[980]: INFO : Ignition 2.21.0 Jun 21 02:30:24.913158 ignition[980]: INFO : Stage: files Jun 21 02:30:24.915477 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:24.915477 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:24.915477 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Jun 21 02:30:24.918658 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 02:30:24.918658 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 02:30:24.921264 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 02:30:24.921264 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 02:30:24.921264 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 02:30:24.920802 unknown[980]: wrote ssh authorized keys file for user: core Jun 21 02:30:24.926396 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 21 02:30:24.926396 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jun 21 02:30:24.958055 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Jun 21 02:30:25.180101 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 21 02:30:25.180101 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:30:25.183864 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:30:25.183794 systemd-networkd[798]: eth0: Gained IPv6LL Jun 21 02:30:25.198337 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:30:25.198337 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 
21 02:30:25.198337 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 21 02:30:25.198337 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 21 02:30:25.198337 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jun 21 02:30:25.682924 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 02:30:26.245168 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 21 02:30:26.245168 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 02:30:26.248754 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:30:26.263147 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:30:26.263147 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 02:30:26.263147 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 21 02:30:26.267868 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:30:26.267868 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:30:26.267868 ignition[980]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jun 21 02:30:26.267868 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 21 02:30:26.280524 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:30:26.283926 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:30:26.286838 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 21 02:30:26.286838 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 21 02:30:26.286838 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 02:30:26.286838 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:30:26.286838 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:30:26.286838 ignition[980]: INFO : files: files passed Jun 21 02:30:26.286838 ignition[980]: INFO : Ignition finished successfully Jun 21 02:30:26.287415 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 02:30:26.290280 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 02:30:26.293795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 02:30:26.311552 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 02:30:26.312611 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory Jun 21 02:30:26.311676 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 21 02:30:26.316514 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:30:26.316514 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:30:26.320787 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:30:26.317651 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:30:26.319572 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 02:30:26.322639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 02:30:26.352528 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 02:30:26.352664 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 02:30:26.354887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 02:30:26.355915 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 02:30:26.357901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 02:30:26.358719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 02:30:26.381972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:30:26.384410 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 02:30:26.408877 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:30:26.410173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:30:26.412189 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 02:30:26.413952 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jun 21 02:30:26.414088 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:30:26.416447 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 02:30:26.417510 systemd[1]: Stopped target basic.target - Basic System. Jun 21 02:30:26.419297 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 02:30:26.421039 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:30:26.422773 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 02:30:26.424670 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:30:26.426687 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 02:30:26.428474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:30:26.430619 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 02:30:26.432543 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 02:30:26.434555 systemd[1]: Stopped target swap.target - Swaps. Jun 21 02:30:26.436049 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 02:30:26.436190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:30:26.438381 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:30:26.440315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:30:26.442195 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 02:30:26.442992 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:30:26.444253 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 02:30:26.444394 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 02:30:26.447027 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jun 21 02:30:26.447170 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:30:26.449510 systemd[1]: Stopped target paths.target - Path Units. Jun 21 02:30:26.451034 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 02:30:26.451872 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:30:26.453182 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 02:30:26.454921 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 02:30:26.456771 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 02:30:26.456865 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 02:30:26.458619 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 02:30:26.458739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 02:30:26.460579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 02:30:26.460735 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:30:26.463058 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 02:30:26.463170 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 02:30:26.465569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 02:30:26.467863 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 02:30:26.468800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 02:30:26.468937 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:30:26.470837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 02:30:26.470948 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 02:30:26.476674 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jun 21 02:30:26.481806 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 02:30:26.490509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 02:30:26.494271 ignition[1035]: INFO : Ignition 2.21.0 Jun 21 02:30:26.494271 ignition[1035]: INFO : Stage: umount Jun 21 02:30:26.500021 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:30:26.500021 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:30:26.500021 ignition[1035]: INFO : umount: umount passed Jun 21 02:30:26.500021 ignition[1035]: INFO : Ignition finished successfully Jun 21 02:30:26.497344 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 02:30:26.497439 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 02:30:26.499048 systemd[1]: Stopped target network.target - Network. Jun 21 02:30:26.500715 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 02:30:26.500791 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 02:30:26.502169 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 02:30:26.502217 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 02:30:26.504173 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 02:30:26.504225 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 02:30:26.505900 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 02:30:26.505940 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 02:30:26.507796 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 02:30:26.509616 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 02:30:26.519257 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 02:30:26.519369 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jun 21 02:30:26.522703 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 02:30:26.522921 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 02:30:26.522959 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:30:26.526129 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 02:30:26.529884 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 02:30:26.532027 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 02:30:26.534588 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 02:30:26.534748 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 02:30:26.535887 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 02:30:26.535919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:30:26.538516 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 02:30:26.539434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 02:30:26.539488 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:30:26.541391 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 02:30:26.541436 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 02:30:26.544323 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 02:30:26.544365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 02:30:26.546950 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:30:26.552867 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 02:30:26.559824 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jun 21 02:30:26.559920 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 02:30:26.562129 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 02:30:26.562233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 02:30:26.563827 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 02:30:26.563959 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:30:26.566163 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 02:30:26.566224 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 02:30:26.568786 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 02:30:26.568819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:30:26.570607 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 02:30:26.570709 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 02:30:26.573618 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 02:30:26.573697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 02:30:26.576398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 02:30:26.576455 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 02:30:26.579370 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 02:30:26.579428 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 02:30:26.582246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 02:30:26.583408 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 02:30:26.583469 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:30:26.586496 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jun 21 02:30:26.586540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:30:26.589963 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 21 02:30:26.590008 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:30:26.593212 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 02:30:26.593255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 02:30:26.595411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:30:26.595458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:30:26.599290 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 02:30:26.600667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 02:30:26.602513 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 02:30:26.605301 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 02:30:26.623712 systemd[1]: Switching root. Jun 21 02:30:26.655793 systemd-journald[245]: Journal stopped Jun 21 02:30:27.514132 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jun 21 02:30:27.514188 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 02:30:27.514202 kernel: SELinux: policy capability open_perms=1 Jun 21 02:30:27.514213 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 02:30:27.514222 kernel: SELinux: policy capability always_check_network=0 Jun 21 02:30:27.514236 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 02:30:27.514248 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 02:30:27.514257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 02:30:27.514268 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 02:30:27.514277 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 02:30:27.514287 kernel: audit: type=1403 audit(1750473026.835:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 02:30:27.514301 systemd[1]: Successfully loaded SELinux policy in 47.592ms. Jun 21 02:30:27.514324 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.075ms. Jun 21 02:30:27.514336 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 02:30:27.514347 systemd[1]: Detected virtualization kvm. Jun 21 02:30:27.514357 systemd[1]: Detected architecture arm64. Jun 21 02:30:27.514370 systemd[1]: Detected first boot. Jun 21 02:30:27.514380 systemd[1]: Initializing machine ID from VM UUID. Jun 21 02:30:27.514390 kernel: NET: Registered PF_VSOCK protocol family Jun 21 02:30:27.514400 zram_generator::config[1083]: No configuration found. Jun 21 02:30:27.514412 systemd[1]: Populated /etc with preset unit settings. Jun 21 02:30:27.514423 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jun 21 02:30:27.514434 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 02:30:27.514444 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 02:30:27.514454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 02:30:27.514466 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 02:30:27.514476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 02:30:27.514491 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 02:30:27.514501 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 02:30:27.514512 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 02:30:27.514522 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 02:30:27.514532 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 02:30:27.514542 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 02:30:27.514555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:30:27.514566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:30:27.514577 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 02:30:27.514588 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 02:30:27.514598 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 02:30:27.514609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 02:30:27.514619 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jun 21 02:30:27.514645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:30:27.514660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:30:27.514671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 02:30:27.514682 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 02:30:27.514699 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 02:30:27.514711 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 02:30:27.514721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:30:27.514731 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:30:27.514742 systemd[1]: Reached target slices.target - Slice Units. Jun 21 02:30:27.514752 systemd[1]: Reached target swap.target - Swaps. Jun 21 02:30:27.514764 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 02:30:27.514779 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 02:30:27.514789 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 02:30:27.514800 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:30:27.514812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 02:30:27.514822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:30:27.514833 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 02:30:27.514845 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 02:30:27.514855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 02:30:27.514867 systemd[1]: Mounting media.mount - External Media Directory... 
Jun 21 02:30:27.514877 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 02:30:27.514888 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 02:30:27.514898 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 02:30:27.514909 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 02:30:27.514920 systemd[1]: Reached target machines.target - Containers. Jun 21 02:30:27.514930 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 02:30:27.514941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:30:27.514953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 02:30:27.514964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 02:30:27.514974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:30:27.514985 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:30:27.514999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:30:27.515010 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 02:30:27.515021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:30:27.515032 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 02:30:27.515042 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 02:30:27.515053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 02:30:27.515064 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jun 21 02:30:27.515074 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 02:30:27.515084 kernel: loop: module loaded Jun 21 02:30:27.515094 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:30:27.515105 kernel: fuse: init (API version 7.41) Jun 21 02:30:27.515115 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 02:30:27.515125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 02:30:27.515136 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 02:30:27.515148 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 02:30:27.515158 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 02:30:27.515168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:30:27.515179 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 02:30:27.515190 systemd[1]: Stopped verity-setup.service. Jun 21 02:30:27.515201 kernel: ACPI: bus type drm_connector registered Jun 21 02:30:27.515212 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 02:30:27.515222 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 02:30:27.515233 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 02:30:27.515243 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 02:30:27.515254 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 02:30:27.515264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 02:30:27.515274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jun 21 02:30:27.515286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 02:30:27.515297 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 02:30:27.515308 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 02:30:27.515318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:30:27.515328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:30:27.515340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 02:30:27.515374 systemd-journald[1151]: Collecting audit messages is disabled. Jun 21 02:30:27.515397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:30:27.515408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:30:27.515418 systemd-journald[1151]: Journal started Jun 21 02:30:27.515440 systemd-journald[1151]: Runtime Journal (/run/log/journal/ede6738ee8f14d2a88b70ca12220f360) is 6M, max 48.5M, 42.4M free. Jun 21 02:30:27.227232 systemd[1]: Queued start job for default target multi-user.target. Jun 21 02:30:27.248723 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 02:30:27.249145 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 02:30:27.519564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:30:27.521673 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 02:30:27.522583 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 02:30:27.522830 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 02:30:27.524245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:30:27.524434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:30:27.525944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jun 21 02:30:27.527455 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:30:27.529174 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 02:30:27.530957 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 02:30:27.544262 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:30:27.551645 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 02:30:27.557481 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 02:30:27.559922 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 02:30:27.561202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 02:30:27.561266 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:30:27.564067 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 02:30:27.570770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 02:30:27.571993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:30:27.580774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 02:30:27.584800 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 02:30:27.586766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:30:27.590943 systemd-journald[1151]: Time spent on flushing to /var/log/journal/ede6738ee8f14d2a88b70ca12220f360 is 15.763ms for 880 entries. 
Jun 21 02:30:27.590943 systemd-journald[1151]: System Journal (/var/log/journal/ede6738ee8f14d2a88b70ca12220f360) is 8M, max 195.6M, 187.6M free. Jun 21 02:30:27.610773 systemd-journald[1151]: Received client request to flush runtime journal. Jun 21 02:30:27.590937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 02:30:27.593539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 02:30:27.596815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 02:30:27.599160 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 02:30:27.601864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 02:30:27.604893 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 02:30:27.606225 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 02:30:27.618867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 02:30:27.621174 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 02:30:27.624886 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 02:30:27.630202 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 02:30:27.630718 kernel: loop0: detected capacity change from 0 to 107312 Jun 21 02:30:27.646311 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 02:30:27.649992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 02:30:27.660477 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jun 21 02:30:27.660497 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. 
Jun 21 02:30:27.667731 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:30:27.672123 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 02:30:27.677053 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 02:30:27.679664 kernel: loop1: detected capacity change from 0 to 138376 Jun 21 02:30:27.705747 kernel: loop2: detected capacity change from 0 to 211168 Jun 21 02:30:27.716991 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 02:30:27.719711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 02:30:27.727654 kernel: loop3: detected capacity change from 0 to 107312 Jun 21 02:30:27.733759 kernel: loop4: detected capacity change from 0 to 138376 Jun 21 02:30:27.741896 kernel: loop5: detected capacity change from 0 to 211168 Jun 21 02:30:27.746366 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jun 21 02:30:27.746384 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jun 21 02:30:27.747019 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 21 02:30:27.747439 (sd-merge)[1222]: Merged extensions into '/usr'. Jun 21 02:30:27.752592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:30:27.755355 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 02:30:27.755372 systemd[1]: Reloading... Jun 21 02:30:27.830679 zram_generator::config[1248]: No configuration found. Jun 21 02:30:27.888693 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 21 02:30:27.909588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:30:27.973913 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 02:30:27.974546 systemd[1]: Reloading finished in 218 ms. Jun 21 02:30:27.989441 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 02:30:27.992123 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 02:30:28.005847 systemd[1]: Starting ensure-sysext.service... Jun 21 02:30:28.007788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 02:30:28.015759 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jun 21 02:30:28.015774 systemd[1]: Reloading... Jun 21 02:30:28.023913 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 02:30:28.023953 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 02:30:28.024178 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 02:30:28.024362 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 02:30:28.024998 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 02:30:28.025201 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jun 21 02:30:28.025252 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jun 21 02:30:28.027681 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. 
Jun 21 02:30:28.027700 systemd-tmpfiles[1286]: Skipping /boot Jun 21 02:30:28.036975 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 02:30:28.036992 systemd-tmpfiles[1286]: Skipping /boot Jun 21 02:30:28.058945 zram_generator::config[1311]: No configuration found. Jun 21 02:30:28.133332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:30:28.195868 systemd[1]: Reloading finished in 179 ms. Jun 21 02:30:28.219414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 02:30:28.225342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:30:28.238896 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 02:30:28.241407 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 02:30:28.255595 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 02:30:28.259090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 02:30:28.263978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:30:28.267739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 02:30:28.276981 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 02:30:28.279911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:30:28.282868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:30:28.293161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 21 02:30:28.295972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:30:28.297411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:30:28.297540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:30:28.299229 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 02:30:28.301313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:30:28.301708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:30:28.303466 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jun 21 02:30:28.304180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:30:28.304336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:30:28.310969 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:30:28.311157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:30:28.316215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:30:28.318911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:30:28.321187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:30:28.323437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:30:28.325863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 21 02:30:28.326002 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:30:28.327313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 02:30:28.329217 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 02:30:28.341215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 02:30:28.343582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:30:28.345463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:30:28.345641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:30:28.347241 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:30:28.347398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:30:28.352401 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 02:30:28.355069 augenrules[1401]: No rules Jun 21 02:30:28.356874 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 02:30:28.360171 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 02:30:28.368665 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 02:30:28.370319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:30:28.370505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:30:28.381698 systemd[1]: Finished ensure-sysext.service. Jun 21 02:30:28.389997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:30:28.392823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 21 02:30:28.395253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:30:28.398900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:30:28.400070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:30:28.400121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:30:28.402673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 02:30:28.404710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:30:28.411393 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 02:30:28.412794 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 02:30:28.413238 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 02:30:28.413413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:30:28.414822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:30:28.414987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:30:28.421932 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:30:28.423074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:30:28.427533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 21 02:30:28.429789 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 21 02:30:28.479082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 02:30:28.483608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 02:30:28.511700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 02:30:28.556334 systemd-networkd[1434]: lo: Link UP Jun 21 02:30:28.556343 systemd-networkd[1434]: lo: Gained carrier Jun 21 02:30:28.557201 systemd-networkd[1434]: Enumeration completed Jun 21 02:30:28.557322 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:30:28.557821 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:30:28.557831 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:30:28.558312 systemd-networkd[1434]: eth0: Link UP Jun 21 02:30:28.558427 systemd-networkd[1434]: eth0: Gained carrier Jun 21 02:30:28.558446 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:30:28.561527 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 02:30:28.564175 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 02:30:28.569403 systemd-resolved[1353]: Positive Trust Anchors: Jun 21 02:30:28.572962 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 02:30:28.573086 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 02:30:28.574700 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:30:28.586454 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 02:30:28.587825 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 02:30:28.593845 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 21 02:30:28.593904 systemd-timesyncd[1435]: Initial clock synchronization to Sat 2025-06-21 02:30:28.356631 UTC. Jun 21 02:30:28.595759 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jun 21 02:30:28.617600 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 02:30:28.622042 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 02:30:28.623208 systemd[1]: Reached target network.target - Network. Jun 21 02:30:28.624117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:30:28.625515 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:30:28.627020 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jun 21 02:30:28.628869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 21 02:30:28.630912 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 21 02:30:28.632072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 21 02:30:28.633831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 21 02:30:28.635253 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 21 02:30:28.635295 systemd[1]: Reached target paths.target - Path Units.
Jun 21 02:30:28.636328 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 02:30:28.640126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 21 02:30:28.642783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 21 02:30:28.646229 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 21 02:30:28.647621 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 21 02:30:28.648832 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 21 02:30:28.652024 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 21 02:30:28.653705 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 21 02:30:28.655437 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 21 02:30:28.662870 systemd[1]: Reached target sockets.target - Socket Units.
Jun 21 02:30:28.663826 systemd[1]: Reached target basic.target - Basic System.
Jun 21 02:30:28.664768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 21 02:30:28.664801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 21 02:30:28.665874 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 21 02:30:28.667960 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 21 02:30:28.669814 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 21 02:30:28.673480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 21 02:30:28.675567 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 21 02:30:28.676672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 21 02:30:28.677705 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 21 02:30:28.681763 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 21 02:30:28.683745 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 21 02:30:28.684061 jq[1474]: false
Jun 21 02:30:28.685910 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 21 02:30:28.694982 extend-filesystems[1475]: Found /dev/vda6
Jun 21 02:30:28.697886 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 21 02:30:28.700948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 02:30:28.702950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 21 02:30:28.703421 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 21 02:30:28.704076 systemd[1]: Starting update-engine.service - Update Engine...
Jun 21 02:30:28.705822 extend-filesystems[1475]: Found /dev/vda9
Jun 21 02:30:28.706899 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 21 02:30:28.711690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 21 02:30:28.714854 extend-filesystems[1475]: Checking size of /dev/vda9
Jun 21 02:30:28.714587 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 21 02:30:28.715334 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 21 02:30:28.715619 systemd[1]: motdgen.service: Deactivated successfully.
Jun 21 02:30:28.715847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 21 02:30:28.724551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 21 02:30:28.726743 jq[1494]: true
Jun 21 02:30:28.727014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 21 02:30:28.736273 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 21 02:30:28.749379 tar[1500]: linux-arm64/LICENSE
Jun 21 02:30:28.749681 tar[1500]: linux-arm64/helm
Jun 21 02:30:28.751043 jq[1502]: true
Jun 21 02:30:28.761117 extend-filesystems[1475]: Resized partition /dev/vda9
Jun 21 02:30:28.781482 extend-filesystems[1518]: resize2fs 1.47.2 (1-Jan-2025)
Jun 21 02:30:28.789656 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 21 02:30:28.794418 update_engine[1493]: I20250621 02:30:28.794246 1493 main.cc:92] Flatcar Update Engine starting
Jun 21 02:30:28.798803 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (Power Button)
Jun 21 02:30:28.801632 systemd-logind[1489]: New seat seat0.
Jun 21 02:30:28.806529 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 21 02:30:28.815532 dbus-daemon[1472]: [system] SELinux support is enabled
Jun 21 02:30:28.816000 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 21 02:30:28.820162 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 21 02:30:28.820202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 21 02:30:28.822362 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 21 02:30:28.822391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 21 02:30:28.822791 update_engine[1493]: I20250621 02:30:28.822654 1493 update_check_scheduler.cc:74] Next update check in 7m59s
Jun 21 02:30:28.826557 systemd[1]: Started update-engine.service - Update Engine.
Jun 21 02:30:28.826675 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 21 02:30:28.827488 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 21 02:30:28.830674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 02:30:28.837896 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 21 02:30:28.839048 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 21 02:30:28.839048 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 21 02:30:28.839048 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 21 02:30:28.846830 extend-filesystems[1475]: Resized filesystem in /dev/vda9
Jun 21 02:30:28.840362 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 21 02:30:28.840562 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 21 02:30:28.865953 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Jun 21 02:30:28.871388 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 21 02:30:28.874807 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 21 02:30:28.904584 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 21 02:30:29.007586 containerd[1503]: time="2025-06-21T02:30:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 21 02:30:29.010544 containerd[1503]: time="2025-06-21T02:30:29.010506934Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 21 02:30:29.023344 containerd[1503]: time="2025-06-21T02:30:29.023243589Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.936µs"
Jun 21 02:30:29.023382 containerd[1503]: time="2025-06-21T02:30:29.023342527Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 21 02:30:29.023382 containerd[1503]: time="2025-06-21T02:30:29.023362866Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 21 02:30:29.023669 containerd[1503]: time="2025-06-21T02:30:29.023583913Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 21 02:30:29.023694 containerd[1503]: time="2025-06-21T02:30:29.023675981Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 21 02:30:29.023723 containerd[1503]: time="2025-06-21T02:30:29.023710137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 21 02:30:29.023841 containerd[1503]: time="2025-06-21T02:30:29.023818856Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 21 02:30:29.023869 containerd[1503]: time="2025-06-21T02:30:29.023841408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024228 containerd[1503]: time="2025-06-21T02:30:29.024196714Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024254 containerd[1503]: time="2025-06-21T02:30:29.024225320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024254 containerd[1503]: time="2025-06-21T02:30:29.024238944Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024254 containerd[1503]: time="2025-06-21T02:30:29.024247212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024405 containerd[1503]: time="2025-06-21T02:30:29.024331672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024703 containerd[1503]: time="2025-06-21T02:30:29.024681001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024797 containerd[1503]: time="2025-06-21T02:30:29.024775125Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 21 02:30:29.024831 containerd[1503]: time="2025-06-21T02:30:29.024814871Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 21 02:30:29.024919 containerd[1503]: time="2025-06-21T02:30:29.024897934Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 21 02:30:29.025259 containerd[1503]: time="2025-06-21T02:30:29.025237676Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 21 02:30:29.025335 containerd[1503]: time="2025-06-21T02:30:29.025316663Z" level=info msg="metadata content store policy set" policy=shared
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.029929360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030003883Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030026163Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030050926Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030070450Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030087645Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030104102Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030121918Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030136590Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030152348Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030166127Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030195510Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030339511Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 21 02:30:29.030627 containerd[1503]: time="2025-06-21T02:30:29.030364313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030383876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030399712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030414811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030427697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030444232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030459447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030476215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030489334Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 21 02:30:29.030866 containerd[1503]: time="2025-06-21T02:30:29.030506490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 21 02:30:29.031014 containerd[1503]: time="2025-06-21T02:30:29.030984411Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 21 02:30:29.031014 containerd[1503]: time="2025-06-21T02:30:29.031010689Z" level=info msg="Start snapshots syncer"
Jun 21 02:30:29.031046 containerd[1503]: time="2025-06-21T02:30:29.031036500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 21 02:30:29.031445 containerd[1503]: time="2025-06-21T02:30:29.031399337Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 21 02:30:29.031539 containerd[1503]: time="2025-06-21T02:30:29.031488066Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 21 02:30:29.031957 containerd[1503]: time="2025-06-21T02:30:29.031933150Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 21 02:30:29.032088 containerd[1503]: time="2025-06-21T02:30:29.032067526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 21 02:30:29.032112 containerd[1503]: time="2025-06-21T02:30:29.032096365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 21 02:30:29.032112 containerd[1503]: time="2025-06-21T02:30:29.032108203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 21 02:30:29.032144 containerd[1503]: time="2025-06-21T02:30:29.032119226Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 21 02:30:29.032144 containerd[1503]: time="2025-06-21T02:30:29.032131414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 21 02:30:29.032144 containerd[1503]: time="2025-06-21T02:30:29.032141816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 21 02:30:29.032203 containerd[1503]: time="2025-06-21T02:30:29.032153577Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 21 02:30:29.032203 containerd[1503]: time="2025-06-21T02:30:29.032181679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 21 02:30:29.032203 containerd[1503]: time="2025-06-21T02:30:29.032194060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 21 02:30:29.032250 containerd[1503]: time="2025-06-21T02:30:29.032204618Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 21 02:30:29.032250 containerd[1503]: time="2025-06-21T02:30:29.032240948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 02:30:29.032285 containerd[1503]: time="2025-06-21T02:30:29.032256784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 02:30:29.032285 containerd[1503]: time="2025-06-21T02:30:29.032266139Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 02:30:29.032285 containerd[1503]: time="2025-06-21T02:30:29.032280384Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 02:30:29.032332 containerd[1503]: time="2025-06-21T02:30:29.032289117Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 21 02:30:29.032332 containerd[1503]: time="2025-06-21T02:30:29.032302469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 21 02:30:29.032332 containerd[1503]: time="2025-06-21T02:30:29.032312871Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 21 02:30:29.032628 containerd[1503]: time="2025-06-21T02:30:29.032440376Z" level=info msg="runtime interface created"
Jun 21 02:30:29.032628 containerd[1503]: time="2025-06-21T02:30:29.032450895Z" level=info msg="created NRI interface"
Jun 21 02:30:29.032628 containerd[1503]: time="2025-06-21T02:30:29.032463820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 21 02:30:29.032628 containerd[1503]: time="2025-06-21T02:30:29.032480083Z" level=info msg="Connect containerd service"
Jun 21 02:30:29.032628 containerd[1503]: time="2025-06-21T02:30:29.032506322Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 21 02:30:29.033315 containerd[1503]: time="2025-06-21T02:30:29.033285830Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 02:30:29.153298 containerd[1503]: time="2025-06-21T02:30:29.153184815Z" level=info msg="Start subscribing containerd event"
Jun 21 02:30:29.153384 containerd[1503]: time="2025-06-21T02:30:29.153356258Z" level=info msg="Start recovering state"
Jun 21 02:30:29.153586 containerd[1503]: time="2025-06-21T02:30:29.153564963Z" level=info msg="Start event monitor"
Jun 21 02:30:29.153615 containerd[1503]: time="2025-06-21T02:30:29.153577616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 21 02:30:29.153661 containerd[1503]: time="2025-06-21T02:30:29.153598693Z" level=info msg="Start cni network conf syncer for default"
Jun 21 02:30:29.153661 containerd[1503]: time="2025-06-21T02:30:29.153656215Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 21 02:30:29.153702 containerd[1503]: time="2025-06-21T02:30:29.153663707Z" level=info msg="Start streaming server"
Jun 21 02:30:29.153702 containerd[1503]: time="2025-06-21T02:30:29.153675428Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 21 02:30:29.153702 containerd[1503]: time="2025-06-21T02:30:29.153682997Z" level=info msg="runtime interface starting up..."
Jun 21 02:30:29.153702 containerd[1503]: time="2025-06-21T02:30:29.153688354Z" level=info msg="starting plugins..."
Jun 21 02:30:29.153784 containerd[1503]: time="2025-06-21T02:30:29.153709546Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 21 02:30:29.154076 containerd[1503]: time="2025-06-21T02:30:29.154056857Z" level=info msg="containerd successfully booted in 0.146901s"
Jun 21 02:30:29.154164 systemd[1]: Started containerd.service - containerd container runtime.
Jun 21 02:30:29.186802 tar[1500]: linux-arm64/README.md
Jun 21 02:30:29.212697 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 21 02:30:29.765696 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 21 02:30:29.783852 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 21 02:30:29.787000 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 21 02:30:29.792779 systemd-networkd[1434]: eth0: Gained IPv6LL
Jun 21 02:30:29.801796 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 21 02:30:29.803690 systemd[1]: Reached target network-online.target - Network is Online.
Jun 21 02:30:29.806069 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 21 02:30:29.808504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:30:29.822175 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 21 02:30:29.823766 systemd[1]: issuegen.service: Deactivated successfully.
Jun 21 02:30:29.823981 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 21 02:30:29.831909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 21 02:30:29.841000 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 21 02:30:29.842679 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 21 02:30:29.844786 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 21 02:30:29.846542 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 21 02:30:29.850124 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 21 02:30:29.851239 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 21 02:30:29.852429 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jun 21 02:30:29.853782 systemd[1]: Reached target getty.target - Login Prompts.
Jun 21 02:30:30.364723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:30.366218 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 21 02:30:30.368240 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:30:30.368495 systemd[1]: Startup finished in 2.113s (kernel) + 5.208s (initrd) + 3.590s (userspace) = 10.911s.
Jun 21 02:30:30.767986 kubelet[1611]: E0621 02:30:30.767887 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:30:30.770535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:30:30.770696 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:30:30.770973 systemd[1]: kubelet.service: Consumed 824ms CPU time, 259.7M memory peak.
Jun 21 02:30:34.825931 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 21 02:30:34.827384 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:36632.service - OpenSSH per-connection server daemon (10.0.0.1:36632).
Jun 21 02:30:34.916154 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 36632 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:34.919762 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:34.925554 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 21 02:30:34.926648 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 21 02:30:34.933321 systemd-logind[1489]: New session 1 of user core.
Jun 21 02:30:34.948950 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 21 02:30:34.951897 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 21 02:30:34.973380 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 21 02:30:34.975544 systemd-logind[1489]: New session c1 of user core.
Jun 21 02:30:35.095261 systemd[1629]: Queued start job for default target default.target.
Jun 21 02:30:35.111667 systemd[1629]: Created slice app.slice - User Application Slice.
Jun 21 02:30:35.111693 systemd[1629]: Reached target paths.target - Paths.
Jun 21 02:30:35.111731 systemd[1629]: Reached target timers.target - Timers.
Jun 21 02:30:35.112917 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 21 02:30:35.121600 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 21 02:30:35.121682 systemd[1629]: Reached target sockets.target - Sockets.
Jun 21 02:30:35.121723 systemd[1629]: Reached target basic.target - Basic System.
Jun 21 02:30:35.121752 systemd[1629]: Reached target default.target - Main User Target.
Jun 21 02:30:35.121777 systemd[1629]: Startup finished in 140ms.
Jun 21 02:30:35.121922 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 21 02:30:35.123219 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 21 02:30:35.188010 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:36642.service - OpenSSH per-connection server daemon (10.0.0.1:36642).
Jun 21 02:30:35.240201 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 36642 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:35.241412 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:35.246786 systemd-logind[1489]: New session 2 of user core.
Jun 21 02:30:35.260799 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 21 02:30:35.312471 sshd[1642]: Connection closed by 10.0.0.1 port 36642
Jun 21 02:30:35.312839 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Jun 21 02:30:35.323597 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:36642.service: Deactivated successfully.
Jun 21 02:30:35.325329 systemd[1]: session-2.scope: Deactivated successfully.
Jun 21 02:30:35.328103 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit.
Jun 21 02:30:35.330580 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:36650.service - OpenSSH per-connection server daemon (10.0.0.1:36650).
Jun 21 02:30:35.331330 systemd-logind[1489]: Removed session 2.
Jun 21 02:30:35.388984 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 36650 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:35.390232 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:35.394724 systemd-logind[1489]: New session 3 of user core.
Jun 21 02:30:35.409784 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 21 02:30:35.457057 sshd[1650]: Connection closed by 10.0.0.1 port 36650
Jun 21 02:30:35.457558 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Jun 21 02:30:35.464513 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:36650.service: Deactivated successfully.
Jun 21 02:30:35.466241 systemd[1]: session-3.scope: Deactivated successfully.
Jun 21 02:30:35.468128 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit.
Jun 21 02:30:35.470802 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:36666.service - OpenSSH per-connection server daemon (10.0.0.1:36666).
Jun 21 02:30:35.471548 systemd-logind[1489]: Removed session 3.
Jun 21 02:30:35.528317 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 36666 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:35.529619 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:35.533481 systemd-logind[1489]: New session 4 of user core.
Jun 21 02:30:35.546765 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 21 02:30:35.596681 sshd[1658]: Connection closed by 10.0.0.1 port 36666
Jun 21 02:30:35.597060 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jun 21 02:30:35.610478 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:36666.service: Deactivated successfully.
Jun 21 02:30:35.611929 systemd[1]: session-4.scope: Deactivated successfully.
Jun 21 02:30:35.613824 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit.
Jun 21 02:30:35.614848 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:36680.service - OpenSSH per-connection server daemon (10.0.0.1:36680).
Jun 21 02:30:35.615658 systemd-logind[1489]: Removed session 4.
Jun 21 02:30:35.674234 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 36680 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:35.675691 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:35.679684 systemd-logind[1489]: New session 5 of user core.
Jun 21 02:30:35.695757 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 21 02:30:35.753744 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 21 02:30:35.754011 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:30:35.768167 sudo[1667]: pam_unix(sudo:session): session closed for user root
Jun 21 02:30:35.769692 sshd[1666]: Connection closed by 10.0.0.1 port 36680
Jun 21 02:30:35.770113 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Jun 21 02:30:35.790793 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:36680.service: Deactivated successfully.
Jun 21 02:30:35.792153 systemd[1]: session-5.scope: Deactivated successfully.
Jun 21 02:30:35.792777 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit.
Jun 21 02:30:35.795301 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:36688.service - OpenSSH per-connection server daemon (10.0.0.1:36688).
Jun 21 02:30:35.796238 systemd-logind[1489]: Removed session 5.
Jun 21 02:30:35.849190 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 36688 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:35.850561 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:35.854695 systemd-logind[1489]: New session 6 of user core.
Jun 21 02:30:35.866770 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 21 02:30:35.917939 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 21 02:30:35.918201 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:30:35.992806 sudo[1677]: pam_unix(sudo:session): session closed for user root
Jun 21 02:30:35.998137 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 21 02:30:35.998745 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:30:36.007690 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 21 02:30:36.045836 augenrules[1699]: No rules
Jun 21 02:30:36.047186 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 21 02:30:36.047382 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 21 02:30:36.048368 sudo[1676]: pam_unix(sudo:session): session closed for user root
Jun 21 02:30:36.049757 sshd[1675]: Connection closed by 10.0.0.1 port 36688
Jun 21 02:30:36.050203 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Jun 21 02:30:36.061698 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:36688.service: Deactivated successfully.
Jun 21 02:30:36.063061 systemd[1]: session-6.scope: Deactivated successfully.
Jun 21 02:30:36.065188 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit.
Jun 21 02:30:36.067467 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:36700.service - OpenSSH per-connection server daemon (10.0.0.1:36700).
Jun 21 02:30:36.067990 systemd-logind[1489]: Removed session 6.
Jun 21 02:30:36.124775 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 36700 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:30:36.125970 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:30:36.130638 systemd-logind[1489]: New session 7 of user core.
Jun 21 02:30:36.139769 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 21 02:30:36.190383 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 21 02:30:36.190696 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:30:36.574638 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 21 02:30:36.592948 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 21 02:30:36.848031 dockerd[1731]: time="2025-06-21T02:30:36.847906629Z" level=info msg="Starting up"
Jun 21 02:30:36.849226 dockerd[1731]: time="2025-06-21T02:30:36.849195213Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 21 02:30:36.892572 dockerd[1731]: time="2025-06-21T02:30:36.892535231Z" level=info msg="Loading containers: start."
Jun 21 02:30:36.901651 kernel: Initializing XFRM netlink socket
Jun 21 02:30:37.233645 systemd-networkd[1434]: docker0: Link UP
Jun 21 02:30:37.239660 dockerd[1731]: time="2025-06-21T02:30:37.239608055Z" level=info msg="Loading containers: done."
Jun 21 02:30:37.256038 dockerd[1731]: time="2025-06-21T02:30:37.255990838Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 21 02:30:37.256158 dockerd[1731]: time="2025-06-21T02:30:37.256074458Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 21 02:30:37.256183 dockerd[1731]: time="2025-06-21T02:30:37.256171064Z" level=info msg="Initializing buildkit"
Jun 21 02:30:37.276004 dockerd[1731]: time="2025-06-21T02:30:37.275970213Z" level=info msg="Completed buildkit initialization"
Jun 21 02:30:37.281661 dockerd[1731]: time="2025-06-21T02:30:37.281609783Z" level=info msg="Daemon has completed initialization"
Jun 21 02:30:37.281821 dockerd[1731]: time="2025-06-21T02:30:37.281722068Z" level=info msg="API listen on /run/docker.sock"
Jun 21 02:30:37.281915 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 21 02:30:37.776480 containerd[1503]: time="2025-06-21T02:30:37.776444647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 21 02:30:38.276022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601247958.mount: Deactivated successfully.
Jun 21 02:30:39.517613 containerd[1503]: time="2025-06-21T02:30:39.517532302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:39.517999 containerd[1503]: time="2025-06-21T02:30:39.517908268Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jun 21 02:30:39.518907 containerd[1503]: time="2025-06-21T02:30:39.518872728Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:39.521317 containerd[1503]: time="2025-06-21T02:30:39.521281239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:39.522416 containerd[1503]: time="2025-06-21T02:30:39.522384925Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.745900743s"
Jun 21 02:30:39.522455 containerd[1503]: time="2025-06-21T02:30:39.522420366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jun 21 02:30:39.525713 containerd[1503]: time="2025-06-21T02:30:39.525684830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 21 02:30:40.685298 containerd[1503]: time="2025-06-21T02:30:40.685243162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:40.685941 containerd[1503]: time="2025-06-21T02:30:40.685894488Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jun 21 02:30:40.686567 containerd[1503]: time="2025-06-21T02:30:40.686538741Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:40.689009 containerd[1503]: time="2025-06-21T02:30:40.688971310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:40.689784 containerd[1503]: time="2025-06-21T02:30:40.689744398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.164025786s"
Jun 21 02:30:40.689784 containerd[1503]: time="2025-06-21T02:30:40.689779914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jun 21 02:30:40.690295 containerd[1503]: time="2025-06-21T02:30:40.690272133Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 21 02:30:41.021047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 21 02:30:41.023441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:30:41.155586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:41.159790 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:30:41.193201 kubelet[2009]: E0621 02:30:41.193138 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:30:41.196298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:30:41.196438 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:30:41.196753 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.2M memory peak.
Jun 21 02:30:41.847597 containerd[1503]: time="2025-06-21T02:30:41.847549656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:41.848543 containerd[1503]: time="2025-06-21T02:30:41.848509888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jun 21 02:30:41.849197 containerd[1503]: time="2025-06-21T02:30:41.849148572Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:41.852131 containerd[1503]: time="2025-06-21T02:30:41.852094161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:41.853031 containerd[1503]: time="2025-06-21T02:30:41.852986044Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.162685261s"
Jun 21 02:30:41.853031 containerd[1503]: time="2025-06-21T02:30:41.853017575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jun 21 02:30:41.853417 containerd[1503]: time="2025-06-21T02:30:41.853395863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 21 02:30:42.794268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025097510.mount: Deactivated successfully.
Jun 21 02:30:43.019021 containerd[1503]: time="2025-06-21T02:30:43.018920862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:43.019781 containerd[1503]: time="2025-06-21T02:30:43.019741813Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jun 21 02:30:43.021079 containerd[1503]: time="2025-06-21T02:30:43.021042122Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:43.022784 containerd[1503]: time="2025-06-21T02:30:43.022745380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:43.023392 containerd[1503]: time="2025-06-21T02:30:43.023368837Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.169944056s"
Jun 21 02:30:43.023448 containerd[1503]: time="2025-06-21T02:30:43.023397108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jun 21 02:30:43.023967 containerd[1503]: time="2025-06-21T02:30:43.023938939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 21 02:30:43.713461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201538319.mount: Deactivated successfully.
Jun 21 02:30:44.818255 containerd[1503]: time="2025-06-21T02:30:44.818194247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:44.819189 containerd[1503]: time="2025-06-21T02:30:44.818815910Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jun 21 02:30:44.820264 containerd[1503]: time="2025-06-21T02:30:44.820238279Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:44.823484 containerd[1503]: time="2025-06-21T02:30:44.823444922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:44.824824 containerd[1503]: time="2025-06-21T02:30:44.824789044Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.800818683s"
Jun 21 02:30:44.824968 containerd[1503]: time="2025-06-21T02:30:44.824922468Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jun 21 02:30:44.825553 containerd[1503]: time="2025-06-21T02:30:44.825407520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 21 02:30:45.235530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734756134.mount: Deactivated successfully.
Jun 21 02:30:45.239972 containerd[1503]: time="2025-06-21T02:30:45.239923631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:30:45.240565 containerd[1503]: time="2025-06-21T02:30:45.240523524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jun 21 02:30:45.241147 containerd[1503]: time="2025-06-21T02:30:45.241114090Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:30:45.243101 containerd[1503]: time="2025-06-21T02:30:45.243069144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:30:45.243727 containerd[1503]: time="2025-06-21T02:30:45.243698175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 418.264914ms"
Jun 21 02:30:45.243765 containerd[1503]: time="2025-06-21T02:30:45.243728747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 21 02:30:45.244167 containerd[1503]: time="2025-06-21T02:30:45.244134243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 21 02:30:45.825737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314085249.mount: Deactivated successfully.
Jun 21 02:30:47.913386 containerd[1503]: time="2025-06-21T02:30:47.913330638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:47.914333 containerd[1503]: time="2025-06-21T02:30:47.913718516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jun 21 02:30:47.914843 containerd[1503]: time="2025-06-21T02:30:47.914789040Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:47.917864 containerd[1503]: time="2025-06-21T02:30:47.917812397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:30:47.918978 containerd[1503]: time="2025-06-21T02:30:47.918947348Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.674696627s"
Jun 21 02:30:47.919046 containerd[1503]: time="2025-06-21T02:30:47.918982334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jun 21 02:30:51.446916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 21 02:30:51.448351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:30:51.607515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:51.618972 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:30:51.652956 kubelet[2172]: E0621 02:30:51.652891 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:30:51.655604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:30:51.655753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:30:51.656128 systemd[1]: kubelet.service: Consumed 137ms CPU time, 105.9M memory peak.
Jun 21 02:30:52.168962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:52.169101 systemd[1]: kubelet.service: Consumed 137ms CPU time, 105.9M memory peak.
Jun 21 02:30:52.171128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:30:52.191940 systemd[1]: Reload requested from client PID 2187 ('systemctl') (unit session-7.scope)...
Jun 21 02:30:52.191957 systemd[1]: Reloading...
Jun 21 02:30:52.260659 zram_generator::config[2229]: No configuration found.
Jun 21 02:30:52.429045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 02:30:52.515180 systemd[1]: Reloading finished in 322 ms.
Jun 21 02:30:52.590148 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 21 02:30:52.590237 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 21 02:30:52.590487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:52.590540 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95M memory peak.
Jun 21 02:30:52.592232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:30:52.708582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:30:52.713249 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 21 02:30:52.747763 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 02:30:52.747763 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 21 02:30:52.747763 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 02:30:52.748102 kubelet[2274]: I0621 02:30:52.747812 2274 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 21 02:30:53.843236 kubelet[2274]: I0621 02:30:53.843185 2274 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 21 02:30:53.843236 kubelet[2274]: I0621 02:30:53.843221 2274 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 21 02:30:53.843656 kubelet[2274]: I0621 02:30:53.843448 2274 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 21 02:30:53.892862 kubelet[2274]: E0621 02:30:53.892804 2274 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 21 02:30:53.893310 kubelet[2274]: I0621 02:30:53.893297 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 21 02:30:53.901183 kubelet[2274]: I0621 02:30:53.901158 2274 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 21 02:30:53.903940 kubelet[2274]: I0621 02:30:53.903902 2274 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 21 02:30:53.904210 kubelet[2274]: I0621 02:30:53.904175 2274 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 21 02:30:53.904359 kubelet[2274]: I0621 02:30:53.904200 2274 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 21 02:30:53.904442 kubelet[2274]: I0621 02:30:53.904415 2274 topology_manager.go:138] "Creating topology manager with none policy"
Jun 21 02:30:53.904442 kubelet[2274]: I0621 02:30:53.904425 2274 container_manager_linux.go:303] "Creating device plugin manager"
Jun 21 02:30:53.904632 kubelet[2274]: I0621 02:30:53.904603 2274 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 02:30:53.907069 kubelet[2274]: I0621 02:30:53.907043 2274 kubelet.go:480] "Attempting to sync node with API server"
Jun 21 02:30:53.907069 kubelet[2274]: I0621 02:30:53.907067 2274 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 21 02:30:53.907143 kubelet[2274]: I0621 02:30:53.907101 2274 kubelet.go:386] "Adding apiserver pod source"
Jun 21 02:30:53.908106 kubelet[2274]: I0621 02:30:53.908083 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 21 02:30:53.909031 kubelet[2274]: I0621 02:30:53.908979 2274 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 21 02:30:53.909721 kubelet[2274]: I0621 02:30:53.909654 2274 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 21 02:30:53.909781 kubelet[2274]: W0621 02:30:53.909765 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 21 02:30:53.910219 kubelet[2274]: E0621 02:30:53.910185 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 21 02:30:53.910325 kubelet[2274]: E0621 02:30:53.910304 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 21 02:30:53.916217 kubelet[2274]: I0621 02:30:53.916182 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 21 02:30:53.916291 kubelet[2274]: I0621 02:30:53.916268 2274 server.go:1289] "Started kubelet"
Jun 21 02:30:53.916638 kubelet[2274]: I0621 02:30:53.916600 2274 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 21 02:30:53.916828 kubelet[2274]: I0621 02:30:53.916766 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 21 02:30:53.920398 kubelet[2274]: I0621 02:30:53.920373 2274 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 21 02:30:53.922596 kubelet[2274]: I0621 02:30:53.922477 2274 server.go:317] "Adding debug handlers to kubelet server"
Jun 21 02:30:53.922742 kubelet[2274]: I0621 02:30:53.922728 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 21 02:30:53.923611 kubelet[2274]: I0621 02:30:53.923577 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 21 02:30:53.924458 kubelet[2274]: E0621 02:30:53.923032 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184aedf7d7918f4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-21 02:30:53.916204874 +0000 UTC m=+1.199366966,LastTimestamp:2025-06-21 02:30:53.916204874 +0000 UTC m=+1.199366966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jun 21 02:30:53.924727 kubelet[2274]: E0621 02:30:53.924514 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 21 02:30:53.924727 kubelet[2274]: I0621 02:30:53.924542 2274 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 21 02:30:53.925129 kubelet[2274]: I0621 02:30:53.924774 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 21 02:30:53.925129 kubelet[2274]: I0621 02:30:53.924868 2274 reconciler.go:26] "Reconciler: start to sync state"
Jun 21 02:30:53.925243 kubelet[2274]: E0621 02:30:53.925204 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms"
Jun 21 02:30:53.925367 kubelet[2274]: E0621 02:30:53.925332 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 21 02:30:53.925596 kubelet[2274]: E0621 02:30:53.925564 2274 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 21 02:30:53.926702 kubelet[2274]: I0621 02:30:53.926675 2274 factory.go:223] Registration of the containerd container factory successfully
Jun 21 02:30:53.926702 kubelet[2274]: I0621 02:30:53.926694 2274 factory.go:223] Registration of the systemd container factory successfully
Jun 21 02:30:53.926799 kubelet[2274]: I0621 02:30:53.926770 2274 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 21 02:30:53.929755 kubelet[2274]: I0621 02:30:53.929718 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 21 02:30:53.936033 kubelet[2274]: I0621 02:30:53.936006 2274 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 21 02:30:53.936141 kubelet[2274]: I0621 02:30:53.936129 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 21 02:30:53.936198 kubelet[2274]: I0621 02:30:53.936190 2274 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 02:30:54.025317 kubelet[2274]: E0621 02:30:54.025259 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 21 02:30:54.125832 kubelet[2274]: E0621 02:30:54.125737 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 21 02:30:54.126821 kubelet[2274]: E0621 02:30:54.125925 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms"
Jun 21 02:30:54.151465 kubelet[2274]: I0621 02:30:54.151436 2274 policy_none.go:49] "None policy: Start"
Jun 21 02:30:54.151465 kubelet[2274]: I0621 02:30:54.151468 2274 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 21 02:30:54.151591 kubelet[2274]: I0621 02:30:54.151480 2274 state_mem.go:35] "Initializing new in-memory state store"
Jun 21 02:30:54.157486 kubelet[2274]: I0621 02:30:54.157445 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 21 02:30:54.157486 kubelet[2274]: I0621 02:30:54.157476 2274 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 21 02:30:54.157486 kubelet[2274]: I0621 02:30:54.157496 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 21 02:30:54.157486 kubelet[2274]: I0621 02:30:54.157502 2274 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 21 02:30:54.157486 kubelet[2274]: E0621 02:30:54.157545 2274 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 21 02:30:54.158123 kubelet[2274]: E0621 02:30:54.157945 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 21 02:30:54.160384 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 21 02:30:54.175282 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 21 02:30:54.178948 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 21 02:30:54.188511 kubelet[2274]: E0621 02:30:54.188460 2274 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 02:30:54.188812 kubelet[2274]: I0621 02:30:54.188732 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:30:54.188812 kubelet[2274]: I0621 02:30:54.188750 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:30:54.188999 kubelet[2274]: I0621 02:30:54.188980 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:30:54.189665 kubelet[2274]: E0621 02:30:54.189621 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 02:30:54.189734 kubelet[2274]: E0621 02:30:54.189691 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 21 02:30:54.270185 systemd[1]: Created slice kubepods-burstable-pod1a27182cceea47f8cbac874b7b4ee862.slice - libcontainer container kubepods-burstable-pod1a27182cceea47f8cbac874b7b4ee862.slice. 
Jun 21 02:30:54.290436 kubelet[2274]: I0621 02:30:54.290302 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:30:54.290767 kubelet[2274]: E0621 02:30:54.290741 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jun 21 02:30:54.291319 kubelet[2274]: E0621 02:30:54.291115 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:54.293825 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jun 21 02:30:54.307899 kubelet[2274]: E0621 02:30:54.307867 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:54.310334 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jun 21 02:30:54.312205 kubelet[2274]: E0621 02:30:54.312024 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:54.326218 kubelet[2274]: I0621 02:30:54.326183 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:30:54.426909 kubelet[2274]: I0621 02:30:54.426749 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:54.427221 kubelet[2274]: I0621 02:30:54.426813 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:30:54.427221 kubelet[2274]: I0621 02:30:54.427093 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:54.427221 kubelet[2274]: I0621 02:30:54.427116 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:30:54.427221 kubelet[2274]: I0621 02:30:54.427132 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:30:54.427221 kubelet[2274]: I0621 02:30:54.427147 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:54.427383 kubelet[2274]: I0621 02:30:54.427163 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:54.427383 kubelet[2274]: I0621 02:30:54.427180 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:54.493047 kubelet[2274]: I0621 02:30:54.492999 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:30:54.493397 kubelet[2274]: E0621 
02:30:54.493367 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jun 21 02:30:54.526975 kubelet[2274]: E0621 02:30:54.526934 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Jun 21 02:30:54.593141 containerd[1503]: time="2025-06-21T02:30:54.593099552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a27182cceea47f8cbac874b7b4ee862,Namespace:kube-system,Attempt:0,}" Jun 21 02:30:54.609881 containerd[1503]: time="2025-06-21T02:30:54.609837681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jun 21 02:30:54.612320 containerd[1503]: time="2025-06-21T02:30:54.612257572Z" level=info msg="connecting to shim 896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3" address="unix:///run/containerd/s/cc0d793c5963794484e78a38cbfa3f7428f10faaca7d7911473403767960b11a" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:30:54.613599 containerd[1503]: time="2025-06-21T02:30:54.613560640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jun 21 02:30:54.637880 systemd[1]: Started cri-containerd-896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3.scope - libcontainer container 896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3. 
Jun 21 02:30:54.647079 containerd[1503]: time="2025-06-21T02:30:54.646992386Z" level=info msg="connecting to shim 184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9" address="unix:///run/containerd/s/572919989acc3675bab55cc8eecd7f3197f3d98610c9737426a5b0ebcf993022" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:30:54.647705 containerd[1503]: time="2025-06-21T02:30:54.647276646Z" level=info msg="connecting to shim c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b" address="unix:///run/containerd/s/feef6d3c49bed3ce02c0c12a3a217ad206525c7c4595505148d9d42921b03a6c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:30:54.672859 systemd[1]: Started cri-containerd-184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9.scope - libcontainer container 184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9. Jun 21 02:30:54.676409 systemd[1]: Started cri-containerd-c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b.scope - libcontainer container c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b. 
Jun 21 02:30:54.683904 containerd[1503]: time="2025-06-21T02:30:54.683756701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a27182cceea47f8cbac874b7b4ee862,Namespace:kube-system,Attempt:0,} returns sandbox id \"896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3\"" Jun 21 02:30:54.690864 containerd[1503]: time="2025-06-21T02:30:54.690732793Z" level=info msg="CreateContainer within sandbox \"896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 02:30:54.703188 containerd[1503]: time="2025-06-21T02:30:54.703150354Z" level=info msg="Container 877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:30:54.712311 containerd[1503]: time="2025-06-21T02:30:54.712149635Z" level=info msg="CreateContainer within sandbox \"896c3c09023e990b6b02ed49b60aaf38b7e82a5e69f955a78bf127a8642f65a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395\"" Jun 21 02:30:54.713835 containerd[1503]: time="2025-06-21T02:30:54.713799896Z" level=info msg="StartContainer for \"877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395\"" Jun 21 02:30:54.715336 containerd[1503]: time="2025-06-21T02:30:54.715304152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9\"" Jun 21 02:30:54.715423 containerd[1503]: time="2025-06-21T02:30:54.715405046Z" level=info msg="connecting to shim 877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395" address="unix:///run/containerd/s/cc0d793c5963794484e78a38cbfa3f7428f10faaca7d7911473403767960b11a" protocol=ttrpc version=3 Jun 21 02:30:54.720400 containerd[1503]: 
time="2025-06-21T02:30:54.720359667Z" level=info msg="CreateContainer within sandbox \"184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 02:30:54.726076 containerd[1503]: time="2025-06-21T02:30:54.725976950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b\"" Jun 21 02:30:54.730881 containerd[1503]: time="2025-06-21T02:30:54.730830358Z" level=info msg="CreateContainer within sandbox \"c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 02:30:54.730983 containerd[1503]: time="2025-06-21T02:30:54.730949712Z" level=info msg="Container 8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:30:54.739410 containerd[1503]: time="2025-06-21T02:30:54.739357336Z" level=info msg="Container 685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:30:54.739535 containerd[1503]: time="2025-06-21T02:30:54.739367206Z" level=info msg="CreateContainer within sandbox \"184b874dc57139a129f4428fee0dc85ae6961026bbe6eae0bbcefff9cb444bb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740\"" Jun 21 02:30:54.740066 systemd[1]: Started cri-containerd-877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395.scope - libcontainer container 877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395. 
Jun 21 02:30:54.740466 containerd[1503]: time="2025-06-21T02:30:54.740431965Z" level=info msg="StartContainer for \"8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740\"" Jun 21 02:30:54.741812 containerd[1503]: time="2025-06-21T02:30:54.741718609Z" level=info msg="connecting to shim 8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740" address="unix:///run/containerd/s/572919989acc3675bab55cc8eecd7f3197f3d98610c9737426a5b0ebcf993022" protocol=ttrpc version=3 Jun 21 02:30:54.747483 containerd[1503]: time="2025-06-21T02:30:54.747425638Z" level=info msg="CreateContainer within sandbox \"c811c92046702627a1cb8e528f2352c64100aa7e9d72d911887df674d1add23b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a\"" Jun 21 02:30:54.748256 containerd[1503]: time="2025-06-21T02:30:54.748215246Z" level=info msg="StartContainer for \"685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a\"" Jun 21 02:30:54.751276 containerd[1503]: time="2025-06-21T02:30:54.751224836Z" level=info msg="connecting to shim 685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a" address="unix:///run/containerd/s/feef6d3c49bed3ce02c0c12a3a217ad206525c7c4595505148d9d42921b03a6c" protocol=ttrpc version=3 Jun 21 02:30:54.766850 systemd[1]: Started cri-containerd-8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740.scope - libcontainer container 8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740. Jun 21 02:30:54.771146 systemd[1]: Started cri-containerd-685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a.scope - libcontainer container 685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a. 
Jun 21 02:30:54.791740 containerd[1503]: time="2025-06-21T02:30:54.791537015Z" level=info msg="StartContainer for \"877849f358ac8427045bc6006ebf25ff065a07bc8df85f17e6604b9fb5de6395\" returns successfully" Jun 21 02:30:54.849837 containerd[1503]: time="2025-06-21T02:30:54.849767081Z" level=info msg="StartContainer for \"8c8f52fc7555e309ee5ea9020a94e0b185c68afff6eadd3b1dbdb2fadd8b3740\" returns successfully" Jun 21 02:30:54.851987 containerd[1503]: time="2025-06-21T02:30:54.851956534Z" level=info msg="StartContainer for \"685ec871d312e7735db9a534f6b77e243f1343b95ecf986b2a638e8bd97b359a\" returns successfully" Jun 21 02:30:54.900703 kubelet[2274]: E0621 02:30:54.899915 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 21 02:30:54.907681 kubelet[2274]: I0621 02:30:54.907441 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:30:54.907878 kubelet[2274]: E0621 02:30:54.907853 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jun 21 02:30:54.966574 kubelet[2274]: E0621 02:30:54.966313 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 21 02:30:55.167756 kubelet[2274]: E0621 02:30:55.167675 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jun 21 02:30:55.171669 kubelet[2274]: E0621 02:30:55.170194 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:55.172092 kubelet[2274]: E0621 02:30:55.172071 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:55.710018 kubelet[2274]: I0621 02:30:55.709985 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:30:56.174470 kubelet[2274]: E0621 02:30:56.173921 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:56.174470 kubelet[2274]: E0621 02:30:56.174236 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:30:57.575388 kubelet[2274]: E0621 02:30:57.575338 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 21 02:30:57.652334 kubelet[2274]: I0621 02:30:57.652185 2274 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 02:30:57.652334 kubelet[2274]: E0621 02:30:57.652224 2274 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 21 02:30:57.725740 kubelet[2274]: I0621 02:30:57.725694 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:30:57.731540 kubelet[2274]: E0621 02:30:57.731482 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 
21 02:30:57.731540 kubelet[2274]: I0621 02:30:57.731527 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:57.733696 kubelet[2274]: E0621 02:30:57.733405 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:30:57.733696 kubelet[2274]: I0621 02:30:57.733431 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 02:30:57.735011 kubelet[2274]: E0621 02:30:57.734974 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jun 21 02:30:57.910683 kubelet[2274]: I0621 02:30:57.910293 2274 apiserver.go:52] "Watching apiserver" Jun 21 02:30:57.925904 kubelet[2274]: I0621 02:30:57.925852 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 02:30:59.674403 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... Jun 21 02:30:59.674730 systemd[1]: Reloading... Jun 21 02:30:59.746671 zram_generator::config[2601]: No configuration found. Jun 21 02:30:59.813370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:30:59.909503 systemd[1]: Reloading finished in 234 ms. Jun 21 02:30:59.927689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:30:59.942475 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 02:30:59.942757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 02:30:59.942816 systemd[1]: kubelet.service: Consumed 1.648s CPU time, 129M memory peak. Jun 21 02:30:59.944461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:31:00.069274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:31:00.072644 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 02:31:00.107284 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:31:00.107284 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 02:31:00.107284 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 21 02:31:00.107621 kubelet[2643]: I0621 02:31:00.107298 2643 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 02:31:00.113202 kubelet[2643]: I0621 02:31:00.113129 2643 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 21 02:31:00.113202 kubelet[2643]: I0621 02:31:00.113155 2643 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 02:31:00.113579 kubelet[2643]: I0621 02:31:00.113382 2643 server.go:956] "Client rotation is on, will bootstrap in background" Jun 21 02:31:00.114780 kubelet[2643]: I0621 02:31:00.114704 2643 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 21 02:31:00.119108 kubelet[2643]: I0621 02:31:00.119066 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 02:31:00.122772 kubelet[2643]: I0621 02:31:00.122755 2643 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 02:31:00.125408 kubelet[2643]: I0621 02:31:00.125353 2643 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 02:31:00.125569 kubelet[2643]: I0621 02:31:00.125547 2643 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 02:31:00.125726 kubelet[2643]: I0621 02:31:00.125571 2643 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 02:31:00.125825 kubelet[2643]: I0621 02:31:00.125736 2643 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 02:31:00.125825 
kubelet[2643]: I0621 02:31:00.125746 2643 container_manager_linux.go:303] "Creating device plugin manager" Jun 21 02:31:00.125825 kubelet[2643]: I0621 02:31:00.125783 2643 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:31:00.125930 kubelet[2643]: I0621 02:31:00.125919 2643 kubelet.go:480] "Attempting to sync node with API server" Jun 21 02:31:00.125961 kubelet[2643]: I0621 02:31:00.125936 2643 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 02:31:00.125961 kubelet[2643]: I0621 02:31:00.125958 2643 kubelet.go:386] "Adding apiserver pod source" Jun 21 02:31:00.125961 kubelet[2643]: I0621 02:31:00.125971 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 02:31:00.127440 kubelet[2643]: I0621 02:31:00.127370 2643 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 02:31:00.128248 kubelet[2643]: I0621 02:31:00.128231 2643 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 21 02:31:00.132167 kubelet[2643]: I0621 02:31:00.132146 2643 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 02:31:00.133099 kubelet[2643]: I0621 02:31:00.132672 2643 server.go:1289] "Started kubelet" Jun 21 02:31:00.135934 kubelet[2643]: I0621 02:31:00.135908 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 02:31:00.136252 kubelet[2643]: I0621 02:31:00.133171 2643 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 02:31:00.137198 kubelet[2643]: I0621 02:31:00.137178 2643 server.go:317] "Adding debug handlers to kubelet server" Jun 21 02:31:00.140771 kubelet[2643]: I0621 02:31:00.140691 2643 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 02:31:00.140822 kubelet[2643]: I0621 02:31:00.140774 2643 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 02:31:00.141004 kubelet[2643]: E0621 02:31:00.140880 2643 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 02:31:00.141088 kubelet[2643]: I0621 02:31:00.141061 2643 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 02:31:00.141202 kubelet[2643]: I0621 02:31:00.141188 2643 reconciler.go:26] "Reconciler: start to sync state" Jun 21 02:31:00.142948 kubelet[2643]: I0621 02:31:00.142904 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 02:31:00.143192 kubelet[2643]: I0621 02:31:00.143176 2643 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 02:31:00.145945 kubelet[2643]: I0621 02:31:00.145921 2643 factory.go:223] Registration of the systemd container factory successfully Jun 21 02:31:00.146039 kubelet[2643]: I0621 02:31:00.146018 2643 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 02:31:00.148934 kubelet[2643]: I0621 02:31:00.148889 2643 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 21 02:31:00.152636 kubelet[2643]: I0621 02:31:00.152601 2643 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 21 02:31:00.152636 kubelet[2643]: I0621 02:31:00.152635 2643 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 21 02:31:00.152737 kubelet[2643]: I0621 02:31:00.152654 2643 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 02:31:00.152737 kubelet[2643]: I0621 02:31:00.152666 2643 kubelet.go:2436] "Starting kubelet main sync loop" Jun 21 02:31:00.152737 kubelet[2643]: E0621 02:31:00.152706 2643 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 02:31:00.159422 kubelet[2643]: E0621 02:31:00.159202 2643 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 02:31:00.159513 kubelet[2643]: I0621 02:31:00.159499 2643 factory.go:223] Registration of the containerd container factory successfully Jun 21 02:31:00.189304 kubelet[2643]: I0621 02:31:00.189214 2643 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 02:31:00.189304 kubelet[2643]: I0621 02:31:00.189262 2643 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 02:31:00.189304 kubelet[2643]: I0621 02:31:00.189285 2643 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:31:00.189433 kubelet[2643]: I0621 02:31:00.189400 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 02:31:00.189433 kubelet[2643]: I0621 02:31:00.189409 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 02:31:00.189433 kubelet[2643]: I0621 02:31:00.189425 2643 policy_none.go:49] "None policy: Start" Jun 21 02:31:00.189433 kubelet[2643]: I0621 02:31:00.189434 2643 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 02:31:00.189508 kubelet[2643]: I0621 02:31:00.189442 2643 state_mem.go:35] "Initializing new in-memory state store" Jun 21 02:31:00.189531 kubelet[2643]: I0621 02:31:00.189517 2643 state_mem.go:75] "Updated machine memory state" Jun 21 02:31:00.193643 kubelet[2643]: E0621 02:31:00.193603 2643 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 02:31:00.194531 
kubelet[2643]: I0621 02:31:00.194443 2643 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:31:00.194531 kubelet[2643]: I0621 02:31:00.194460 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:31:00.194773 kubelet[2643]: I0621 02:31:00.194757 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:31:00.196064 kubelet[2643]: E0621 02:31:00.195979 2643 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 02:31:00.253748 kubelet[2643]: I0621 02:31:00.253714 2643 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:00.254004 kubelet[2643]: I0621 02:31:00.253750 2643 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.254122 kubelet[2643]: I0621 02:31:00.253783 2643 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 02:31:00.296649 kubelet[2643]: I0621 02:31:00.296607 2643 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:31:00.302570 kubelet[2643]: I0621 02:31:00.302542 2643 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 21 02:31:00.302679 kubelet[2643]: I0621 02:31:00.302610 2643 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 02:31:00.342523 kubelet[2643]: I0621 02:31:00.342474 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.342523 kubelet[2643]: I0621 
02:31:00.342508 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.342523 kubelet[2643]: I0621 02:31:00.342530 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.342760 kubelet[2643]: I0621 02:31:00.342549 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:00.342760 kubelet[2643]: I0621 02:31:00.342565 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.342760 kubelet[2643]: I0621 02:31:00.342581 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:31:00.342760 
kubelet[2643]: I0621 02:31:00.342595 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:31:00.342760 kubelet[2643]: I0621 02:31:00.342609 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:00.342898 kubelet[2643]: I0621 02:31:00.342651 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a27182cceea47f8cbac874b7b4ee862-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a27182cceea47f8cbac874b7b4ee862\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:01.127201 kubelet[2643]: I0621 02:31:01.127171 2643 apiserver.go:52] "Watching apiserver" Jun 21 02:31:01.141830 kubelet[2643]: I0621 02:31:01.141778 2643 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 02:31:01.173647 kubelet[2643]: I0621 02:31:01.173561 2643 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:01.173782 kubelet[2643]: I0621 02:31:01.173756 2643 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 02:31:01.178895 kubelet[2643]: E0621 02:31:01.178800 2643 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 21 02:31:01.178895 kubelet[2643]: E0621 
02:31:01.178814 2643 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 21 02:31:01.191546 kubelet[2643]: I0621 02:31:01.191451 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.191399248 podStartE2EDuration="1.191399248s" podCreationTimestamp="2025-06-21 02:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:01.189785766 +0000 UTC m=+1.113254921" watchObservedRunningTime="2025-06-21 02:31:01.191399248 +0000 UTC m=+1.114868363" Jun 21 02:31:01.205105 kubelet[2643]: I0621 02:31:01.205047 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.205030024 podStartE2EDuration="1.205030024s" podCreationTimestamp="2025-06-21 02:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:01.197515641 +0000 UTC m=+1.120984796" watchObservedRunningTime="2025-06-21 02:31:01.205030024 +0000 UTC m=+1.128499179" Jun 21 02:31:01.205284 kubelet[2643]: I0621 02:31:01.205134 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.205128669 podStartE2EDuration="1.205128669s" podCreationTimestamp="2025-06-21 02:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:01.204688687 +0000 UTC m=+1.128157802" watchObservedRunningTime="2025-06-21 02:31:01.205128669 +0000 UTC m=+1.128597824" Jun 21 02:31:05.828709 kubelet[2643]: I0621 02:31:05.828676 2643 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Jun 21 02:31:05.829088 containerd[1503]: time="2025-06-21T02:31:05.829039662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 02:31:05.829358 kubelet[2643]: I0621 02:31:05.829256 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 02:31:06.716617 systemd[1]: Created slice kubepods-besteffort-podd228e315_acd6_495a_8d56_a99a3b0e386d.slice - libcontainer container kubepods-besteffort-podd228e315_acd6_495a_8d56_a99a3b0e386d.slice. Jun 21 02:31:06.785614 kubelet[2643]: I0621 02:31:06.785517 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d228e315-acd6-495a-8d56-a99a3b0e386d-lib-modules\") pod \"kube-proxy-kbnv9\" (UID: \"d228e315-acd6-495a-8d56-a99a3b0e386d\") " pod="kube-system/kube-proxy-kbnv9" Jun 21 02:31:06.785614 kubelet[2643]: I0621 02:31:06.785585 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qd5q\" (UniqueName: \"kubernetes.io/projected/d228e315-acd6-495a-8d56-a99a3b0e386d-kube-api-access-8qd5q\") pod \"kube-proxy-kbnv9\" (UID: \"d228e315-acd6-495a-8d56-a99a3b0e386d\") " pod="kube-system/kube-proxy-kbnv9" Jun 21 02:31:06.785614 kubelet[2643]: I0621 02:31:06.785613 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d228e315-acd6-495a-8d56-a99a3b0e386d-xtables-lock\") pod \"kube-proxy-kbnv9\" (UID: \"d228e315-acd6-495a-8d56-a99a3b0e386d\") " pod="kube-system/kube-proxy-kbnv9" Jun 21 02:31:06.785833 kubelet[2643]: I0621 02:31:06.785695 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d228e315-acd6-495a-8d56-a99a3b0e386d-kube-proxy\") pod 
\"kube-proxy-kbnv9\" (UID: \"d228e315-acd6-495a-8d56-a99a3b0e386d\") " pod="kube-system/kube-proxy-kbnv9" Jun 21 02:31:07.026047 containerd[1503]: time="2025-06-21T02:31:07.025857475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbnv9,Uid:d228e315-acd6-495a-8d56-a99a3b0e386d,Namespace:kube-system,Attempt:0,}" Jun 21 02:31:07.028376 systemd[1]: Created slice kubepods-besteffort-pod44bcb198_0b26_40d0_9de1_f23fcad742a7.slice - libcontainer container kubepods-besteffort-pod44bcb198_0b26_40d0_9de1_f23fcad742a7.slice. Jun 21 02:31:07.043915 containerd[1503]: time="2025-06-21T02:31:07.043867212Z" level=info msg="connecting to shim 316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d" address="unix:///run/containerd/s/284c4b01d1f5ecc6702c9ab48f98ff17a9278c033832c6729853a80ca337b169" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:07.071768 systemd[1]: Started cri-containerd-316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d.scope - libcontainer container 316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d. 
Jun 21 02:31:07.087749 kubelet[2643]: I0621 02:31:07.087699 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfs9h\" (UniqueName: \"kubernetes.io/projected/44bcb198-0b26-40d0-9de1-f23fcad742a7-kube-api-access-rfs9h\") pod \"tigera-operator-68f7c7984d-24vkm\" (UID: \"44bcb198-0b26-40d0-9de1-f23fcad742a7\") " pod="tigera-operator/tigera-operator-68f7c7984d-24vkm" Jun 21 02:31:07.088029 kubelet[2643]: I0621 02:31:07.087767 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44bcb198-0b26-40d0-9de1-f23fcad742a7-var-lib-calico\") pod \"tigera-operator-68f7c7984d-24vkm\" (UID: \"44bcb198-0b26-40d0-9de1-f23fcad742a7\") " pod="tigera-operator/tigera-operator-68f7c7984d-24vkm" Jun 21 02:31:07.091558 containerd[1503]: time="2025-06-21T02:31:07.091525231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbnv9,Uid:d228e315-acd6-495a-8d56-a99a3b0e386d,Namespace:kube-system,Attempt:0,} returns sandbox id \"316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d\"" Jun 21 02:31:07.096785 containerd[1503]: time="2025-06-21T02:31:07.096724540Z" level=info msg="CreateContainer within sandbox \"316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 02:31:07.105236 containerd[1503]: time="2025-06-21T02:31:07.105204730Z" level=info msg="Container 1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:07.112354 containerd[1503]: time="2025-06-21T02:31:07.112314589Z" level=info msg="CreateContainer within sandbox \"316be73a5a26eb3536097816037363270fd08952ee012c8aad6200202c94507d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338\"" Jun 21 
02:31:07.112896 containerd[1503]: time="2025-06-21T02:31:07.112861609Z" level=info msg="StartContainer for \"1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338\"" Jun 21 02:31:07.114126 containerd[1503]: time="2025-06-21T02:31:07.114102054Z" level=info msg="connecting to shim 1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338" address="unix:///run/containerd/s/284c4b01d1f5ecc6702c9ab48f98ff17a9278c033832c6729853a80ca337b169" protocol=ttrpc version=3 Jun 21 02:31:07.131784 systemd[1]: Started cri-containerd-1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338.scope - libcontainer container 1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338. Jun 21 02:31:07.169722 containerd[1503]: time="2025-06-21T02:31:07.169685563Z" level=info msg="StartContainer for \"1602c5de64f8803b03baf677f61cf8e5cda5c9623f0e7eacbe74a7bc50971338\" returns successfully" Jun 21 02:31:07.331652 containerd[1503]: time="2025-06-21T02:31:07.331501947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-24vkm,Uid:44bcb198-0b26-40d0-9de1-f23fcad742a7,Namespace:tigera-operator,Attempt:0,}" Jun 21 02:31:07.348578 containerd[1503]: time="2025-06-21T02:31:07.348455206Z" level=info msg="connecting to shim e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc" address="unix:///run/containerd/s/801590042653bba5e67f154992fd8de84d1037fc9f5eb65d4279a47af1ef876e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:07.376620 systemd[1]: Started cri-containerd-e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc.scope - libcontainer container e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc. 
Jun 21 02:31:07.410142 containerd[1503]: time="2025-06-21T02:31:07.410104735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-24vkm,Uid:44bcb198-0b26-40d0-9de1-f23fcad742a7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc\"" Jun 21 02:31:07.412370 containerd[1503]: time="2025-06-21T02:31:07.412055646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 02:31:08.502952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292448992.mount: Deactivated successfully. Jun 21 02:31:08.888072 kubelet[2643]: I0621 02:31:08.888015 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbnv9" podStartSLOduration=2.888001712 podStartE2EDuration="2.888001712s" podCreationTimestamp="2025-06-21 02:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:07.194807879 +0000 UTC m=+7.118277034" watchObservedRunningTime="2025-06-21 02:31:08.888001712 +0000 UTC m=+8.811470827" Jun 21 02:31:09.369483 containerd[1503]: time="2025-06-21T02:31:09.369217439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:09.370287 containerd[1503]: time="2025-06-21T02:31:09.370056306Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 21 02:31:09.370910 containerd[1503]: time="2025-06-21T02:31:09.370875053Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:09.382835 containerd[1503]: time="2025-06-21T02:31:09.382794043Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:09.384213 containerd[1503]: time="2025-06-21T02:31:09.383874679Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 1.971319175s" Jun 21 02:31:09.384213 containerd[1503]: time="2025-06-21T02:31:09.383909640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 21 02:31:09.390580 containerd[1503]: time="2025-06-21T02:31:09.390551137Z" level=info msg="CreateContainer within sandbox \"e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 02:31:09.403687 containerd[1503]: time="2025-06-21T02:31:09.403640926Z" level=info msg="Container 34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:09.409596 containerd[1503]: time="2025-06-21T02:31:09.409546640Z" level=info msg="CreateContainer within sandbox \"e9f3036ded1da18c51a3ce6e21fffc1b7dcf7e0633ec3699a413e1c8a581c9cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35\"" Jun 21 02:31:09.410173 containerd[1503]: time="2025-06-21T02:31:09.410099498Z" level=info msg="StartContainer for \"34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35\"" Jun 21 02:31:09.411133 containerd[1503]: time="2025-06-21T02:31:09.411093330Z" level=info msg="connecting to shim 
34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35" address="unix:///run/containerd/s/801590042653bba5e67f154992fd8de84d1037fc9f5eb65d4279a47af1ef876e" protocol=ttrpc version=3 Jun 21 02:31:09.431777 systemd[1]: Started cri-containerd-34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35.scope - libcontainer container 34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35. Jun 21 02:31:09.459227 containerd[1503]: time="2025-06-21T02:31:09.459135104Z" level=info msg="StartContainer for \"34c07be779141763c109d68e934f77034c551834e4947f9a4b0e5ee35994ff35\" returns successfully" Jun 21 02:31:14.461903 update_engine[1493]: I20250621 02:31:14.461836 1493 update_attempter.cc:509] Updating boot flags... Jun 21 02:31:14.812167 sudo[1711]: pam_unix(sudo:session): session closed for user root Jun 21 02:31:14.816086 sshd[1710]: Connection closed by 10.0.0.1 port 36700 Jun 21 02:31:14.817227 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:14.822747 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Jun 21 02:31:14.823892 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:36700.service: Deactivated successfully. Jun 21 02:31:14.826249 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 02:31:14.826556 systemd[1]: session-7.scope: Consumed 6.385s CPU time, 228M memory peak. Jun 21 02:31:14.828667 systemd-logind[1489]: Removed session 7. 
Jun 21 02:31:21.035455 kubelet[2643]: I0621 02:31:21.035377 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-24vkm" podStartSLOduration=13.05902033 podStartE2EDuration="15.035361873s" podCreationTimestamp="2025-06-21 02:31:06 +0000 UTC" firstStartedPulling="2025-06-21 02:31:07.411406183 +0000 UTC m=+7.334875338" lastFinishedPulling="2025-06-21 02:31:09.387747766 +0000 UTC m=+9.311216881" observedRunningTime="2025-06-21 02:31:10.200545972 +0000 UTC m=+10.124015127" watchObservedRunningTime="2025-06-21 02:31:21.035361873 +0000 UTC m=+20.958831028" Jun 21 02:31:21.060425 systemd[1]: Created slice kubepods-besteffort-pod886018f3_35a7_4291_afec_98a3bf993f43.slice - libcontainer container kubepods-besteffort-pod886018f3_35a7_4291_afec_98a3bf993f43.slice. Jun 21 02:31:21.085579 kubelet[2643]: I0621 02:31:21.085533 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/886018f3-35a7-4291-afec-98a3bf993f43-typha-certs\") pod \"calico-typha-7976dd4b49-n988v\" (UID: \"886018f3-35a7-4291-afec-98a3bf993f43\") " pod="calico-system/calico-typha-7976dd4b49-n988v" Jun 21 02:31:21.085579 kubelet[2643]: I0621 02:31:21.085583 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbf7q\" (UniqueName: \"kubernetes.io/projected/886018f3-35a7-4291-afec-98a3bf993f43-kube-api-access-nbf7q\") pod \"calico-typha-7976dd4b49-n988v\" (UID: \"886018f3-35a7-4291-afec-98a3bf993f43\") " pod="calico-system/calico-typha-7976dd4b49-n988v" Jun 21 02:31:21.085796 kubelet[2643]: I0621 02:31:21.085616 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/886018f3-35a7-4291-afec-98a3bf993f43-tigera-ca-bundle\") pod \"calico-typha-7976dd4b49-n988v\" (UID: 
\"886018f3-35a7-4291-afec-98a3bf993f43\") " pod="calico-system/calico-typha-7976dd4b49-n988v" Jun 21 02:31:21.333668 systemd[1]: Created slice kubepods-besteffort-podbbf8c276_88cf_4e10_bd06_eed49d356b4a.slice - libcontainer container kubepods-besteffort-podbbf8c276_88cf_4e10_bd06_eed49d356b4a.slice. Jun 21 02:31:21.364517 containerd[1503]: time="2025-06-21T02:31:21.364237605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7976dd4b49-n988v,Uid:886018f3-35a7-4291-afec-98a3bf993f43,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:21.388383 kubelet[2643]: I0621 02:31:21.388313 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-cni-net-dir\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.388383 kubelet[2643]: I0621 02:31:21.388364 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-xtables-lock\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.388998 kubelet[2643]: I0621 02:31:21.388948 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-flexvol-driver-host\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389162 kubelet[2643]: I0621 02:31:21.389092 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-policysync\") pod \"calico-node-l89ws\" 
(UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389162 kubelet[2643]: I0621 02:31:21.389124 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-lib-modules\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389895 kubelet[2643]: I0621 02:31:21.389143 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllsw\" (UniqueName: \"kubernetes.io/projected/bbf8c276-88cf-4e10-bd06-eed49d356b4a-kube-api-access-dllsw\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389895 kubelet[2643]: I0621 02:31:21.389378 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bbf8c276-88cf-4e10-bd06-eed49d356b4a-node-certs\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389895 kubelet[2643]: I0621 02:31:21.389395 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-var-lib-calico\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389895 kubelet[2643]: I0621 02:31:21.389804 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-cni-bin-dir\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " 
pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.389895 kubelet[2643]: I0621 02:31:21.389829 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-cni-log-dir\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.390141 kubelet[2643]: I0621 02:31:21.389846 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbf8c276-88cf-4e10-bd06-eed49d356b4a-tigera-ca-bundle\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.390141 kubelet[2643]: I0621 02:31:21.390095 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bbf8c276-88cf-4e10-bd06-eed49d356b4a-var-run-calico\") pod \"calico-node-l89ws\" (UID: \"bbf8c276-88cf-4e10-bd06-eed49d356b4a\") " pod="calico-system/calico-node-l89ws" Jun 21 02:31:21.403375 containerd[1503]: time="2025-06-21T02:31:21.403337755Z" level=info msg="connecting to shim 189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865" address="unix:///run/containerd/s/1f6b5e364add994e65d4eced5f4492815659cc9d0705565660e412acdcc9a0e3" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:21.471814 systemd[1]: Started cri-containerd-189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865.scope - libcontainer container 189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865. 
Jun 21 02:31:21.496058 kubelet[2643]: E0621 02:31:21.496020 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.496417 kubelet[2643]: W0621 02:31:21.496149 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.498407 kubelet[2643]: E0621 02:31:21.498110 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.505146 kubelet[2643]: E0621 02:31:21.505116 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.505379 kubelet[2643]: W0621 02:31:21.505315 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.505379 kubelet[2643]: E0621 02:31:21.505341 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.508594 containerd[1503]: time="2025-06-21T02:31:21.508553865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7976dd4b49-n988v,Uid:886018f3-35a7-4291-afec-98a3bf993f43,Namespace:calico-system,Attempt:0,} returns sandbox id \"189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865\"" Jun 21 02:31:21.514734 containerd[1503]: time="2025-06-21T02:31:21.514668576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 02:31:21.567674 kubelet[2643]: E0621 02:31:21.567611 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vk5dt" podUID="0f3a877e-df8f-466c-a544-3a7180344d8d" Jun 21 02:31:21.571593 kubelet[2643]: E0621 02:31:21.571531 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.571593 kubelet[2643]: W0621 02:31:21.571556 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.571593 kubelet[2643]: E0621 02:31:21.571579 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.572432 kubelet[2643]: E0621 02:31:21.572411 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.577564 kubelet[2643]: W0621 02:31:21.572427 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.577564 kubelet[2643]: E0621 02:31:21.577561 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.578448 kubelet[2643]: E0621 02:31:21.578410 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.578603 kubelet[2643]: W0621 02:31:21.578549 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.578603 kubelet[2643]: E0621 02:31:21.578572 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.579102 kubelet[2643]: E0621 02:31:21.578893 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.579102 kubelet[2643]: W0621 02:31:21.578907 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.579102 kubelet[2643]: E0621 02:31:21.578922 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.579269 kubelet[2643]: E0621 02:31:21.579235 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.579269 kubelet[2643]: W0621 02:31:21.579266 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.579334 kubelet[2643]: E0621 02:31:21.579281 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.579497 kubelet[2643]: E0621 02:31:21.579478 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.579532 kubelet[2643]: W0621 02:31:21.579513 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.579532 kubelet[2643]: E0621 02:31:21.579526 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.580118 kubelet[2643]: E0621 02:31:21.579987 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.580118 kubelet[2643]: W0621 02:31:21.580013 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.580118 kubelet[2643]: E0621 02:31:21.580029 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.580270 kubelet[2643]: E0621 02:31:21.580200 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.580270 kubelet[2643]: W0621 02:31:21.580209 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.580270 kubelet[2643]: E0621 02:31:21.580219 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.580839 kubelet[2643]: E0621 02:31:21.580808 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.581795 kubelet[2643]: W0621 02:31:21.581464 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.581795 kubelet[2643]: E0621 02:31:21.581496 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.582068 kubelet[2643]: E0621 02:31:21.582012 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.582068 kubelet[2643]: W0621 02:31:21.582034 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.582068 kubelet[2643]: E0621 02:31:21.582046 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.582476 kubelet[2643]: E0621 02:31:21.582455 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.582476 kubelet[2643]: W0621 02:31:21.582472 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.582680 kubelet[2643]: E0621 02:31:21.582485 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.584489 kubelet[2643]: E0621 02:31:21.584401 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.584489 kubelet[2643]: W0621 02:31:21.584422 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.584489 kubelet[2643]: E0621 02:31:21.584438 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.584827 kubelet[2643]: E0621 02:31:21.584750 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.584827 kubelet[2643]: W0621 02:31:21.584762 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.584827 kubelet[2643]: E0621 02:31:21.584774 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.586665 kubelet[2643]: E0621 02:31:21.585209 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.586665 kubelet[2643]: W0621 02:31:21.585232 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.586665 kubelet[2643]: E0621 02:31:21.585260 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.586665 kubelet[2643]: E0621 02:31:21.586662 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.586959 kubelet[2643]: W0621 02:31:21.586706 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.586959 kubelet[2643]: E0621 02:31:21.586731 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.587284 kubelet[2643]: E0621 02:31:21.587224 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.587284 kubelet[2643]: W0621 02:31:21.587255 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.587284 kubelet[2643]: E0621 02:31:21.587278 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.588265 kubelet[2643]: E0621 02:31:21.588221 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.588265 kubelet[2643]: W0621 02:31:21.588239 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.588265 kubelet[2643]: E0621 02:31:21.588264 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.590431 kubelet[2643]: E0621 02:31:21.590406 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.590431 kubelet[2643]: W0621 02:31:21.590427 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.590527 kubelet[2643]: E0621 02:31:21.590443 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.590736 kubelet[2643]: E0621 02:31:21.590644 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.590736 kubelet[2643]: W0621 02:31:21.590656 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.590736 kubelet[2643]: E0621 02:31:21.590665 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.591226 kubelet[2643]: E0621 02:31:21.591169 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.591226 kubelet[2643]: W0621 02:31:21.591185 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.591226 kubelet[2643]: E0621 02:31:21.591196 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.593531 kubelet[2643]: E0621 02:31:21.593511 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.593531 kubelet[2643]: W0621 02:31:21.593527 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.593666 kubelet[2643]: E0621 02:31:21.593541 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.593666 kubelet[2643]: I0621 02:31:21.593570 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g59x4\" (UniqueName: \"kubernetes.io/projected/0f3a877e-df8f-466c-a544-3a7180344d8d-kube-api-access-g59x4\") pod \"csi-node-driver-vk5dt\" (UID: \"0f3a877e-df8f-466c-a544-3a7180344d8d\") " pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:21.593842 kubelet[2643]: E0621 02:31:21.593817 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.593842 kubelet[2643]: W0621 02:31:21.593836 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.593926 kubelet[2643]: E0621 02:31:21.593848 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.594005 kubelet[2643]: E0621 02:31:21.593992 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.594005 kubelet[2643]: W0621 02:31:21.594003 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.594072 kubelet[2643]: E0621 02:31:21.594011 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.594169 kubelet[2643]: E0621 02:31:21.594156 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.594169 kubelet[2643]: W0621 02:31:21.594166 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.594264 kubelet[2643]: E0621 02:31:21.594174 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.594264 kubelet[2643]: I0621 02:31:21.594198 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f3a877e-df8f-466c-a544-3a7180344d8d-varrun\") pod \"csi-node-driver-vk5dt\" (UID: \"0f3a877e-df8f-466c-a544-3a7180344d8d\") " pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:21.594588 kubelet[2643]: E0621 02:31:21.594368 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.594588 kubelet[2643]: W0621 02:31:21.594380 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.594588 kubelet[2643]: E0621 02:31:21.594389 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.594588 kubelet[2643]: I0621 02:31:21.594408 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f3a877e-df8f-466c-a544-3a7180344d8d-kubelet-dir\") pod \"csi-node-driver-vk5dt\" (UID: \"0f3a877e-df8f-466c-a544-3a7180344d8d\") " pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:21.594588 kubelet[2643]: E0621 02:31:21.594576 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.594588 kubelet[2643]: W0621 02:31:21.594589 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.594778 kubelet[2643]: E0621 02:31:21.594600 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.595397 kubelet[2643]: E0621 02:31:21.595349 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.595397 kubelet[2643]: W0621 02:31:21.595368 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.595397 kubelet[2643]: E0621 02:31:21.595381 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.595662 kubelet[2643]: E0621 02:31:21.595618 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.595662 kubelet[2643]: W0621 02:31:21.595649 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.595662 kubelet[2643]: E0621 02:31:21.595662 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.595871 kubelet[2643]: I0621 02:31:21.595688 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f3a877e-df8f-466c-a544-3a7180344d8d-socket-dir\") pod \"csi-node-driver-vk5dt\" (UID: \"0f3a877e-df8f-466c-a544-3a7180344d8d\") " pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:21.596044 kubelet[2643]: E0621 02:31:21.596017 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.596044 kubelet[2643]: W0621 02:31:21.596037 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.596299 kubelet[2643]: E0621 02:31:21.596051 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.596545 kubelet[2643]: E0621 02:31:21.596526 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.596545 kubelet[2643]: W0621 02:31:21.596542 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.596545 kubelet[2643]: E0621 02:31:21.596553 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.596810 kubelet[2643]: E0621 02:31:21.596793 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.596810 kubelet[2643]: W0621 02:31:21.596808 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.596863 kubelet[2643]: E0621 02:31:21.596821 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.596863 kubelet[2643]: I0621 02:31:21.596845 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f3a877e-df8f-466c-a544-3a7180344d8d-registration-dir\") pod \"csi-node-driver-vk5dt\" (UID: \"0f3a877e-df8f-466c-a544-3a7180344d8d\") " pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:21.597067 kubelet[2643]: E0621 02:31:21.597049 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.597102 kubelet[2643]: W0621 02:31:21.597066 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.597102 kubelet[2643]: E0621 02:31:21.597079 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.597269 kubelet[2643]: E0621 02:31:21.597229 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.597269 kubelet[2643]: W0621 02:31:21.597241 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.597269 kubelet[2643]: E0621 02:31:21.597258 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.597422 kubelet[2643]: E0621 02:31:21.597410 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.597422 kubelet[2643]: W0621 02:31:21.597421 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.597462 kubelet[2643]: E0621 02:31:21.597429 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.597555 kubelet[2643]: E0621 02:31:21.597544 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.597555 kubelet[2643]: W0621 02:31:21.597554 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.597599 kubelet[2643]: E0621 02:31:21.597561 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.638728 containerd[1503]: time="2025-06-21T02:31:21.638679988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l89ws,Uid:bbf8c276-88cf-4e10-bd06-eed49d356b4a,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:21.655660 containerd[1503]: time="2025-06-21T02:31:21.655551094Z" level=info msg="connecting to shim 24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6" address="unix:///run/containerd/s/61cb82622a6d575df21ede975c62f33316f177e1cc4906861084f89922c8234f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:21.682810 systemd[1]: Started cri-containerd-24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6.scope - libcontainer container 24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6. Jun 21 02:31:21.698408 kubelet[2643]: E0621 02:31:21.698193 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.698408 kubelet[2643]: W0621 02:31:21.698232 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.698408 kubelet[2643]: E0621 02:31:21.698264 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.698566 kubelet[2643]: E0621 02:31:21.698437 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.698566 kubelet[2643]: W0621 02:31:21.698446 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.698566 kubelet[2643]: E0621 02:31:21.698454 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.698650 kubelet[2643]: E0621 02:31:21.698597 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.698650 kubelet[2643]: W0621 02:31:21.698604 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.698650 kubelet[2643]: E0621 02:31:21.698614 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.698821 kubelet[2643]: E0621 02:31:21.698790 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.698821 kubelet[2643]: W0621 02:31:21.698805 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.698821 kubelet[2643]: E0621 02:31:21.698815 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.699213 kubelet[2643]: E0621 02:31:21.698968 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.699213 kubelet[2643]: W0621 02:31:21.698976 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.699213 kubelet[2643]: E0621 02:31:21.698983 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.699412 kubelet[2643]: E0621 02:31:21.699354 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.699412 kubelet[2643]: W0621 02:31:21.699370 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.699570 kubelet[2643]: E0621 02:31:21.699485 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.700171 kubelet[2643]: E0621 02:31:21.700033 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.700171 kubelet[2643]: W0621 02:31:21.700048 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.700171 kubelet[2643]: E0621 02:31:21.700063 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.700974 kubelet[2643]: E0621 02:31:21.700852 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.700974 kubelet[2643]: W0621 02:31:21.700870 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.700974 kubelet[2643]: E0621 02:31:21.700883 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.701272 kubelet[2643]: E0621 02:31:21.701139 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.701272 kubelet[2643]: W0621 02:31:21.701152 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.701272 kubelet[2643]: E0621 02:31:21.701163 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:21.717100 kubelet[2643]: E0621 02:31:21.716934 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:21.717100 kubelet[2643]: W0621 02:31:21.717096 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:21.717206 kubelet[2643]: E0621 02:31:21.717112 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:21.773785 containerd[1503]: time="2025-06-21T02:31:21.773746560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l89ws,Uid:bbf8c276-88cf-4e10-bd06-eed49d356b4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\"" Jun 21 02:31:22.589644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493343364.mount: Deactivated successfully. 
Jun 21 02:31:23.153696 kubelet[2643]: E0621 02:31:23.153617 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vk5dt" podUID="0f3a877e-df8f-466c-a544-3a7180344d8d" Jun 21 02:31:24.485775 containerd[1503]: time="2025-06-21T02:31:24.485693688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:24.486575 containerd[1503]: time="2025-06-21T02:31:24.486508661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 21 02:31:24.488054 containerd[1503]: time="2025-06-21T02:31:24.488017725Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:24.490458 containerd[1503]: time="2025-06-21T02:31:24.490407643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:24.491091 containerd[1503]: time="2025-06-21T02:31:24.491044813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 2.976340556s" Jun 21 02:31:24.491091 containerd[1503]: time="2025-06-21T02:31:24.491076254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 21 02:31:24.492036 containerd[1503]: time="2025-06-21T02:31:24.491998149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 02:31:24.510149 containerd[1503]: time="2025-06-21T02:31:24.510105838Z" level=info msg="CreateContainer within sandbox \"189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 02:31:24.516016 containerd[1503]: time="2025-06-21T02:31:24.515976691Z" level=info msg="Container fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:24.523972 containerd[1503]: time="2025-06-21T02:31:24.523891378Z" level=info msg="CreateContainer within sandbox \"189786b74bc88f90efa760ca2c7b0c0e66fd9d366100586a6665cdbb19b68865\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d\"" Jun 21 02:31:24.524642 containerd[1503]: time="2025-06-21T02:31:24.524590189Z" level=info msg="StartContainer for \"fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d\"" Jun 21 02:31:24.527157 containerd[1503]: time="2025-06-21T02:31:24.527121949Z" level=info msg="connecting to shim fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d" address="unix:///run/containerd/s/1f6b5e364add994e65d4eced5f4492815659cc9d0705565660e412acdcc9a0e3" protocol=ttrpc version=3 Jun 21 02:31:24.549812 systemd[1]: Started cri-containerd-fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d.scope - libcontainer container fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d. 
Jun 21 02:31:24.690669 containerd[1503]: time="2025-06-21T02:31:24.690597198Z" level=info msg="StartContainer for \"fe4888733039a800e6c4f3f4afdd73989442d2afa03752e7c036d8a8014ea05d\" returns successfully" Jun 21 02:31:25.153087 kubelet[2643]: E0621 02:31:25.153036 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vk5dt" podUID="0f3a877e-df8f-466c-a544-3a7180344d8d" Jun 21 02:31:25.226512 kubelet[2643]: E0621 02:31:25.226469 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:25.312515 kubelet[2643]: E0621 02:31:25.312423 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.312515 kubelet[2643]: W0621 02:31:25.312454 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.312515 kubelet[2643]: E0621 02:31:25.312476 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:25.325665 kubelet[2643]: E0621 02:31:25.325651 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.325665 kubelet[2643]: W0621 02:31:25.325664 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.325721 kubelet[2643]: E0621 02:31:25.325673 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:25.325921 kubelet[2643]: E0621 02:31:25.325894 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.325921 kubelet[2643]: W0621 02:31:25.325909 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.325921 kubelet[2643]: E0621 02:31:25.325919 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:25.326388 kubelet[2643]: E0621 02:31:25.326365 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.326388 kubelet[2643]: W0621 02:31:25.326381 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.326442 kubelet[2643]: E0621 02:31:25.326390 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:25.326596 kubelet[2643]: E0621 02:31:25.326582 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.326596 kubelet[2643]: W0621 02:31:25.326593 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.326667 kubelet[2643]: E0621 02:31:25.326602 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:25.326814 kubelet[2643]: E0621 02:31:25.326800 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.326814 kubelet[2643]: W0621 02:31:25.326812 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.326871 kubelet[2643]: E0621 02:31:25.326822 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:31:25.327099 kubelet[2643]: E0621 02:31:25.327085 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:31:25.327099 kubelet[2643]: W0621 02:31:25.327097 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:31:25.327149 kubelet[2643]: E0621 02:31:25.327105 2643 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:31:25.507482 containerd[1503]: time="2025-06-21T02:31:25.507378709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:25.508229 containerd[1503]: time="2025-06-21T02:31:25.508190402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 21 02:31:25.509743 containerd[1503]: time="2025-06-21T02:31:25.509691545Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:25.511988 containerd[1503]: time="2025-06-21T02:31:25.511958379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:25.512755 containerd[1503]: time="2025-06-21T02:31:25.512601829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 1.0205748s" Jun 21 02:31:25.512755 containerd[1503]: time="2025-06-21T02:31:25.512649710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 21 02:31:25.516807 containerd[1503]: time="2025-06-21T02:31:25.516757333Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 02:31:25.526451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022189594.mount: Deactivated successfully. Jun 21 02:31:25.531113 containerd[1503]: time="2025-06-21T02:31:25.523789681Z" level=info msg="Container 38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:25.538205 containerd[1503]: time="2025-06-21T02:31:25.538157261Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\"" Jun 21 02:31:25.539020 containerd[1503]: time="2025-06-21T02:31:25.538979393Z" level=info msg="StartContainer for \"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\"" Jun 21 02:31:25.540841 containerd[1503]: time="2025-06-21T02:31:25.540809861Z" level=info msg="connecting to shim 38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905" address="unix:///run/containerd/s/61cb82622a6d575df21ede975c62f33316f177e1cc4906861084f89922c8234f" protocol=ttrpc version=3 Jun 21 02:31:25.565782 systemd[1]: Started cri-containerd-38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905.scope - libcontainer container 38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905. Jun 21 02:31:25.603434 containerd[1503]: time="2025-06-21T02:31:25.603397660Z" level=info msg="StartContainer for \"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\" returns successfully" Jun 21 02:31:25.629307 systemd[1]: cri-containerd-38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905.scope: Deactivated successfully. 
Jun 21 02:31:25.666050 containerd[1503]: time="2025-06-21T02:31:25.665901857Z" level=info msg="received exit event container_id:\"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\" id:\"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\" pid:3352 exited_at:{seconds:1750473085 nanos:646029953}" Jun 21 02:31:25.670948 containerd[1503]: time="2025-06-21T02:31:25.670892534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\" id:\"38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905\" pid:3352 exited_at:{seconds:1750473085 nanos:646029953}" Jun 21 02:31:25.701454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a94b348036dac232f49d56d26575abd8fc8330083906c09cef8743ac744905-rootfs.mount: Deactivated successfully. Jun 21 02:31:26.230699 kubelet[2643]: I0621 02:31:26.230620 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:31:26.231102 kubelet[2643]: E0621 02:31:26.231087 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:26.231365 containerd[1503]: time="2025-06-21T02:31:26.231337500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 02:31:26.247984 kubelet[2643]: I0621 02:31:26.247924 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7976dd4b49-n988v" podStartSLOduration=2.268237329 podStartE2EDuration="5.247910024s" podCreationTimestamp="2025-06-21 02:31:21 +0000 UTC" firstStartedPulling="2025-06-21 02:31:21.512213572 +0000 UTC m=+21.435682687" lastFinishedPulling="2025-06-21 02:31:24.491886227 +0000 UTC m=+24.415355382" observedRunningTime="2025-06-21 02:31:25.239169481 +0000 UTC m=+25.162638636" watchObservedRunningTime="2025-06-21 02:31:26.247910024 +0000 UTC 
m=+26.171379179" Jun 21 02:31:27.153309 kubelet[2643]: E0621 02:31:27.153150 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vk5dt" podUID="0f3a877e-df8f-466c-a544-3a7180344d8d" Jun 21 02:31:27.746856 kubelet[2643]: I0621 02:31:27.746801 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:31:27.747234 kubelet[2643]: E0621 02:31:27.747138 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:28.234155 kubelet[2643]: E0621 02:31:28.234122 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:28.385687 containerd[1503]: time="2025-06-21T02:31:28.385082567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:28.386066 containerd[1503]: time="2025-06-21T02:31:28.385787056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 21 02:31:28.386635 containerd[1503]: time="2025-06-21T02:31:28.386595507Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:28.389051 containerd[1503]: time="2025-06-21T02:31:28.389012060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:28.389533 containerd[1503]: 
time="2025-06-21T02:31:28.389496707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 2.158124527s" Jun 21 02:31:28.389533 containerd[1503]: time="2025-06-21T02:31:28.389526347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 21 02:31:28.393383 containerd[1503]: time="2025-06-21T02:31:28.393348279Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 02:31:28.401523 containerd[1503]: time="2025-06-21T02:31:28.401483310Z" level=info msg="Container fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:28.405011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755880260.mount: Deactivated successfully. 
Jun 21 02:31:28.411390 containerd[1503]: time="2025-06-21T02:31:28.411343124Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\"" Jun 21 02:31:28.412050 containerd[1503]: time="2025-06-21T02:31:28.411919772Z" level=info msg="StartContainer for \"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\"" Jun 21 02:31:28.413545 containerd[1503]: time="2025-06-21T02:31:28.413506474Z" level=info msg="connecting to shim fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf" address="unix:///run/containerd/s/61cb82622a6d575df21ede975c62f33316f177e1cc4906861084f89922c8234f" protocol=ttrpc version=3 Jun 21 02:31:28.432781 systemd[1]: Started cri-containerd-fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf.scope - libcontainer container fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf. Jun 21 02:31:28.467689 containerd[1503]: time="2025-06-21T02:31:28.467649451Z" level=info msg="StartContainer for \"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\" returns successfully" Jun 21 02:31:29.028916 systemd[1]: cri-containerd-fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf.scope: Deactivated successfully. Jun 21 02:31:29.029413 systemd[1]: cri-containerd-fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf.scope: Consumed 475ms CPU time, 175.1M memory peak, 3.3M read from disk, 165.8M written to disk. 
Jun 21 02:31:29.045322 containerd[1503]: time="2025-06-21T02:31:29.045167255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\" id:\"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\" pid:3413 exited_at:{seconds:1750473089 nanos:44724930}" Jun 21 02:31:29.051665 containerd[1503]: time="2025-06-21T02:31:29.051601900Z" level=info msg="received exit event container_id:\"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\" id:\"fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf\" pid:3413 exited_at:{seconds:1750473089 nanos:44724930}" Jun 21 02:31:29.068463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcd2cf136c82e1388b6e0575d53c7cf3bb123b07cf82899a6597ac0f4cd92baf-rootfs.mount: Deactivated successfully. Jun 21 02:31:29.100680 kubelet[2643]: I0621 02:31:29.100199 2643 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 02:31:29.158183 systemd[1]: Created slice kubepods-besteffort-pod0f3a877e_df8f_466c_a544_3a7180344d8d.slice - libcontainer container kubepods-besteffort-pod0f3a877e_df8f_466c_a544_3a7180344d8d.slice. Jun 21 02:31:29.160872 containerd[1503]: time="2025-06-21T02:31:29.160815213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vk5dt,Uid:0f3a877e-df8f-466c-a544-3a7180344d8d,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:29.514731 systemd[1]: Created slice kubepods-burstable-podb6f10c99_ef44_46ac_ab00_4e7306845019.slice - libcontainer container kubepods-burstable-podb6f10c99_ef44_46ac_ab00_4e7306845019.slice. Jun 21 02:31:29.523554 systemd[1]: Created slice kubepods-besteffort-pod7e146eb6_a0ed_4bdb_9df2_50f1ec051e3c.slice - libcontainer container kubepods-besteffort-pod7e146eb6_a0ed_4bdb_9df2_50f1ec051e3c.slice. 
Jun 21 02:31:29.533776 systemd[1]: Created slice kubepods-besteffort-podff0a7a3b_25a5_45d3_8a68_8a89da6c22aa.slice - libcontainer container kubepods-besteffort-podff0a7a3b_25a5_45d3_8a68_8a89da6c22aa.slice. Jun 21 02:31:29.542527 systemd[1]: Created slice kubepods-burstable-pode883174d_6987_420b_b1e2_4112f48f5a12.slice - libcontainer container kubepods-burstable-pode883174d_6987_420b_b1e2_4112f48f5a12.slice. Jun 21 02:31:29.548058 systemd[1]: Created slice kubepods-besteffort-pod021d921e_3930_4440_b9e8_6b2ebdeb9caa.slice - libcontainer container kubepods-besteffort-pod021d921e_3930_4440_b9e8_6b2ebdeb9caa.slice. Jun 21 02:31:29.554388 systemd[1]: Created slice kubepods-besteffort-pod407d696e_6743_4f48_9ba6_3d9f1e8e2a69.slice - libcontainer container kubepods-besteffort-pod407d696e_6743_4f48_9ba6_3d9f1e8e2a69.slice. Jun 21 02:31:29.559024 kubelet[2643]: I0621 02:31:29.558991 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa-calico-apiserver-certs\") pod \"calico-apiserver-74b6748ff7-mt5f5\" (UID: \"ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa\") " pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5" Jun 21 02:31:29.559127 kubelet[2643]: I0621 02:31:29.559030 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-ca-bundle\") pod \"whisker-6f97d8d797-p6hx2\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") " pod="calico-system/whisker-6f97d8d797-p6hx2" Jun 21 02:31:29.559127 kubelet[2643]: I0621 02:31:29.559055 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjcw\" (UniqueName: \"kubernetes.io/projected/e883174d-6987-420b-b1e2-4112f48f5a12-kube-api-access-qxjcw\") pod \"coredns-674b8bbfcf-txtn6\" (UID: 
\"e883174d-6987-420b-b1e2-4112f48f5a12\") " pod="kube-system/coredns-674b8bbfcf-txtn6" Jun 21 02:31:29.559127 kubelet[2643]: I0621 02:31:29.559072 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/021d921e-3930-4440-b9e8-6b2ebdeb9caa-goldmane-key-pair\") pod \"goldmane-5bd85449d4-q76kv\" (UID: \"021d921e-3930-4440-b9e8-6b2ebdeb9caa\") " pod="calico-system/goldmane-5bd85449d4-q76kv" Jun 21 02:31:29.559127 kubelet[2643]: I0621 02:31:29.559092 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58pgq\" (UniqueName: \"kubernetes.io/projected/aa6e677b-8d86-4279-989e-e2870085ea43-kube-api-access-58pgq\") pod \"whisker-6f97d8d797-p6hx2\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") " pod="calico-system/whisker-6f97d8d797-p6hx2" Jun 21 02:31:29.559127 kubelet[2643]: I0621 02:31:29.559106 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/021d921e-3930-4440-b9e8-6b2ebdeb9caa-config\") pod \"goldmane-5bd85449d4-q76kv\" (UID: \"021d921e-3930-4440-b9e8-6b2ebdeb9caa\") " pod="calico-system/goldmane-5bd85449d4-q76kv" Jun 21 02:31:29.559253 kubelet[2643]: I0621 02:31:29.559125 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9bw\" (UniqueName: \"kubernetes.io/projected/ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa-kube-api-access-tv9bw\") pod \"calico-apiserver-74b6748ff7-mt5f5\" (UID: \"ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa\") " pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5" Jun 21 02:31:29.559253 kubelet[2643]: I0621 02:31:29.559140 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpjg\" (UniqueName: 
\"kubernetes.io/projected/021d921e-3930-4440-b9e8-6b2ebdeb9caa-kube-api-access-xqpjg\") pod \"goldmane-5bd85449d4-q76kv\" (UID: \"021d921e-3930-4440-b9e8-6b2ebdeb9caa\") " pod="calico-system/goldmane-5bd85449d4-q76kv" Jun 21 02:31:29.559253 kubelet[2643]: I0621 02:31:29.559156 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbh2\" (UniqueName: \"kubernetes.io/projected/b6f10c99-ef44-46ac-ab00-4e7306845019-kube-api-access-ldbh2\") pod \"coredns-674b8bbfcf-l25ql\" (UID: \"b6f10c99-ef44-46ac-ab00-4e7306845019\") " pod="kube-system/coredns-674b8bbfcf-l25ql" Jun 21 02:31:29.559253 kubelet[2643]: I0621 02:31:29.559172 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e883174d-6987-420b-b1e2-4112f48f5a12-config-volume\") pod \"coredns-674b8bbfcf-txtn6\" (UID: \"e883174d-6987-420b-b1e2-4112f48f5a12\") " pod="kube-system/coredns-674b8bbfcf-txtn6" Jun 21 02:31:29.559253 kubelet[2643]: I0621 02:31:29.559189 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6f10c99-ef44-46ac-ab00-4e7306845019-config-volume\") pod \"coredns-674b8bbfcf-l25ql\" (UID: \"b6f10c99-ef44-46ac-ab00-4e7306845019\") " pod="kube-system/coredns-674b8bbfcf-l25ql" Jun 21 02:31:29.559362 kubelet[2643]: I0621 02:31:29.559204 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/021d921e-3930-4440-b9e8-6b2ebdeb9caa-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-q76kv\" (UID: \"021d921e-3930-4440-b9e8-6b2ebdeb9caa\") " pod="calico-system/goldmane-5bd85449d4-q76kv" Jun 21 02:31:29.559362 kubelet[2643]: I0621 02:31:29.559259 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-backend-key-pair\") pod \"whisker-6f97d8d797-p6hx2\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") " pod="calico-system/whisker-6f97d8d797-p6hx2" Jun 21 02:31:29.567332 systemd[1]: Created slice kubepods-besteffort-podaa6e677b_8d86_4279_989e_e2870085ea43.slice - libcontainer container kubepods-besteffort-podaa6e677b_8d86_4279_989e_e2870085ea43.slice. Jun 21 02:31:29.626714 containerd[1503]: time="2025-06-21T02:31:29.626670727Z" level=error msg="Failed to destroy network for sandbox \"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:31:29.629618 containerd[1503]: time="2025-06-21T02:31:29.628582152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vk5dt,Uid:0f3a877e-df8f-466c-a544-3a7180344d8d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:31:29.629274 systemd[1]: run-netns-cni\x2d673a9735\x2d3687\x2d0f8a\x2d9c41\x2ddab215d39926.mount: Deactivated successfully. 
Jun 21 02:31:29.634242 kubelet[2643]: E0621 02:31:29.634184 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:31:29.634325 kubelet[2643]: E0621 02:31:29.634267 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:29.634325 kubelet[2643]: E0621 02:31:29.634290 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vk5dt" Jun 21 02:31:29.634386 kubelet[2643]: E0621 02:31:29.634354 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vk5dt_calico-system(0f3a877e-df8f-466c-a544-3a7180344d8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vk5dt_calico-system(0f3a877e-df8f-466c-a544-3a7180344d8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e7e1af4fa5485053822bcf928fde6394656e50a3c70a1ef09241352738a6f1\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vk5dt" podUID="0f3a877e-df8f-466c-a544-3a7180344d8d" Jun 21 02:31:29.662410 kubelet[2643]: I0621 02:31:29.659902 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jf2\" (UniqueName: \"kubernetes.io/projected/7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c-kube-api-access-c4jf2\") pod \"calico-kube-controllers-7d6b7d797b-sqcm9\" (UID: \"7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c\") " pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9" Jun 21 02:31:29.662410 kubelet[2643]: I0621 02:31:29.659960 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c-tigera-ca-bundle\") pod \"calico-kube-controllers-7d6b7d797b-sqcm9\" (UID: \"7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c\") " pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9" Jun 21 02:31:29.662410 kubelet[2643]: I0621 02:31:29.660121 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4sf\" (UniqueName: \"kubernetes.io/projected/407d696e-6743-4f48-9ba6-3d9f1e8e2a69-kube-api-access-xh4sf\") pod \"calico-apiserver-74b6748ff7-9cgwr\" (UID: \"407d696e-6743-4f48-9ba6-3d9f1e8e2a69\") " pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr" Jun 21 02:31:29.662410 kubelet[2643]: I0621 02:31:29.660193 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/407d696e-6743-4f48-9ba6-3d9f1e8e2a69-calico-apiserver-certs\") pod \"calico-apiserver-74b6748ff7-9cgwr\" (UID: \"407d696e-6743-4f48-9ba6-3d9f1e8e2a69\") " pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr" Jun 21 
02:31:29.822839 kubelet[2643]: E0621 02:31:29.822762 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:31:29.823399 containerd[1503]: time="2025-06-21T02:31:29.823345628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25ql,Uid:b6f10c99-ef44-46ac-ab00-4e7306845019,Namespace:kube-system,Attempt:0,}"
Jun 21 02:31:29.831524 containerd[1503]: time="2025-06-21T02:31:29.830659724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b7d797b-sqcm9,Uid:7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c,Namespace:calico-system,Attempt:0,}"
Jun 21 02:31:29.837345 containerd[1503]: time="2025-06-21T02:31:29.837294011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-mt5f5,Uid:ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa,Namespace:calico-apiserver,Attempt:0,}"
Jun 21 02:31:29.845915 kubelet[2643]: E0621 02:31:29.845581 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:31:29.846207 containerd[1503]: time="2025-06-21T02:31:29.846160967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txtn6,Uid:e883174d-6987-420b-b1e2-4112f48f5a12,Namespace:kube-system,Attempt:0,}"
Jun 21 02:31:29.854279 containerd[1503]: time="2025-06-21T02:31:29.854240473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-q76kv,Uid:021d921e-3930-4440-b9e8-6b2ebdeb9caa,Namespace:calico-system,Attempt:0,}"
Jun 21 02:31:29.867258 containerd[1503]: time="2025-06-21T02:31:29.866706357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-9cgwr,Uid:407d696e-6743-4f48-9ba6-3d9f1e8e2a69,Namespace:calico-apiserver,Attempt:0,}"
Jun 21 02:31:29.873447 containerd[1503]: time="2025-06-21T02:31:29.873394764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f97d8d797-p6hx2,Uid:aa6e677b-8d86-4279-989e-e2870085ea43,Namespace:calico-system,Attempt:0,}"
Jun 21 02:31:29.914940 containerd[1503]: time="2025-06-21T02:31:29.914882189Z" level=error msg="Failed to destroy network for sandbox \"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.916683 containerd[1503]: time="2025-06-21T02:31:29.916641012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25ql,Uid:b6f10c99-ef44-46ac-ab00-4e7306845019,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.917258 kubelet[2643]: E0621 02:31:29.917221 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.917339 kubelet[2643]: E0621 02:31:29.917281 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l25ql"
Jun 21 02:31:29.917339 kubelet[2643]: E0621 02:31:29.917310 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l25ql"
Jun 21 02:31:29.917417 kubelet[2643]: E0621 02:31:29.917374 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l25ql_kube-system(b6f10c99-ef44-46ac-ab00-4e7306845019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l25ql_kube-system(b6f10c99-ef44-46ac-ab00-4e7306845019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dd59d5e0950e99a7247c07a61ade0ef4b5802c0cbe271eaca7abd7e56ce5644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l25ql" podUID="b6f10c99-ef44-46ac-ab00-4e7306845019"
Jun 21 02:31:29.939619 containerd[1503]: time="2025-06-21T02:31:29.939566113Z" level=error msg="Failed to destroy network for sandbox \"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.940244 containerd[1503]: time="2025-06-21T02:31:29.940162281Z" level=error msg="Failed to destroy network for sandbox \"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.940974 containerd[1503]: time="2025-06-21T02:31:29.940811729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b7d797b-sqcm9,Uid:7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.941226 kubelet[2643]: E0621 02:31:29.941192 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.941331 kubelet[2643]: E0621 02:31:29.941313 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9"
Jun 21 02:31:29.941696 kubelet[2643]: E0621 02:31:29.941404 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9"
Jun 21 02:31:29.941696 kubelet[2643]: E0621 02:31:29.941467 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b7d797b-sqcm9_calico-system(7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b7d797b-sqcm9_calico-system(7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1805347ebfd69555d6a31a14084efb43c1c25490463c93de7290045ba919dcc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9" podUID="7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c"
Jun 21 02:31:29.941808 containerd[1503]: time="2025-06-21T02:31:29.941769822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-mt5f5,Uid:ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.942479 kubelet[2643]: E0621 02:31:29.942073 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.942541 kubelet[2643]: E0621 02:31:29.942494 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5"
Jun 21 02:31:29.943039 kubelet[2643]: E0621 02:31:29.942514 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5"
Jun 21 02:31:29.943115 kubelet[2643]: E0621 02:31:29.943071 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b6748ff7-mt5f5_calico-apiserver(ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b6748ff7-mt5f5_calico-apiserver(ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0e54deb4b8f70ac788c190304b38e5dc9e2d2d141be32d41506fcdadf654432\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5" podUID="ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa"
Jun 21 02:31:29.952693 containerd[1503]: time="2025-06-21T02:31:29.952617004Z" level=error msg="Failed to destroy network for sandbox \"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.957024 containerd[1503]: time="2025-06-21T02:31:29.956393214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txtn6,Uid:e883174d-6987-420b-b1e2-4112f48f5a12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.957151 kubelet[2643]: E0621 02:31:29.956662 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.957151 kubelet[2643]: E0621 02:31:29.956710 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-txtn6"
Jun 21 02:31:29.957151 kubelet[2643]: E0621 02:31:29.956729 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-txtn6"
Jun 21 02:31:29.957237 kubelet[2643]: E0621 02:31:29.956787 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-txtn6_kube-system(e883174d-6987-420b-b1e2-4112f48f5a12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-txtn6_kube-system(e883174d-6987-420b-b1e2-4112f48f5a12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ceeb6a99d2d9fc070d21cb373cf7198e77bb14465b887fd5120dc44dad80bcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-txtn6" podUID="e883174d-6987-420b-b1e2-4112f48f5a12"
Jun 21 02:31:29.967982 containerd[1503]: time="2025-06-21T02:31:29.967931325Z" level=error msg="Failed to destroy network for sandbox \"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.968494 containerd[1503]: time="2025-06-21T02:31:29.968065687Z" level=error msg="Failed to destroy network for sandbox \"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.969172 containerd[1503]: time="2025-06-21T02:31:29.969137861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-q76kv,Uid:021d921e-3930-4440-b9e8-6b2ebdeb9caa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.969605 kubelet[2643]: E0621 02:31:29.969428 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.969605 kubelet[2643]: E0621 02:31:29.969491 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-q76kv"
Jun 21 02:31:29.969605 kubelet[2643]: E0621 02:31:29.969515 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-q76kv"
Jun 21 02:31:29.969767 kubelet[2643]: E0621 02:31:29.969561 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-q76kv_calico-system(021d921e-3930-4440-b9e8-6b2ebdeb9caa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-q76kv_calico-system(021d921e-3930-4440-b9e8-6b2ebdeb9caa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b02bd4a14afb8da893078ac522e12d94fd73963409c9ac86e4b950be10d166c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-q76kv" podUID="021d921e-3930-4440-b9e8-6b2ebdeb9caa"
Jun 21 02:31:29.969977 containerd[1503]: time="2025-06-21T02:31:29.969940231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f97d8d797-p6hx2,Uid:aa6e677b-8d86-4279-989e-e2870085ea43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.970135 kubelet[2643]: E0621 02:31:29.970108 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.970169 kubelet[2643]: E0621 02:31:29.970147 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f97d8d797-p6hx2"
Jun 21 02:31:29.970169 kubelet[2643]: E0621 02:31:29.970164 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f97d8d797-p6hx2"
Jun 21 02:31:29.970223 kubelet[2643]: E0621 02:31:29.970205 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f97d8d797-p6hx2_calico-system(aa6e677b-8d86-4279-989e-e2870085ea43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f97d8d797-p6hx2_calico-system(aa6e677b-8d86-4279-989e-e2870085ea43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c43fd96fc381da7095d5512beb13adaa9d8871c8461d5c1fbc28466366ac38ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f97d8d797-p6hx2" podUID="aa6e677b-8d86-4279-989e-e2870085ea43"
Jun 21 02:31:29.971104 containerd[1503]: time="2025-06-21T02:31:29.971078646Z" level=error msg="Failed to destroy network for sandbox \"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.972360 containerd[1503]: time="2025-06-21T02:31:29.972319503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-9cgwr,Uid:407d696e-6743-4f48-9ba6-3d9f1e8e2a69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.972676 kubelet[2643]: E0621 02:31:29.972650 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 02:31:29.972796 kubelet[2643]: E0621 02:31:29.972780 2643 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr"
Jun 21 02:31:29.972873 kubelet[2643]: E0621 02:31:29.972856 2643 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr"
Jun 21 02:31:29.972974 kubelet[2643]: E0621 02:31:29.972951 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b6748ff7-9cgwr_calico-apiserver(407d696e-6743-4f48-9ba6-3d9f1e8e2a69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b6748ff7-9cgwr_calico-apiserver(407d696e-6743-4f48-9ba6-3d9f1e8e2a69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09c2c097c11282639326622045bc7f624f53be8914e927be56d332dda9fdc179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr" podUID="407d696e-6743-4f48-9ba6-3d9f1e8e2a69"
Jun 21 02:31:30.248431 containerd[1503]: time="2025-06-21T02:31:30.248265529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\""
Jun 21 02:31:34.061103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731660688.mount: Deactivated successfully.
Jun 21 02:31:34.531898 containerd[1503]: time="2025-06-21T02:31:34.531789995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:31:34.532396 containerd[1503]: time="2025-06-21T02:31:34.532346321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367"
Jun 21 02:31:34.533356 containerd[1503]: time="2025-06-21T02:31:34.533322212Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:31:34.535109 containerd[1503]: time="2025-06-21T02:31:34.535055951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:31:34.535737 containerd[1503]: time="2025-06-21T02:31:34.535561917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 4.287197587s"
Jun 21 02:31:34.535737 containerd[1503]: time="2025-06-21T02:31:34.535587997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\""
Jun 21 02:31:34.551512 containerd[1503]: time="2025-06-21T02:31:34.551466453Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jun 21 02:31:34.581393 containerd[1503]: time="2025-06-21T02:31:34.581329623Z" level=info msg="Container 59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c: CDI devices from CRI Config.CDIDevices: []"
Jun 21 02:31:34.591567 containerd[1503]: time="2025-06-21T02:31:34.591443575Z" level=info msg="CreateContainer within sandbox \"24e4488512bc3fd46f17ab33bed62ba81d5ad8ddeb3140e7d27520ed233fe0f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\""
Jun 21 02:31:34.592231 containerd[1503]: time="2025-06-21T02:31:34.592188224Z" level=info msg="StartContainer for \"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\""
Jun 21 02:31:34.593790 containerd[1503]: time="2025-06-21T02:31:34.593758001Z" level=info msg="connecting to shim 59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c" address="unix:///run/containerd/s/61cb82622a6d575df21ede975c62f33316f177e1cc4906861084f89922c8234f" protocol=ttrpc version=3
Jun 21 02:31:34.653818 systemd[1]: Started cri-containerd-59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c.scope - libcontainer container 59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c.
Jun 21 02:31:34.717897 containerd[1503]: time="2025-06-21T02:31:34.717832134Z" level=info msg="StartContainer for \"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\" returns successfully"
Jun 21 02:31:34.901684 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jun 21 02:31:34.901790 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jun 21 02:31:35.094300 kubelet[2643]: I0621 02:31:35.094253 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-ca-bundle\") pod \"aa6e677b-8d86-4279-989e-e2870085ea43\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") "
Jun 21 02:31:35.095338 kubelet[2643]: I0621 02:31:35.094431 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-backend-key-pair\") pod \"aa6e677b-8d86-4279-989e-e2870085ea43\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") "
Jun 21 02:31:35.095338 kubelet[2643]: I0621 02:31:35.094484 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58pgq\" (UniqueName: \"kubernetes.io/projected/aa6e677b-8d86-4279-989e-e2870085ea43-kube-api-access-58pgq\") pod \"aa6e677b-8d86-4279-989e-e2870085ea43\" (UID: \"aa6e677b-8d86-4279-989e-e2870085ea43\") "
Jun 21 02:31:35.103766 kubelet[2643]: I0621 02:31:35.103719 2643 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6e677b-8d86-4279-989e-e2870085ea43-kube-api-access-58pgq" (OuterVolumeSpecName: "kube-api-access-58pgq") pod "aa6e677b-8d86-4279-989e-e2870085ea43" (UID: "aa6e677b-8d86-4279-989e-e2870085ea43"). InnerVolumeSpecName "kube-api-access-58pgq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 21 02:31:35.104431 systemd[1]: var-lib-kubelet-pods-aa6e677b\x2d8d86\x2d4279\x2d989e\x2de2870085ea43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58pgq.mount: Deactivated successfully.
Jun 21 02:31:35.104522 systemd[1]: var-lib-kubelet-pods-aa6e677b\x2d8d86\x2d4279\x2d989e\x2de2870085ea43-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jun 21 02:31:35.106281 kubelet[2643]: I0621 02:31:35.106203 2643 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aa6e677b-8d86-4279-989e-e2870085ea43" (UID: "aa6e677b-8d86-4279-989e-e2870085ea43"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 21 02:31:35.111879 kubelet[2643]: I0621 02:31:35.111831 2643 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aa6e677b-8d86-4279-989e-e2870085ea43" (UID: "aa6e677b-8d86-4279-989e-e2870085ea43"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 21 02:31:35.195709 kubelet[2643]: I0621 02:31:35.195529 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jun 21 02:31:35.195709 kubelet[2643]: I0621 02:31:35.195563 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58pgq\" (UniqueName: \"kubernetes.io/projected/aa6e677b-8d86-4279-989e-e2870085ea43-kube-api-access-58pgq\") on node \"localhost\" DevicePath \"\""
Jun 21 02:31:35.195709 kubelet[2643]: I0621 02:31:35.195573 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6e677b-8d86-4279-989e-e2870085ea43-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jun 21 02:31:35.269168 systemd[1]: Removed slice kubepods-besteffort-podaa6e677b_8d86_4279_989e_e2870085ea43.slice - libcontainer container kubepods-besteffort-podaa6e677b_8d86_4279_989e_e2870085ea43.slice.
Jun 21 02:31:35.295424 kubelet[2643]: I0621 02:31:35.294943 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l89ws" podStartSLOduration=1.533596201 podStartE2EDuration="14.294924063s" podCreationTimestamp="2025-06-21 02:31:21 +0000 UTC" firstStartedPulling="2025-06-21 02:31:21.774991503 +0000 UTC m=+21.698460658" lastFinishedPulling="2025-06-21 02:31:34.536319405 +0000 UTC m=+34.459788520" observedRunningTime="2025-06-21 02:31:35.280121624 +0000 UTC m=+35.203590779" watchObservedRunningTime="2025-06-21 02:31:35.294924063 +0000 UTC m=+35.218393258"
Jun 21 02:31:35.342739 systemd[1]: Created slice kubepods-besteffort-podbe2fb415_6189_47f7_9eb9_c0e76a8dec87.slice - libcontainer container kubepods-besteffort-podbe2fb415_6189_47f7_9eb9_c0e76a8dec87.slice.
Jun 21 02:31:35.396868 kubelet[2643]: I0621 02:31:35.396825 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2fb415-6189-47f7-9eb9-c0e76a8dec87-whisker-ca-bundle\") pod \"whisker-7df8b759db-hnr2h\" (UID: \"be2fb415-6189-47f7-9eb9-c0e76a8dec87\") " pod="calico-system/whisker-7df8b759db-hnr2h"
Jun 21 02:31:35.402223 kubelet[2643]: I0621 02:31:35.396928 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kz6f\" (UniqueName: \"kubernetes.io/projected/be2fb415-6189-47f7-9eb9-c0e76a8dec87-kube-api-access-9kz6f\") pod \"whisker-7df8b759db-hnr2h\" (UID: \"be2fb415-6189-47f7-9eb9-c0e76a8dec87\") " pod="calico-system/whisker-7df8b759db-hnr2h"
Jun 21 02:31:35.402362 kubelet[2643]: I0621 02:31:35.402345 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be2fb415-6189-47f7-9eb9-c0e76a8dec87-whisker-backend-key-pair\") pod \"whisker-7df8b759db-hnr2h\" (UID: \"be2fb415-6189-47f7-9eb9-c0e76a8dec87\") " pod="calico-system/whisker-7df8b759db-hnr2h"
Jun 21 02:31:35.419560 containerd[1503]: time="2025-06-21T02:31:35.419507240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\" id:\"acbc044b8f9051b5adc1036550c4807d0c3625d6077d352b3d73f5715ec24add\" pid:3805 exit_status:1 exited_at:{seconds:1750473095 nanos:419079555}"
Jun 21 02:31:35.647462 containerd[1503]: time="2025-06-21T02:31:35.647406166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7df8b759db-hnr2h,Uid:be2fb415-6189-47f7-9eb9-c0e76a8dec87,Namespace:calico-system,Attempt:0,}"
Jun 21 02:31:35.908110 systemd-networkd[1434]: calia2bfa216bca: Link UP
Jun 21 02:31:35.908976 systemd-networkd[1434]: calia2bfa216bca: Gained carrier
Jun 21 02:31:35.947150 containerd[1503]: 2025-06-21 02:31:35.669 [INFO][3821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jun 21 02:31:35.947150 containerd[1503]: 2025-06-21 02:31:35.706 [INFO][3821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7df8b759db--hnr2h-eth0 whisker-7df8b759db- calico-system be2fb415-6189-47f7-9eb9-c0e76a8dec87 922 0 2025-06-21 02:31:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7df8b759db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7df8b759db-hnr2h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia2bfa216bca [] [] }} ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-"
Jun 21 02:31:35.947150 containerd[1503]: 2025-06-21 02:31:35.706 [INFO][3821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0"
Jun 21 02:31:35.947150 containerd[1503]: 2025-06-21 02:31:35.835 [INFO][3835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" HandleID="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Workload="localhost-k8s-whisker--7df8b759db--hnr2h-eth0"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.836 [INFO][3835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" HandleID="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Workload="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7df8b759db-hnr2h", "timestamp":"2025-06-21 02:31:35.835884948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.836 [INFO][3835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.836 [INFO][3835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.836 [INFO][3835] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.848 [INFO][3835] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" host="localhost"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.860 [INFO][3835] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.873 [INFO][3835] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.876 [INFO][3835] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.879 [INFO][3835] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jun 21 02:31:35.947420 containerd[1503]: 2025-06-21 02:31:35.879 [INFO][3835] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" host="localhost"
Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.880 [INFO][3835] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8
Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.885 [INFO][3835] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" host="localhost"
Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.892 [INFO][3835] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26
handle="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" host="localhost" Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.892 [INFO][3835] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" host="localhost" Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.893 [INFO][3835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:35.947619 containerd[1503]: 2025-06-21 02:31:35.893 [INFO][3835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" HandleID="k8s-pod-network.036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Workload="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" Jun 21 02:31:35.947754 containerd[1503]: 2025-06-21 02:31:35.895 [INFO][3821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7df8b759db--hnr2h-eth0", GenerateName:"whisker-7df8b759db-", Namespace:"calico-system", SelfLink:"", UID:"be2fb415-6189-47f7-9eb9-c0e76a8dec87", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7df8b759db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7df8b759db-hnr2h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2bfa216bca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:35.947754 containerd[1503]: 2025-06-21 02:31:35.895 [INFO][3821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" Jun 21 02:31:35.947821 containerd[1503]: 2025-06-21 02:31:35.896 [INFO][3821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2bfa216bca ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" Jun 21 02:31:35.947821 containerd[1503]: 2025-06-21 02:31:35.909 [INFO][3821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" Jun 21 02:31:35.947862 containerd[1503]: 2025-06-21 02:31:35.909 [INFO][3821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" 
WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7df8b759db--hnr2h-eth0", GenerateName:"whisker-7df8b759db-", Namespace:"calico-system", SelfLink:"", UID:"be2fb415-6189-47f7-9eb9-c0e76a8dec87", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7df8b759db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8", Pod:"whisker-7df8b759db-hnr2h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2bfa216bca", MAC:"b6:21:09:d5:2b:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:35.947908 containerd[1503]: 2025-06-21 02:31:35.944 [INFO][3821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" Namespace="calico-system" Pod="whisker-7df8b759db-hnr2h" WorkloadEndpoint="localhost-k8s-whisker--7df8b759db--hnr2h-eth0" Jun 21 02:31:36.024578 containerd[1503]: time="2025-06-21T02:31:36.024512685Z" level=info msg="connecting to shim 
036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8" address="unix:///run/containerd/s/a7d6c8d2ae0cb54ea3ffb75e8ed8ea7f799988d2d61e74a54e7cce51783e4b26" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:36.058850 systemd[1]: Started cri-containerd-036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8.scope - libcontainer container 036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8. Jun 21 02:31:36.073744 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:36.094882 containerd[1503]: time="2025-06-21T02:31:36.094813898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7df8b759db-hnr2h,Uid:be2fb415-6189-47f7-9eb9-c0e76a8dec87,Namespace:calico-system,Attempt:0,} returns sandbox id \"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8\"" Jun 21 02:31:36.096484 containerd[1503]: time="2025-06-21T02:31:36.096454755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 02:31:36.155804 kubelet[2643]: I0621 02:31:36.155769 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6e677b-8d86-4279-989e-e2870085ea43" path="/var/lib/kubelet/pods/aa6e677b-8d86-4279-989e-e2870085ea43/volumes" Jun 21 02:31:36.465729 containerd[1503]: time="2025-06-21T02:31:36.465691200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\" id:\"dc0169a92b15be8134c8077375729283efd913a6bd8e2ce22cfec08ba5eb0956\" pid:4002 exit_status:1 exited_at:{seconds:1750473096 nanos:465301236}" Jun 21 02:31:36.680526 systemd-networkd[1434]: vxlan.calico: Link UP Jun 21 02:31:36.680532 systemd-networkd[1434]: vxlan.calico: Gained carrier Jun 21 02:31:37.342243 containerd[1503]: time="2025-06-21T02:31:37.342203949Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\" id:\"f9899e5d1fe1f6573decf97a40493f4d09002234d34cdb2166010ce79eab1f55\" pid:4126 exit_status:1 exited_at:{seconds:1750473097 nanos:341914506}" Jun 21 02:31:37.383539 containerd[1503]: time="2025-06-21T02:31:37.383485926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 21 02:31:37.387787 containerd[1503]: time="2025-06-21T02:31:37.387731169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 1.291238294s" Jun 21 02:31:37.387787 containerd[1503]: time="2025-06-21T02:31:37.387778010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 21 02:31:37.395430 containerd[1503]: time="2025-06-21T02:31:37.395385207Z" level=info msg="CreateContainer within sandbox \"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 02:31:37.397683 containerd[1503]: time="2025-06-21T02:31:37.397616789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:37.398301 containerd[1503]: time="2025-06-21T02:31:37.398264956Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:37.399016 containerd[1503]: time="2025-06-21T02:31:37.398982803Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:37.401774 containerd[1503]: time="2025-06-21T02:31:37.401731911Z" level=info msg="Container 2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:37.408324 containerd[1503]: time="2025-06-21T02:31:37.408282577Z" level=info msg="CreateContainer within sandbox \"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f\"" Jun 21 02:31:37.408915 containerd[1503]: time="2025-06-21T02:31:37.408886703Z" level=info msg="StartContainer for \"2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f\"" Jun 21 02:31:37.410898 containerd[1503]: time="2025-06-21T02:31:37.410858163Z" level=info msg="connecting to shim 2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f" address="unix:///run/containerd/s/a7d6c8d2ae0cb54ea3ffb75e8ed8ea7f799988d2d61e74a54e7cce51783e4b26" protocol=ttrpc version=3 Jun 21 02:31:37.435792 systemd[1]: Started cri-containerd-2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f.scope - libcontainer container 2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f. 
Jun 21 02:31:37.472922 containerd[1503]: time="2025-06-21T02:31:37.471136213Z" level=info msg="StartContainer for \"2f2dfcc9fede3d83e3ce7b3d66f095e19016e57cc512ef960ef9449bac12d13f\" returns successfully" Jun 21 02:31:37.472922 containerd[1503]: time="2025-06-21T02:31:37.472367226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 02:31:37.567781 systemd-networkd[1434]: calia2bfa216bca: Gained IPv6LL Jun 21 02:31:38.079775 systemd-networkd[1434]: vxlan.calico: Gained IPv6LL Jun 21 02:31:39.098487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947754045.mount: Deactivated successfully. Jun 21 02:31:39.117578 containerd[1503]: time="2025-06-21T02:31:39.117525013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:39.118342 containerd[1503]: time="2025-06-21T02:31:39.118301460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 21 02:31:39.121132 containerd[1503]: time="2025-06-21T02:31:39.121094407Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:39.123432 containerd[1503]: time="2025-06-21T02:31:39.123381949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:39.124428 containerd[1503]: time="2025-06-21T02:31:39.124290158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 1.651893171s" Jun 21 02:31:39.124428 containerd[1503]: time="2025-06-21T02:31:39.124327078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 21 02:31:39.128935 containerd[1503]: time="2025-06-21T02:31:39.128884522Z" level=info msg="CreateContainer within sandbox \"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 02:31:39.170337 containerd[1503]: time="2025-06-21T02:31:39.169801914Z" level=info msg="Container 1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:39.267201 containerd[1503]: time="2025-06-21T02:31:39.267136126Z" level=info msg="CreateContainer within sandbox \"036844449284c9bb7cf06bcd721922a79674895b0d01110a479dddd9b4dbb5a8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c\"" Jun 21 02:31:39.267830 containerd[1503]: time="2025-06-21T02:31:39.267770652Z" level=info msg="StartContainer for \"1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c\"" Jun 21 02:31:39.268846 containerd[1503]: time="2025-06-21T02:31:39.268814182Z" level=info msg="connecting to shim 1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c" address="unix:///run/containerd/s/a7d6c8d2ae0cb54ea3ffb75e8ed8ea7f799988d2d61e74a54e7cce51783e4b26" protocol=ttrpc version=3 Jun 21 02:31:39.297816 systemd[1]: Started cri-containerd-1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c.scope - libcontainer container 1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c. 
Jun 21 02:31:39.332832 containerd[1503]: time="2025-06-21T02:31:39.332784235Z" level=info msg="StartContainer for \"1a7a773fefe263a095b10309c1f23609a66d4e5816eff84491ff107e7842812c\" returns successfully" Jun 21 02:31:40.424075 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:48620.service - OpenSSH per-connection server daemon (10.0.0.1:48620). Jun 21 02:31:40.496636 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 48620 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:40.498060 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:40.503293 systemd-logind[1489]: New session 8 of user core. Jun 21 02:31:40.510789 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 02:31:40.713714 sshd[4230]: Connection closed by 10.0.0.1 port 48620 Jun 21 02:31:40.713822 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:40.717495 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:48620.service: Deactivated successfully. Jun 21 02:31:40.720194 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 02:31:40.721418 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Jun 21 02:31:40.722878 systemd-logind[1489]: Removed session 8. 
Jun 21 02:31:41.153340 kubelet[2643]: E0621 02:31:41.153289 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:41.153764 containerd[1503]: time="2025-06-21T02:31:41.153729205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25ql,Uid:b6f10c99-ef44-46ac-ab00-4e7306845019,Namespace:kube-system,Attempt:0,}" Jun 21 02:31:41.153938 containerd[1503]: time="2025-06-21T02:31:41.153768645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-mt5f5,Uid:ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:31:41.340977 systemd-networkd[1434]: calic8b1896bf51: Link UP Jun 21 02:31:41.341142 systemd-networkd[1434]: calic8b1896bf51: Gained carrier Jun 21 02:31:41.359064 kubelet[2643]: I0621 02:31:41.357758 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7df8b759db-hnr2h" podStartSLOduration=3.32887105 podStartE2EDuration="6.357740463s" podCreationTimestamp="2025-06-21 02:31:35 +0000 UTC" firstStartedPulling="2025-06-21 02:31:36.096198032 +0000 UTC m=+36.019667187" lastFinishedPulling="2025-06-21 02:31:39.125067485 +0000 UTC m=+39.048536600" observedRunningTime="2025-06-21 02:31:40.313379074 +0000 UTC m=+40.236848309" watchObservedRunningTime="2025-06-21 02:31:41.357740463 +0000 UTC m=+41.281209618" Jun 21 02:31:41.359485 containerd[1503]: 2025-06-21 02:31:41.219 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0 calico-apiserver-74b6748ff7- calico-apiserver ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa 859 0 2025-06-21 02:31:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b6748ff7 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b6748ff7-mt5f5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic8b1896bf51 [] [] }} ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-" Jun 21 02:31:41.359485 containerd[1503]: 2025-06-21 02:31:41.219 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.359485 containerd[1503]: 2025-06-21 02:31:41.268 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" HandleID="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Workload="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.269 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" HandleID="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Workload="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b6748ff7-mt5f5", "timestamp":"2025-06-21 02:31:41.268915454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.269 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.269 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.269 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.281 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" host="localhost" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.286 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.298 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.308 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.314 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:41.359648 containerd[1503]: 2025-06-21 02:31:41.318 [INFO][4275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" host="localhost" Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.320 [INFO][4275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305 Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.328 [INFO][4275] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" host="localhost" Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" host="localhost" Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" host="localhost" Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:41.359881 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" HandleID="k8s-pod-network.ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Workload="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.360104 containerd[1503]: 2025-06-21 02:31:41.336 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0", GenerateName:"calico-apiserver-74b6748ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa", ResourceVersion:"859", Generation:0, 
CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b6748ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b6748ff7-mt5f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8b1896bf51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:41.360164 containerd[1503]: 2025-06-21 02:31:41.337 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.360164 containerd[1503]: 2025-06-21 02:31:41.337 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8b1896bf51 ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.360164 containerd[1503]: 2025-06-21 02:31:41.341 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.360231 containerd[1503]: 2025-06-21 02:31:41.342 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0", GenerateName:"calico-apiserver-74b6748ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b6748ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305", Pod:"calico-apiserver-74b6748ff7-mt5f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8b1896bf51", MAC:"72:f2:0f:62:88:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:41.360283 containerd[1503]: 2025-06-21 02:31:41.353 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-mt5f5" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--mt5f5-eth0" Jun 21 02:31:41.404959 containerd[1503]: time="2025-06-21T02:31:41.404804771Z" level=info msg="connecting to shim ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305" address="unix:///run/containerd/s/cd0ab168362a54ade65fcae01c9317c0e2718860dcdb653ed655b81e0dee2dae" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:41.417908 systemd-networkd[1434]: caliccd21ecb789: Link UP Jun 21 02:31:41.419320 systemd-networkd[1434]: caliccd21ecb789: Gained carrier Jun 21 02:31:41.434654 containerd[1503]: 2025-06-21 02:31:41.219 [INFO][4244] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--l25ql-eth0 coredns-674b8bbfcf- kube-system b6f10c99-ef44-46ac-ab00-4e7306845019 854 0 2025-06-21 02:31:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-l25ql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliccd21ecb789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-" Jun 21 02:31:41.434654 containerd[1503]: 2025-06-21 
02:31:41.219 [INFO][4244] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.434654 containerd[1503]: 2025-06-21 02:31:41.271 [INFO][4276] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" HandleID="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Workload="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.272 [INFO][4276] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" HandleID="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Workload="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-l25ql", "timestamp":"2025-06-21 02:31:41.271865441 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.272 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.334 [INFO][4276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.382 [INFO][4276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" host="localhost" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.388 [INFO][4276] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.394 [INFO][4276] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.396 [INFO][4276] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.398 [INFO][4276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:41.434830 containerd[1503]: 2025-06-21 02:31:41.398 [INFO][4276] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" host="localhost" Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.399 [INFO][4276] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155 Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.403 [INFO][4276] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" host="localhost" Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.412 [INFO][4276] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" host="localhost" Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.412 [INFO][4276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" host="localhost" Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.412 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:41.435189 containerd[1503]: 2025-06-21 02:31:41.412 [INFO][4276] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" HandleID="k8s-pod-network.1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Workload="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.435356 containerd[1503]: 2025-06-21 02:31:41.414 [INFO][4244] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l25ql-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6f10c99-ef44-46ac-ab00-4e7306845019", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-l25ql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccd21ecb789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:41.435455 containerd[1503]: 2025-06-21 02:31:41.415 [INFO][4244] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.435455 containerd[1503]: 2025-06-21 02:31:41.415 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd21ecb789 ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.435455 containerd[1503]: 2025-06-21 02:31:41.420 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.435593 containerd[1503]: 2025-06-21 02:31:41.421 [INFO][4244] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l25ql-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6f10c99-ef44-46ac-ab00-4e7306845019", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155", Pod:"coredns-674b8bbfcf-l25ql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccd21ecb789", MAC:"ba:df:72:d3:81:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:41.435593 containerd[1503]: 2025-06-21 02:31:41.431 [INFO][4244] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" Namespace="kube-system" Pod="coredns-674b8bbfcf-l25ql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l25ql-eth0" Jun 21 02:31:41.437817 systemd[1]: Started cri-containerd-ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305.scope - libcontainer container ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305. Jun 21 02:31:41.455256 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:41.469880 containerd[1503]: time="2025-06-21T02:31:41.469838844Z" level=info msg="connecting to shim 1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155" address="unix:///run/containerd/s/21422a7af64186b2efaaf5ada9315eb1ca8b3d07756a04aa3bf34cb7894f5d78" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:41.482020 containerd[1503]: time="2025-06-21T02:31:41.481985594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-mt5f5,Uid:ff0a7a3b-25a5-45d3-8a68-8a89da6c22aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305\"" Jun 21 02:31:41.485331 containerd[1503]: time="2025-06-21T02:31:41.484979262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 02:31:41.495774 systemd[1]: Started cri-containerd-1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155.scope - libcontainer container 
1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155. Jun 21 02:31:41.507057 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:41.526961 containerd[1503]: time="2025-06-21T02:31:41.526928324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25ql,Uid:b6f10c99-ef44-46ac-ab00-4e7306845019,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155\"" Jun 21 02:31:41.527876 kubelet[2643]: E0621 02:31:41.527852 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:41.533016 containerd[1503]: time="2025-06-21T02:31:41.532983459Z" level=info msg="CreateContainer within sandbox \"1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 02:31:41.540394 containerd[1503]: time="2025-06-21T02:31:41.540337526Z" level=info msg="Container 4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:41.544800 containerd[1503]: time="2025-06-21T02:31:41.544761526Z" level=info msg="CreateContainer within sandbox \"1b41abe58b491292a179e04310607cc1bbcb46179178332859a39c4761e2f155\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34\"" Jun 21 02:31:41.545431 containerd[1503]: time="2025-06-21T02:31:41.545393012Z" level=info msg="StartContainer for \"4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34\"" Jun 21 02:31:41.546259 containerd[1503]: time="2025-06-21T02:31:41.546234380Z" level=info msg="connecting to shim 4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34" 
address="unix:///run/containerd/s/21422a7af64186b2efaaf5ada9315eb1ca8b3d07756a04aa3bf34cb7894f5d78" protocol=ttrpc version=3 Jun 21 02:31:41.568775 systemd[1]: Started cri-containerd-4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34.scope - libcontainer container 4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34. Jun 21 02:31:41.598345 containerd[1503]: time="2025-06-21T02:31:41.598308294Z" level=info msg="StartContainer for \"4116e9dca3f1ae074d97fe96eff278e616c6ea8a21e353da97b5c2cda8e6dd34\" returns successfully" Jun 21 02:31:42.154472 containerd[1503]: time="2025-06-21T02:31:42.154423326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vk5dt,Uid:0f3a877e-df8f-466c-a544-3a7180344d8d,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:42.270787 systemd-networkd[1434]: calicc3f1f2071a: Link UP Jun 21 02:31:42.271675 systemd-networkd[1434]: calicc3f1f2071a: Gained carrier Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.194 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vk5dt-eth0 csi-node-driver- calico-system 0f3a877e-df8f-466c-a544-3a7180344d8d 744 0 2025-06-21 02:31:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vk5dt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicc3f1f2071a [] [] }} ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.194 [INFO][4440] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.218 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" HandleID="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Workload="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.218 [INFO][4453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" HandleID="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Workload="localhost-k8s-csi--node--driver--vk5dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vk5dt", "timestamp":"2025-06-21 02:31:42.218327454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.218 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.218 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.218 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.228 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.233 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.238 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.240 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.243 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.243 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.244 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172 Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.253 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.266 [INFO][4453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.266 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" host="localhost" Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.266 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:42.291612 containerd[1503]: 2025-06-21 02:31:42.266 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" HandleID="k8s-pod-network.4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Workload="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.268 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vk5dt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f3a877e-df8f-466c-a544-3a7180344d8d", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vk5dt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicc3f1f2071a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.268 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.268 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc3f1f2071a ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.271 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.271 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" 
Namespace="calico-system" Pod="csi-node-driver-vk5dt" WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vk5dt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f3a877e-df8f-466c-a544-3a7180344d8d", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172", Pod:"csi-node-driver-vk5dt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicc3f1f2071a", MAC:"c2:0d:7f:00:25:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:42.292550 containerd[1503]: 2025-06-21 02:31:42.288 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" Namespace="calico-system" Pod="csi-node-driver-vk5dt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vk5dt-eth0" Jun 21 02:31:42.313252 containerd[1503]: time="2025-06-21T02:31:42.312925735Z" level=info msg="connecting to shim 4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172" address="unix:///run/containerd/s/0437429c670acebd3a3d7ab986183ae7d05edeaae8ce978e8487f1459e5b619e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:42.325608 kubelet[2643]: E0621 02:31:42.325563 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:42.340220 kubelet[2643]: I0621 02:31:42.339997 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l25ql" podStartSLOduration=36.339983176 podStartE2EDuration="36.339983176s" podCreationTimestamp="2025-06-21 02:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:42.339162769 +0000 UTC m=+42.262632124" watchObservedRunningTime="2025-06-21 02:31:42.339983176 +0000 UTC m=+42.263452331" Jun 21 02:31:42.344095 systemd[1]: Started cri-containerd-4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172.scope - libcontainer container 4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172. 
Jun 21 02:31:42.367763 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:42.384700 containerd[1503]: time="2025-06-21T02:31:42.384573253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vk5dt,Uid:0f3a877e-df8f-466c-a544-3a7180344d8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172\"" Jun 21 02:31:43.165074 containerd[1503]: time="2025-06-21T02:31:43.164823558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b7d797b-sqcm9,Uid:7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:43.309992 systemd-networkd[1434]: cali1d0ad391617: Link UP Jun 21 02:31:43.311152 systemd-networkd[1434]: cali1d0ad391617: Gained carrier Jun 21 02:31:43.329093 systemd-networkd[1434]: calic8b1896bf51: Gained IPv6LL Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.231 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0 calico-kube-controllers-7d6b7d797b- calico-system 7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c 853 0 2025-06-21 02:31:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d6b7d797b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d6b7d797b-sqcm9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1d0ad391617 [] [] }} ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-" 
Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.231 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.266 [INFO][4540] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" HandleID="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Workload="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.266 [INFO][4540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" HandleID="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Workload="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d52f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d6b7d797b-sqcm9", "timestamp":"2025-06-21 02:31:43.266415841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.266 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.266 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.266 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.276 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.281 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.287 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.289 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.291 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.291 [INFO][4540] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.293 [INFO][4540] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.297 [INFO][4540] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.303 [INFO][4540] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.303 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" host="localhost" Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.303 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:43.331056 containerd[1503]: 2025-06-21 02:31:43.303 [INFO][4540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" HandleID="k8s-pod-network.fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Workload="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.332076 containerd[1503]: 2025-06-21 02:31:43.306 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0", GenerateName:"calico-kube-controllers-7d6b7d797b-", Namespace:"calico-system", SelfLink:"", UID:"7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6b7d797b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d6b7d797b-sqcm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d0ad391617", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:43.332076 containerd[1503]: 2025-06-21 02:31:43.306 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.332076 containerd[1503]: 2025-06-21 02:31:43.306 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d0ad391617 ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.332076 containerd[1503]: 2025-06-21 02:31:43.311 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.332076 containerd[1503]: 
2025-06-21 02:31:43.312 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0", GenerateName:"calico-kube-controllers-7d6b7d797b-", Namespace:"calico-system", SelfLink:"", UID:"7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6b7d797b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a", Pod:"calico-kube-controllers-7d6b7d797b-sqcm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1d0ad391617", MAC:"86:1f:5f:64:39:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:43.332076 containerd[1503]: 
2025-06-21 02:31:43.326 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" Namespace="calico-system" Pod="calico-kube-controllers-7d6b7d797b-sqcm9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b7d797b--sqcm9-eth0" Jun 21 02:31:43.336475 kubelet[2643]: E0621 02:31:43.336438 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:43.391140 containerd[1503]: time="2025-06-21T02:31:43.391096805Z" level=info msg="connecting to shim fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a" address="unix:///run/containerd/s/6deb58ad0bd8a2810cf18dd742cf7946ef819d94b16567a4b4568652fe3ace37" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:43.395410 containerd[1503]: time="2025-06-21T02:31:43.395373442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:43.396092 containerd[1503]: time="2025-06-21T02:31:43.396019728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 21 02:31:43.396682 containerd[1503]: time="2025-06-21T02:31:43.396655173Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:43.398685 containerd[1503]: time="2025-06-21T02:31:43.398659471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:43.400354 containerd[1503]: time="2025-06-21T02:31:43.400293485Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 1.915274023s" Jun 21 02:31:43.400354 containerd[1503]: time="2025-06-21T02:31:43.400329645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 21 02:31:43.404279 containerd[1503]: time="2025-06-21T02:31:43.404029518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 02:31:43.406802 containerd[1503]: time="2025-06-21T02:31:43.406711821Z" level=info msg="CreateContainer within sandbox \"ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:31:43.414840 containerd[1503]: time="2025-06-21T02:31:43.414805811Z" level=info msg="Container d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:43.419063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984560775.mount: Deactivated successfully. Jun 21 02:31:43.429822 systemd[1]: Started cri-containerd-fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a.scope - libcontainer container fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a. 
Jun 21 02:31:43.430077 containerd[1503]: time="2025-06-21T02:31:43.429850742Z" level=info msg="CreateContainer within sandbox \"ef0c60586a87db0db2564276930dabd64221d303a9005e7e2b707b0199399305\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce\"" Jun 21 02:31:43.430405 containerd[1503]: time="2025-06-21T02:31:43.430380547Z" level=info msg="StartContainer for \"d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce\"" Jun 21 02:31:43.431891 containerd[1503]: time="2025-06-21T02:31:43.431864719Z" level=info msg="connecting to shim d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce" address="unix:///run/containerd/s/cd0ab168362a54ade65fcae01c9317c0e2718860dcdb653ed655b81e0dee2dae" protocol=ttrpc version=3 Jun 21 02:31:43.444802 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:43.451771 systemd[1]: Started cri-containerd-d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce.scope - libcontainer container d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce. 
Jun 21 02:31:43.456830 systemd-networkd[1434]: caliccd21ecb789: Gained IPv6LL Jun 21 02:31:43.470175 containerd[1503]: time="2025-06-21T02:31:43.470120572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b7d797b-sqcm9,Uid:7e146eb6-a0ed-4bdb-9df2-50f1ec051e3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a\"" Jun 21 02:31:43.494475 containerd[1503]: time="2025-06-21T02:31:43.494436423Z" level=info msg="StartContainer for \"d41bfbe939b55f1e96b671458096b8e01f2924c70fa93675560a6ad57be4b0ce\" returns successfully" Jun 21 02:31:44.031797 systemd-networkd[1434]: calicc3f1f2071a: Gained IPv6LL Jun 21 02:31:44.154804 containerd[1503]: time="2025-06-21T02:31:44.154476331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-q76kv,Uid:021d921e-3930-4440-b9e8-6b2ebdeb9caa,Namespace:calico-system,Attempt:0,}" Jun 21 02:31:44.157316 containerd[1503]: time="2025-06-21T02:31:44.155977504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-9cgwr,Uid:407d696e-6743-4f48-9ba6-3d9f1e8e2a69,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:31:44.273950 systemd-networkd[1434]: cali88485595725: Link UP Jun 21 02:31:44.274277 systemd-networkd[1434]: cali88485595725: Gained carrier Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.201 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5bd85449d4--q76kv-eth0 goldmane-5bd85449d4- calico-system 021d921e-3930-4440-b9e8-6b2ebdeb9caa 857 0 2025-06-21 02:31:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5bd85449d4-q76kv eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali88485595725 [] [] }} ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.201 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.228 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" HandleID="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Workload="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.228 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" HandleID="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Workload="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5bd85449d4-q76kv", "timestamp":"2025-06-21 02:31:44.228707722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.228 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.229 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.229 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.241 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.246 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.253 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.255 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.257 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.257 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.258 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2 Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.261 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4670] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" host="localhost" Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:44.289275 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" HandleID="k8s-pod-network.0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Workload="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.272 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--q76kv-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"021d921e-3930-4440-b9e8-6b2ebdeb9caa", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5bd85449d4-q76kv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali88485595725", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.272 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.272 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88485595725 ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.274 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.274 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" 
Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--q76kv-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"021d921e-3930-4440-b9e8-6b2ebdeb9caa", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2", Pod:"goldmane-5bd85449d4-q76kv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali88485595725", MAC:"52:00:fa:4e:a9:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:44.290870 containerd[1503]: 2025-06-21 02:31:44.284 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" Namespace="calico-system" Pod="goldmane-5bd85449d4-q76kv" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--q76kv-eth0" Jun 21 02:31:44.333742 containerd[1503]: 
time="2025-06-21T02:31:44.333636255Z" level=info msg="connecting to shim 0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2" address="unix:///run/containerd/s/7430e86c14371c67641ab23bfc88be9b90e7adc1692059d177728dc2fc201015" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:44.359226 systemd[1]: Started cri-containerd-0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2.scope - libcontainer container 0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2. Jun 21 02:31:44.392396 systemd-networkd[1434]: cali42bb89fc978: Link UP Jun 21 02:31:44.393001 systemd-networkd[1434]: cali42bb89fc978: Gained carrier Jun 21 02:31:44.401963 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:44.407317 kubelet[2643]: I0621 02:31:44.407225 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b6748ff7-mt5f5" podStartSLOduration=26.488007023 podStartE2EDuration="28.40716052s" podCreationTimestamp="2025-06-21 02:31:16 +0000 UTC" firstStartedPulling="2025-06-21 02:31:41.484741619 +0000 UTC m=+41.408210774" lastFinishedPulling="2025-06-21 02:31:43.403895116 +0000 UTC m=+43.327364271" observedRunningTime="2025-06-21 02:31:44.359855677 +0000 UTC m=+44.283324832" watchObservedRunningTime="2025-06-21 02:31:44.40716052 +0000 UTC m=+44.330629675" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.201 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0 calico-apiserver-74b6748ff7- calico-apiserver 407d696e-6743-4f48-9ba6-3d9f1e8e2a69 856 0 2025-06-21 02:31:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b6748ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b6748ff7-9cgwr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42bb89fc978 [] [] }} ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.202 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.236 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" HandleID="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Workload="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.236 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" HandleID="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Workload="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b6748ff7-9cgwr", "timestamp":"2025-06-21 02:31:44.236423948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.236 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.267 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.343 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.352 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.362 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.365 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.369 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.369 [INFO][4676] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.372 [INFO][4676] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.378 [INFO][4676] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.385 [INFO][4676] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.385 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" host="localhost" Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.385 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:44.410702 containerd[1503]: 2025-06-21 02:31:44.385 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" HandleID="k8s-pod-network.a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Workload="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.387 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0", GenerateName:"calico-apiserver-74b6748ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"407d696e-6743-4f48-9ba6-3d9f1e8e2a69", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 16, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b6748ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b6748ff7-9cgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42bb89fc978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.388 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.388 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42bb89fc978 ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.393 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.393 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0", GenerateName:"calico-apiserver-74b6748ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"407d696e-6743-4f48-9ba6-3d9f1e8e2a69", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b6748ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e", Pod:"calico-apiserver-74b6748ff7-9cgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42bb89fc978", MAC:"ea:aa:7f:75:85:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:44.411296 containerd[1503]: 2025-06-21 02:31:44.406 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" Namespace="calico-apiserver" Pod="calico-apiserver-74b6748ff7-9cgwr" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b6748ff7--9cgwr-eth0" Jun 21 02:31:44.433904 containerd[1503]: time="2025-06-21T02:31:44.433865667Z" level=info msg="connecting to shim a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e" address="unix:///run/containerd/s/27398fe6c78975e8d6d1c5fc5cda506f50f9cf58de5854aecaca2ae8a20383df" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:44.434225 containerd[1503]: time="2025-06-21T02:31:44.434046028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-q76kv,Uid:021d921e-3930-4440-b9e8-6b2ebdeb9caa,Namespace:calico-system,Attempt:0,} returns sandbox id \"0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2\"" Jun 21 02:31:44.461791 systemd[1]: Started cri-containerd-a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e.scope - libcontainer container a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e. 
Jun 21 02:31:44.472480 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:44.491982 containerd[1503]: time="2025-06-21T02:31:44.491948081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b6748ff7-9cgwr,Uid:407d696e-6743-4f48-9ba6-3d9f1e8e2a69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e\"" Jun 21 02:31:44.496566 containerd[1503]: time="2025-06-21T02:31:44.496515840Z" level=info msg="CreateContainer within sandbox \"a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:31:44.502665 containerd[1503]: time="2025-06-21T02:31:44.502122167Z" level=info msg="Container bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:44.508768 containerd[1503]: time="2025-06-21T02:31:44.508735903Z" level=info msg="CreateContainer within sandbox \"a656d1e792d474b3c06d0acc19602be94fb1b429490ce9233f9e82b45b20463e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c\"" Jun 21 02:31:44.509428 containerd[1503]: time="2025-06-21T02:31:44.509399669Z" level=info msg="StartContainer for \"bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c\"" Jun 21 02:31:44.510431 containerd[1503]: time="2025-06-21T02:31:44.510398638Z" level=info msg="connecting to shim bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c" address="unix:///run/containerd/s/27398fe6c78975e8d6d1c5fc5cda506f50f9cf58de5854aecaca2ae8a20383df" protocol=ttrpc version=3 Jun 21 02:31:44.530810 systemd[1]: Started cri-containerd-bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c.scope - libcontainer container 
bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c. Jun 21 02:31:44.573244 containerd[1503]: time="2025-06-21T02:31:44.571983641Z" level=info msg="StartContainer for \"bd81b40121434bc5b3464d98fcdc94f66bafc0e2844394965da5866774c2476c\" returns successfully" Jun 21 02:31:44.723708 containerd[1503]: time="2025-06-21T02:31:44.723660971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:44.724394 containerd[1503]: time="2025-06-21T02:31:44.724260336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 21 02:31:44.725063 containerd[1503]: time="2025-06-21T02:31:44.725025703Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:44.728029 containerd[1503]: time="2025-06-21T02:31:44.727245761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:44.728029 containerd[1503]: time="2025-06-21T02:31:44.727663525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 1.323603167s" Jun 21 02:31:44.728029 containerd[1503]: time="2025-06-21T02:31:44.727697325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 21 02:31:44.732642 containerd[1503]: time="2025-06-21T02:31:44.732492086Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 02:31:44.735536 containerd[1503]: time="2025-06-21T02:31:44.735480751Z" level=info msg="CreateContainer within sandbox \"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 02:31:44.747551 containerd[1503]: time="2025-06-21T02:31:44.747508854Z" level=info msg="Container 901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:44.753891 containerd[1503]: time="2025-06-21T02:31:44.753860108Z" level=info msg="CreateContainer within sandbox \"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1\"" Jun 21 02:31:44.754378 containerd[1503]: time="2025-06-21T02:31:44.754358072Z" level=info msg="StartContainer for \"901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1\"" Jun 21 02:31:44.756125 containerd[1503]: time="2025-06-21T02:31:44.756060966Z" level=info msg="connecting to shim 901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1" address="unix:///run/containerd/s/0437429c670acebd3a3d7ab986183ae7d05edeaae8ce978e8487f1459e5b619e" protocol=ttrpc version=3 Jun 21 02:31:44.776039 systemd[1]: Started cri-containerd-901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1.scope - libcontainer container 901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1. 
Jun 21 02:31:44.820027 containerd[1503]: time="2025-06-21T02:31:44.819984190Z" level=info msg="StartContainer for \"901df002624c80d29da9ccfb1c17ff62b98c7a7ad440e5c8e1170450eb572db1\" returns successfully" Jun 21 02:31:45.154027 kubelet[2643]: E0621 02:31:45.153993 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:45.154918 containerd[1503]: time="2025-06-21T02:31:45.154883250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txtn6,Uid:e883174d-6987-420b-b1e2-4112f48f5a12,Namespace:kube-system,Attempt:0,}" Jun 21 02:31:45.247871 systemd-networkd[1434]: cali1d0ad391617: Gained IPv6LL Jun 21 02:31:45.290288 systemd-networkd[1434]: cali318c1e87f64: Link UP Jun 21 02:31:45.290994 systemd-networkd[1434]: cali318c1e87f64: Gained carrier Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.207 [INFO][4866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--txtn6-eth0 coredns-674b8bbfcf- kube-system e883174d-6987-420b-b1e2-4112f48f5a12 855 0 2025-06-21 02:31:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-txtn6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali318c1e87f64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.207 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.239 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" HandleID="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Workload="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.239 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" HandleID="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Workload="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-txtn6", "timestamp":"2025-06-21 02:31:45.239343994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.239 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.239 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.239 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.251 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.259 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.266 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.269 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.272 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.272 [INFO][4880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.273 [INFO][4880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6 Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.277 [INFO][4880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.283 [INFO][4880] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.283 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" host="localhost" Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.283 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:31:45.308561 containerd[1503]: 2025-06-21 02:31:45.283 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" HandleID="k8s-pod-network.f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Workload="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.287 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--txtn6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e883174d-6987-420b-b1e2-4112f48f5a12", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-txtn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali318c1e87f64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.288 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.288 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali318c1e87f64 ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.291 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.292 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--txtn6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e883174d-6987-420b-b1e2-4112f48f5a12", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6", Pod:"coredns-674b8bbfcf-txtn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali318c1e87f64", MAC:"16:5b:15:54:cc:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:31:45.310420 containerd[1503]: 2025-06-21 02:31:45.304 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-txtn6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--txtn6-eth0" Jun 21 02:31:45.343130 containerd[1503]: time="2025-06-21T02:31:45.343078417Z" level=info msg="connecting to shim f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6" address="unix:///run/containerd/s/43271f3400b725be9d0bab071a324c541f0922968c04ed2f0838e9274af9064f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:31:45.373804 kubelet[2643]: I0621 02:31:45.373056 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:31:45.384326 systemd[1]: Started cri-containerd-f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6.scope - libcontainer container f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6. 
Jun 21 02:31:45.401338 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:31:45.444274 containerd[1503]: time="2025-06-21T02:31:45.443422733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txtn6,Uid:e883174d-6987-420b-b1e2-4112f48f5a12,Namespace:kube-system,Attempt:0,} returns sandbox id \"f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6\"" Jun 21 02:31:45.445510 kubelet[2643]: E0621 02:31:45.445478 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:45.450464 containerd[1503]: time="2025-06-21T02:31:45.450356991Z" level=info msg="CreateContainer within sandbox \"f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 02:31:45.461920 containerd[1503]: time="2025-06-21T02:31:45.460823918Z" level=info msg="Container 36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:45.472950 containerd[1503]: time="2025-06-21T02:31:45.472904498Z" level=info msg="CreateContainer within sandbox \"f888eed289ca56700f83380a1c8dbc6eda2c8fe1e254a60ab2790e788a5dc0a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59\"" Jun 21 02:31:45.473591 containerd[1503]: time="2025-06-21T02:31:45.473556704Z" level=info msg="StartContainer for \"36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59\"" Jun 21 02:31:45.475303 containerd[1503]: time="2025-06-21T02:31:45.475267678Z" level=info msg="connecting to shim 36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59" address="unix:///run/containerd/s/43271f3400b725be9d0bab071a324c541f0922968c04ed2f0838e9274af9064f" protocol=ttrpc version=3 
Jun 21 02:31:45.506829 systemd[1]: Started cri-containerd-36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59.scope - libcontainer container 36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59. Jun 21 02:31:45.547959 containerd[1503]: time="2025-06-21T02:31:45.547847922Z" level=info msg="StartContainer for \"36caa44c3479368ab4bc21a9d05b695fb99d828c3934fbb7bff89283fa47fb59\" returns successfully" Jun 21 02:31:45.733099 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:55704.service - OpenSSH per-connection server daemon (10.0.0.1:55704). Jun 21 02:31:45.760724 systemd-networkd[1434]: cali42bb89fc978: Gained IPv6LL Jun 21 02:31:45.823921 systemd-networkd[1434]: cali88485595725: Gained IPv6LL Jun 21 02:31:45.825127 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 55704 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:45.829149 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:45.837869 systemd-logind[1489]: New session 9 of user core. Jun 21 02:31:45.857878 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 02:31:46.168643 sshd[4985]: Connection closed by 10.0.0.1 port 55704 Jun 21 02:31:46.169098 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:46.175200 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:55704.service: Deactivated successfully. Jun 21 02:31:46.179224 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 02:31:46.180479 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Jun 21 02:31:46.182198 systemd-logind[1489]: Removed session 9. 
Jun 21 02:31:46.375022 kubelet[2643]: I0621 02:31:46.374993 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:31:46.375474 kubelet[2643]: E0621 02:31:46.375445 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:46.390879 kubelet[2643]: I0621 02:31:46.390811 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b6748ff7-9cgwr" podStartSLOduration=30.390792196 podStartE2EDuration="30.390792196s" podCreationTimestamp="2025-06-21 02:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:45.373570671 +0000 UTC m=+45.297039826" watchObservedRunningTime="2025-06-21 02:31:46.390792196 +0000 UTC m=+46.314261311" Jun 21 02:31:46.391138 kubelet[2643]: I0621 02:31:46.391094 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-txtn6" podStartSLOduration=39.391085359 podStartE2EDuration="39.391085359s" podCreationTimestamp="2025-06-21 02:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:31:46.389194103 +0000 UTC m=+46.312663258" watchObservedRunningTime="2025-06-21 02:31:46.391085359 +0000 UTC m=+46.314554514" Jun 21 02:31:46.717670 containerd[1503]: time="2025-06-21T02:31:46.717614503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:46.718811 containerd[1503]: time="2025-06-21T02:31:46.718786793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 21 02:31:46.719505 containerd[1503]: 
time="2025-06-21T02:31:46.719479398Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:46.721931 containerd[1503]: time="2025-06-21T02:31:46.721893898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:46.722480 containerd[1503]: time="2025-06-21T02:31:46.722302461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 1.989775335s" Jun 21 02:31:46.722480 containerd[1503]: time="2025-06-21T02:31:46.722331022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 21 02:31:46.724607 containerd[1503]: time="2025-06-21T02:31:46.724574440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 02:31:46.737248 containerd[1503]: time="2025-06-21T02:31:46.737210703Z" level=info msg="CreateContainer within sandbox \"fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 02:31:46.746493 containerd[1503]: time="2025-06-21T02:31:46.746430698Z" level=info msg="Container 12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:46.753610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548687487.mount: Deactivated 
successfully. Jun 21 02:31:46.758087 containerd[1503]: time="2025-06-21T02:31:46.758012353Z" level=info msg="CreateContainer within sandbox \"fb7b3308dd1efa425c83c81635664dffe142e83b5381102b03889711ad54277a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014\"" Jun 21 02:31:46.759195 containerd[1503]: time="2025-06-21T02:31:46.759151802Z" level=info msg="StartContainer for \"12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014\"" Jun 21 02:31:46.760537 containerd[1503]: time="2025-06-21T02:31:46.760506373Z" level=info msg="connecting to shim 12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014" address="unix:///run/containerd/s/6deb58ad0bd8a2810cf18dd742cf7946ef819d94b16567a4b4568652fe3ace37" protocol=ttrpc version=3 Jun 21 02:31:46.781910 systemd[1]: Started cri-containerd-12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014.scope - libcontainer container 12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014. 
Jun 21 02:31:46.849904 containerd[1503]: time="2025-06-21T02:31:46.849783942Z" level=info msg="StartContainer for \"12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014\" returns successfully" Jun 21 02:31:47.231794 systemd-networkd[1434]: cali318c1e87f64: Gained IPv6LL Jun 21 02:31:47.378258 kubelet[2643]: E0621 02:31:47.378214 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:47.390884 kubelet[2643]: I0621 02:31:47.390615 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d6b7d797b-sqcm9" podStartSLOduration=23.14013686 podStartE2EDuration="26.390598694s" podCreationTimestamp="2025-06-21 02:31:21 +0000 UTC" firstStartedPulling="2025-06-21 02:31:43.473978845 +0000 UTC m=+43.397448000" lastFinishedPulling="2025-06-21 02:31:46.724440719 +0000 UTC m=+46.647909834" observedRunningTime="2025-06-21 02:31:47.390588974 +0000 UTC m=+47.314058249" watchObservedRunningTime="2025-06-21 02:31:47.390598694 +0000 UTC m=+47.314067849" Jun 21 02:31:48.327990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024308346.mount: Deactivated successfully. 
Jun 21 02:31:48.381526 kubelet[2643]: E0621 02:31:48.381492 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:31:48.437153 containerd[1503]: time="2025-06-21T02:31:48.437105888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12fa95519e4dc8255526d391b4ac806fd21c04b30a58ef720537103c56da9014\" id:\"33a3b1a06728df63102e35fbab46bcc744fd25b55cd0c92018a7eaa822461ee5\" pid:5084 exited_at:{seconds:1750473108 nanos:436093600}" Jun 21 02:31:48.692092 containerd[1503]: time="2025-06-21T02:31:48.691951091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:48.692605 containerd[1503]: time="2025-06-21T02:31:48.692578296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 21 02:31:48.693498 containerd[1503]: time="2025-06-21T02:31:48.693440983Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:48.696129 containerd[1503]: time="2025-06-21T02:31:48.696092484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:48.696961 containerd[1503]: time="2025-06-21T02:31:48.696864410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 1.97226101s" Jun 21 
02:31:48.696961 containerd[1503]: time="2025-06-21T02:31:48.696899930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 21 02:31:48.698422 containerd[1503]: time="2025-06-21T02:31:48.698284221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 21 02:31:48.702644 containerd[1503]: time="2025-06-21T02:31:48.702596375Z" level=info msg="CreateContainer within sandbox \"0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 02:31:48.709887 containerd[1503]: time="2025-06-21T02:31:48.709787231Z" level=info msg="Container 511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:48.722595 containerd[1503]: time="2025-06-21T02:31:48.722546491Z" level=info msg="CreateContainer within sandbox \"0772947024e51fa419df68568f137d31710114179406503c94a98a32a472ebf2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\"" Jun 21 02:31:48.723615 containerd[1503]: time="2025-06-21T02:31:48.723376418Z" level=info msg="StartContainer for \"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\"" Jun 21 02:31:48.726977 containerd[1503]: time="2025-06-21T02:31:48.726467322Z" level=info msg="connecting to shim 511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa" address="unix:///run/containerd/s/7430e86c14371c67641ab23bfc88be9b90e7adc1692059d177728dc2fc201015" protocol=ttrpc version=3 Jun 21 02:31:48.752038 systemd[1]: Started cri-containerd-511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa.scope - libcontainer container 511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa. 
Jun 21 02:31:48.808129 containerd[1503]: time="2025-06-21T02:31:48.808091164Z" level=info msg="StartContainer for \"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\" returns successfully" Jun 21 02:31:49.398474 kubelet[2643]: I0621 02:31:49.398209 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-q76kv" podStartSLOduration=24.13582417 podStartE2EDuration="28.398191707s" podCreationTimestamp="2025-06-21 02:31:21 +0000 UTC" firstStartedPulling="2025-06-21 02:31:44.435575081 +0000 UTC m=+44.359044236" lastFinishedPulling="2025-06-21 02:31:48.697942618 +0000 UTC m=+48.621411773" observedRunningTime="2025-06-21 02:31:49.397908065 +0000 UTC m=+49.321377220" watchObservedRunningTime="2025-06-21 02:31:49.398191707 +0000 UTC m=+49.321660902" Jun 21 02:31:49.484345 containerd[1503]: time="2025-06-21T02:31:49.484306532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\" id:\"8f38f6f73cb08458a300c09f60716852108aa5bd3707b1455136cb7e0465b7ec\" pid:5147 exit_status:1 exited_at:{seconds:1750473109 nanos:483785968}" Jun 21 02:31:49.991579 containerd[1503]: time="2025-06-21T02:31:49.991526169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:50.000185 containerd[1503]: time="2025-06-21T02:31:50.000152156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 21 02:31:50.005216 containerd[1503]: time="2025-06-21T02:31:50.005177274Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:50.017462 containerd[1503]: time="2025-06-21T02:31:50.017411007Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:31:50.018266 containerd[1503]: time="2025-06-21T02:31:50.018224093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 1.319798591s" Jun 21 02:31:50.018304 containerd[1503]: time="2025-06-21T02:31:50.018272094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\"" Jun 21 02:31:50.031099 containerd[1503]: time="2025-06-21T02:31:50.031044191Z" level=info msg="CreateContainer within sandbox \"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 21 02:31:50.037895 containerd[1503]: time="2025-06-21T02:31:50.037850362Z" level=info msg="Container 4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:31:50.049246 containerd[1503]: time="2025-06-21T02:31:50.049193649Z" level=info msg="CreateContainer within sandbox \"4529ddb6708f4550043ee1b564091d61773058bcfcb594068538a525a8ff3172\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23\"" Jun 21 02:31:50.050070 containerd[1503]: time="2025-06-21T02:31:50.050042215Z" level=info msg="StartContainer for \"4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23\"" Jun 21 02:31:50.051611 
containerd[1503]: time="2025-06-21T02:31:50.051581947Z" level=info msg="connecting to shim 4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23" address="unix:///run/containerd/s/0437429c670acebd3a3d7ab986183ae7d05edeaae8ce978e8487f1459e5b619e" protocol=ttrpc version=3 Jun 21 02:31:50.076822 systemd[1]: Started cri-containerd-4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23.scope - libcontainer container 4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23. Jun 21 02:31:50.111577 containerd[1503]: time="2025-06-21T02:31:50.111538722Z" level=info msg="StartContainer for \"4f42e68015ed1e09d66e18b4e687c4cce96edb2656adf077cc17f1568fc82e23\" returns successfully" Jun 21 02:31:50.231272 kubelet[2643]: I0621 02:31:50.231189 2643 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 21 02:31:50.239106 kubelet[2643]: I0621 02:31:50.239066 2643 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 21 02:31:50.412795 kubelet[2643]: I0621 02:31:50.412716 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vk5dt" podStartSLOduration=21.77118227 podStartE2EDuration="29.41153692s" podCreationTimestamp="2025-06-21 02:31:21 +0000 UTC" firstStartedPulling="2025-06-21 02:31:42.386026665 +0000 UTC m=+42.309495780" lastFinishedPulling="2025-06-21 02:31:50.026381275 +0000 UTC m=+49.949850430" observedRunningTime="2025-06-21 02:31:50.411384039 +0000 UTC m=+50.334853154" watchObservedRunningTime="2025-06-21 02:31:50.41153692 +0000 UTC m=+50.335006035" Jun 21 02:31:50.463758 containerd[1503]: time="2025-06-21T02:31:50.463712597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\" 
id:\"e34c28351e4ce8f6e820aa57eb39beab8c3ddb4956a8a4f8d5208609749b9948\" pid:5219 exit_status:1 exited_at:{seconds:1750473110 nanos:463116312}" Jun 21 02:31:50.724374 kubelet[2643]: I0621 02:31:50.724017 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:31:51.188347 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:55708.service - OpenSSH per-connection server daemon (10.0.0.1:55708). Jun 21 02:31:51.260773 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 55708 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:51.262492 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:51.267417 systemd-logind[1489]: New session 10 of user core. Jun 21 02:31:51.278887 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 02:31:51.480789 containerd[1503]: time="2025-06-21T02:31:51.480502181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"511344a0a7edeef3f7398c2ea31cdc0bdf8287f09427992240759852d465d5aa\" id:\"fc3c152a605704b1268886e4cf9f8c72d5b0d95844e60df23c7fed168035a852\" pid:5260 exit_status:1 exited_at:{seconds:1750473111 nanos:480066298}" Jun 21 02:31:51.504681 sshd[5236]: Connection closed by 10.0.0.1 port 55708 Jun 21 02:31:51.504480 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:51.515054 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:55708.service: Deactivated successfully. Jun 21 02:31:51.516593 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 02:31:51.517243 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Jun 21 02:31:51.519377 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:55720.service - OpenSSH per-connection server daemon (10.0.0.1:55720). Jun 21 02:31:51.521793 systemd-logind[1489]: Removed session 10. 
Jun 21 02:31:51.576887 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 55720 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:51.578305 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:51.583141 systemd-logind[1489]: New session 11 of user core. Jun 21 02:31:51.593813 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 02:31:51.783925 sshd[5278]: Connection closed by 10.0.0.1 port 55720 Jun 21 02:31:51.783244 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:51.801430 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:55720.service: Deactivated successfully. Jun 21 02:31:51.807285 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 02:31:51.812283 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Jun 21 02:31:51.820827 systemd-logind[1489]: Removed session 11. Jun 21 02:31:51.821793 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:55736.service - OpenSSH per-connection server daemon (10.0.0.1:55736). Jun 21 02:31:51.878285 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 55736 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:51.879781 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:51.884151 systemd-logind[1489]: New session 12 of user core. Jun 21 02:31:51.888779 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 02:31:52.088153 sshd[5292]: Connection closed by 10.0.0.1 port 55736 Jun 21 02:31:52.088503 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:52.091990 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:55736.service: Deactivated successfully. Jun 21 02:31:52.094302 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 02:31:52.095914 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. 
Jun 21 02:31:52.098396 systemd-logind[1489]: Removed session 12. Jun 21 02:31:57.112354 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:54608.service - OpenSSH per-connection server daemon (10.0.0.1:54608). Jun 21 02:31:57.159439 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 54608 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:57.160816 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:57.164692 systemd-logind[1489]: New session 13 of user core. Jun 21 02:31:57.173798 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 02:31:57.312058 sshd[5321]: Connection closed by 10.0.0.1 port 54608 Jun 21 02:31:57.312806 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:57.321593 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:54608.service: Deactivated successfully. Jun 21 02:31:57.323444 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 02:31:57.324722 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Jun 21 02:31:57.326816 systemd-logind[1489]: Removed session 13. Jun 21 02:31:57.328775 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Jun 21 02:31:57.387194 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:57.388924 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:57.393655 systemd-logind[1489]: New session 14 of user core. Jun 21 02:31:57.404811 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 21 02:31:57.641067 sshd[5336]: Connection closed by 10.0.0.1 port 54614 Jun 21 02:31:57.641521 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:57.652078 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:54614.service: Deactivated successfully. Jun 21 02:31:57.655268 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 02:31:57.657797 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Jun 21 02:31:57.659839 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:54620.service - OpenSSH per-connection server daemon (10.0.0.1:54620). Jun 21 02:31:57.660853 systemd-logind[1489]: Removed session 14. Jun 21 02:31:57.719679 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 54620 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:57.721031 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:57.726787 systemd-logind[1489]: New session 15 of user core. Jun 21 02:31:57.735797 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 02:31:58.433518 sshd[5349]: Connection closed by 10.0.0.1 port 54620 Jun 21 02:31:58.434162 sshd-session[5347]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:58.445281 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:54620.service: Deactivated successfully. Jun 21 02:31:58.448747 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 02:31:58.450503 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Jun 21 02:31:58.452917 systemd-logind[1489]: Removed session 15. Jun 21 02:31:58.457290 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:54628.service - OpenSSH per-connection server daemon (10.0.0.1:54628). 
Jun 21 02:31:58.519165 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 54628 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:58.520570 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:58.525139 systemd-logind[1489]: New session 16 of user core. Jun 21 02:31:58.535832 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 02:31:58.846361 sshd[5371]: Connection closed by 10.0.0.1 port 54628 Jun 21 02:31:58.846266 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:58.858665 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:54628.service: Deactivated successfully. Jun 21 02:31:58.861204 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 02:31:58.863439 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Jun 21 02:31:58.870589 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:54638.service - OpenSSH per-connection server daemon (10.0.0.1:54638). Jun 21 02:31:58.872656 systemd-logind[1489]: Removed session 16. Jun 21 02:31:58.933310 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 54638 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:31:58.934882 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:31:58.940502 systemd-logind[1489]: New session 17 of user core. Jun 21 02:31:58.947839 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 02:31:59.096744 sshd[5386]: Connection closed by 10.0.0.1 port 54638 Jun 21 02:31:59.096831 sshd-session[5383]: pam_unix(sshd:session): session closed for user core Jun 21 02:31:59.101503 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:54638.service: Deactivated successfully. Jun 21 02:31:59.103315 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 02:31:59.104310 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. 
Jun 21 02:31:59.105690 systemd-logind[1489]: Removed session 17. Jun 21 02:32:00.512937 kubelet[2643]: I0621 02:32:00.512719 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:32:04.111491 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:56206.service - OpenSSH per-connection server daemon (10.0.0.1:56206). Jun 21 02:32:04.168696 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 56206 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:32:04.169963 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:32:04.174341 systemd-logind[1489]: New session 18 of user core. Jun 21 02:32:04.184803 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 02:32:04.313770 sshd[5408]: Connection closed by 10.0.0.1 port 56206 Jun 21 02:32:04.314080 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Jun 21 02:32:04.317540 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:56206.service: Deactivated successfully. Jun 21 02:32:04.319241 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 02:32:04.320091 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Jun 21 02:32:04.321387 systemd-logind[1489]: Removed session 18. 
Jun 21 02:32:07.345445 containerd[1503]: time="2025-06-21T02:32:07.345409201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59cd454da6b27d42750c07321e5823a962ca99546b40755efbb09eaf317e807c\" id:\"838632bf5dbd9245469dee4df15f3f208aa1517b13f277a960e3792074d4b795\" pid:5437 exited_at:{seconds:1750473127 nanos:345142160}" Jun 21 02:32:08.153439 kubelet[2643]: E0621 02:32:08.153395 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:32:09.328346 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:56212.service - OpenSSH per-connection server daemon (10.0.0.1:56212). Jun 21 02:32:09.393406 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 56212 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:32:09.394849 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:32:09.399253 systemd-logind[1489]: New session 19 of user core. Jun 21 02:32:09.406778 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 02:32:09.543430 sshd[5454]: Connection closed by 10.0.0.1 port 56212 Jun 21 02:32:09.543837 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Jun 21 02:32:09.547680 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Jun 21 02:32:09.547884 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:56212.service: Deactivated successfully. Jun 21 02:32:09.549847 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 02:32:09.553204 systemd-logind[1489]: Removed session 19. Jun 21 02:32:14.555427 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:51552.service - OpenSSH per-connection server daemon (10.0.0.1:51552). 
Jun 21 02:32:14.629885 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 51552 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:32:14.631326 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:32:14.637964 systemd-logind[1489]: New session 20 of user core. Jun 21 02:32:14.649801 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 02:32:14.779532 sshd[5471]: Connection closed by 10.0.0.1 port 51552 Jun 21 02:32:14.779891 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Jun 21 02:32:14.783705 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:51552.service: Deactivated successfully. Jun 21 02:32:14.786332 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 02:32:14.787203 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Jun 21 02:32:14.788484 systemd-logind[1489]: Removed session 20.