Jun 21 02:17:19.779594 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 21 02:17:19.779616 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat Jun 21 00:00:47 -00 2025
Jun 21 02:17:19.779626 kernel: KASLR enabled
Jun 21 02:17:19.779633 kernel: efi: EFI v2.7 by EDK II
Jun 21 02:17:19.779639 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jun 21 02:17:19.779644 kernel: random: crng init done
Jun 21 02:17:19.779652 kernel: secureboot: Secure boot disabled
Jun 21 02:17:19.779658 kernel: ACPI: Early table checksum verification disabled
Jun 21 02:17:19.779665 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jun 21 02:17:19.779672 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jun 21 02:17:19.779679 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779685 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779691 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779697 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779705 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779712 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779719 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779725 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779731 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 02:17:19.779738 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jun 21 02:17:19.779744 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jun 21 02:17:19.779751 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:17:19.779757 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jun 21 02:17:19.779764 kernel: Zone ranges:
Jun 21 02:17:19.779779 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:17:19.779788 kernel: DMA32 empty
Jun 21 02:17:19.779794 kernel: Normal empty
Jun 21 02:17:19.779800 kernel: Device empty
Jun 21 02:17:19.779806 kernel: Movable zone start for each node
Jun 21 02:17:19.779813 kernel: Early memory node ranges
Jun 21 02:17:19.779819 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jun 21 02:17:19.779825 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jun 21 02:17:19.779832 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jun 21 02:17:19.779838 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jun 21 02:17:19.779844 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jun 21 02:17:19.779850 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jun 21 02:17:19.779857 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jun 21 02:17:19.779864 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jun 21 02:17:19.779871 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jun 21 02:17:19.779877 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jun 21 02:17:19.779886 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jun 21 02:17:19.779893 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jun 21 02:17:19.779899 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jun 21 02:17:19.779908 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jun 21 02:17:19.779914 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jun 21 02:17:19.779921 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jun 21 02:17:19.779928 kernel: psci: probing for conduit method from ACPI.
Jun 21 02:17:19.779934 kernel: psci: PSCIv1.1 detected in firmware.
Jun 21 02:17:19.779941 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 21 02:17:19.779947 kernel: psci: Trusted OS migration not required
Jun 21 02:17:19.779954 kernel: psci: SMC Calling Convention v1.1
Jun 21 02:17:19.779961 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 21 02:17:19.779968 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jun 21 02:17:19.779976 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jun 21 02:17:19.779983 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jun 21 02:17:19.779989 kernel: Detected PIPT I-cache on CPU0
Jun 21 02:17:19.779996 kernel: CPU features: detected: GIC system register CPU interface
Jun 21 02:17:19.780003 kernel: CPU features: detected: Spectre-v4
Jun 21 02:17:19.780009 kernel: CPU features: detected: Spectre-BHB
Jun 21 02:17:19.780016 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 21 02:17:19.780023 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 21 02:17:19.780029 kernel: CPU features: detected: ARM erratum 1418040
Jun 21 02:17:19.780036 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jun 21 02:17:19.780043 kernel: alternatives: applying boot alternatives
Jun 21 02:17:19.780051 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db
Jun 21 02:17:19.780059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 02:17:19.780066 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 02:17:19.780073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 21 02:17:19.780079 kernel: Fallback order for Node 0: 0
Jun 21 02:17:19.780086 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jun 21 02:17:19.780093 kernel: Policy zone: DMA
Jun 21 02:17:19.780099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 02:17:19.780106 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jun 21 02:17:19.780113 kernel: software IO TLB: area num 4.
Jun 21 02:17:19.780119 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jun 21 02:17:19.780126 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jun 21 02:17:19.780134 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 21 02:17:19.780141 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 02:17:19.780148 kernel: rcu: RCU event tracing is enabled.
Jun 21 02:17:19.780155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 21 02:17:19.780162 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 02:17:19.780169 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 02:17:19.780176 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 02:17:19.780182 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 21 02:17:19.780189 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 02:17:19.780196 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 02:17:19.780217 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 21 02:17:19.780228 kernel: GICv3: 256 SPIs implemented
Jun 21 02:17:19.780235 kernel: GICv3: 0 Extended SPIs implemented
Jun 21 02:17:19.780242 kernel: Root IRQ handler: gic_handle_irq
Jun 21 02:17:19.780248 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 21 02:17:19.780255 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jun 21 02:17:19.780262 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 21 02:17:19.780269 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 21 02:17:19.780275 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jun 21 02:17:19.780282 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jun 21 02:17:19.780289 kernel: GICv3: using LPI property table @0x00000000400f0000
Jun 21 02:17:19.780296 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
Jun 21 02:17:19.780302 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 02:17:19.780310 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:17:19.780317 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 21 02:17:19.780324 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 21 02:17:19.780331 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 21 02:17:19.780338 kernel: arm-pv: using stolen time PV
Jun 21 02:17:19.780345 kernel: Console: colour dummy device 80x25
Jun 21 02:17:19.780352 kernel: ACPI: Core revision 20240827
Jun 21 02:17:19.780360 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 21 02:17:19.780367 kernel: pid_max: default: 32768 minimum: 301
Jun 21 02:17:19.780374 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 02:17:19.780382 kernel: landlock: Up and running.
Jun 21 02:17:19.780389 kernel: SELinux: Initializing.
Jun 21 02:17:19.780396 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 02:17:19.780403 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 02:17:19.780410 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 02:17:19.780417 kernel: rcu: Max phase no-delay instances is 400.
Jun 21 02:17:19.780424 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 02:17:19.780431 kernel: Remapping and enabling EFI services.
Jun 21 02:17:19.780438 kernel: smp: Bringing up secondary CPUs ...
Jun 21 02:17:19.780450 kernel: Detected PIPT I-cache on CPU1
Jun 21 02:17:19.780458 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 21 02:17:19.780465 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
Jun 21 02:17:19.780474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:17:19.780481 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 21 02:17:19.780493 kernel: Detected PIPT I-cache on CPU2
Jun 21 02:17:19.780502 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jun 21 02:17:19.780509 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
Jun 21 02:17:19.780519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:17:19.780526 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jun 21 02:17:19.780533 kernel: Detected PIPT I-cache on CPU3
Jun 21 02:17:19.780541 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jun 21 02:17:19.780548 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
Jun 21 02:17:19.780555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 21 02:17:19.780562 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jun 21 02:17:19.780569 kernel: smp: Brought up 1 node, 4 CPUs
Jun 21 02:17:19.780577 kernel: SMP: Total of 4 processors activated.
Jun 21 02:17:19.780585 kernel: CPU: All CPU(s) started at EL1
Jun 21 02:17:19.780593 kernel: CPU features: detected: 32-bit EL0 Support
Jun 21 02:17:19.780600 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 21 02:17:19.780607 kernel: CPU features: detected: Common not Private translations
Jun 21 02:17:19.780615 kernel: CPU features: detected: CRC32 instructions
Jun 21 02:17:19.780622 kernel: CPU features: detected: Enhanced Virtualization Traps
Jun 21 02:17:19.780629 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 21 02:17:19.780637 kernel: CPU features: detected: LSE atomic instructions
Jun 21 02:17:19.780644 kernel: CPU features: detected: Privileged Access Never
Jun 21 02:17:19.780652 kernel: CPU features: detected: RAS Extension Support
Jun 21 02:17:19.780660 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 21 02:17:19.780667 kernel: alternatives: applying system-wide alternatives
Jun 21 02:17:19.780674 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jun 21 02:17:19.780682 kernel: Memory: 2424408K/2572288K available (11136K kernel code, 2284K rwdata, 8980K rodata, 39488K init, 1037K bss, 125728K reserved, 16384K cma-reserved)
Jun 21 02:17:19.780689 kernel: devtmpfs: initialized
Jun 21 02:17:19.780696 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 02:17:19.780704 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 21 02:17:19.780711 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 21 02:17:19.780719 kernel: 0 pages in range for non-PLT usage
Jun 21 02:17:19.780727 kernel: 508496 pages in range for PLT usage
Jun 21 02:17:19.780734 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 02:17:19.780741 kernel: SMBIOS 3.0.0 present.
Jun 21 02:17:19.780748 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jun 21 02:17:19.780756 kernel: DMI: Memory slots populated: 1/1
Jun 21 02:17:19.780763 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 02:17:19.780775 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 21 02:17:19.780783 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 21 02:17:19.780792 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 21 02:17:19.780800 kernel: audit: initializing netlink subsys (disabled)
Jun 21 02:17:19.780807 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jun 21 02:17:19.780814 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 02:17:19.780822 kernel: cpuidle: using governor menu
Jun 21 02:17:19.780829 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 21 02:17:19.780836 kernel: ASID allocator initialised with 32768 entries
Jun 21 02:17:19.780843 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 02:17:19.780850 kernel: Serial: AMBA PL011 UART driver
Jun 21 02:17:19.780859 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 21 02:17:19.780866 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 21 02:17:19.780873 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 21 02:17:19.780881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 21 02:17:19.780888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 02:17:19.780896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 02:17:19.780903 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 21 02:17:19.780910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 21 02:17:19.780917 kernel: ACPI: Added _OSI(Module Device)
Jun 21 02:17:19.780926 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 02:17:19.780933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 02:17:19.780940 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 02:17:19.780947 kernel: ACPI: Interpreter enabled
Jun 21 02:17:19.780955 kernel: ACPI: Using GIC for interrupt routing
Jun 21 02:17:19.780962 kernel: ACPI: MCFG table detected, 1 entries
Jun 21 02:17:19.780969 kernel: ACPI: CPU0 has been hot-added
Jun 21 02:17:19.780976 kernel: ACPI: CPU1 has been hot-added
Jun 21 02:17:19.780983 kernel: ACPI: CPU2 has been hot-added
Jun 21 02:17:19.780990 kernel: ACPI: CPU3 has been hot-added
Jun 21 02:17:19.780999 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 21 02:17:19.781006 kernel: printk: legacy console [ttyAMA0] enabled
Jun 21 02:17:19.781014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 21 02:17:19.781148 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 21 02:17:19.781226 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 21 02:17:19.781290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 21 02:17:19.781348 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 21 02:17:19.781409 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 21 02:17:19.781419 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 21 02:17:19.781427 kernel: PCI host bridge to bus 0000:00
Jun 21 02:17:19.781500 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 21 02:17:19.781555 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 21 02:17:19.781607 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 21 02:17:19.781658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 21 02:17:19.781802 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jun 21 02:17:19.781878 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 21 02:17:19.781943 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jun 21 02:17:19.782002 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jun 21 02:17:19.782062 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 21 02:17:19.782121 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jun 21 02:17:19.782180 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jun 21 02:17:19.782263 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jun 21 02:17:19.782319 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 21 02:17:19.782371 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 21 02:17:19.782423 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 21 02:17:19.782432 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 21 02:17:19.782440 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 21 02:17:19.782447 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 21 02:17:19.782457 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 21 02:17:19.782464 kernel: iommu: Default domain type: Translated
Jun 21 02:17:19.782472 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 21 02:17:19.782479 kernel: efivars: Registered efivars operations
Jun 21 02:17:19.782486 kernel: vgaarb: loaded
Jun 21 02:17:19.782494 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 21 02:17:19.782501 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 02:17:19.782508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 02:17:19.782516 kernel: pnp: PnP ACPI init
Jun 21 02:17:19.782583 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 21 02:17:19.782593 kernel: pnp: PnP ACPI: found 1 devices
Jun 21 02:17:19.782601 kernel: NET: Registered PF_INET protocol family
Jun 21 02:17:19.782608 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 02:17:19.782616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 21 02:17:19.782623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 02:17:19.782631 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 21 02:17:19.782638 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 21 02:17:19.782648 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 21 02:17:19.782655 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 02:17:19.782662 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 02:17:19.782670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 02:17:19.782677 kernel: PCI: CLS 0 bytes, default 64
Jun 21 02:17:19.782684 kernel: kvm [1]: HYP mode not available
Jun 21 02:17:19.782692 kernel: Initialise system trusted keyrings
Jun 21 02:17:19.782699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 21 02:17:19.782706 kernel: Key type asymmetric registered
Jun 21 02:17:19.782715 kernel: Asymmetric key parser 'x509' registered
Jun 21 02:17:19.782722 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jun 21 02:17:19.782730 kernel: io scheduler mq-deadline registered
Jun 21 02:17:19.782737 kernel: io scheduler kyber registered
Jun 21 02:17:19.782744 kernel: io scheduler bfq registered
Jun 21 02:17:19.782751 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 21 02:17:19.782759 kernel: ACPI: button: Power Button [PWRB]
Jun 21 02:17:19.782772 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 21 02:17:19.782836 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 21 02:17:19.782848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 02:17:19.782855 kernel: thunder_xcv, ver 1.0
Jun 21 02:17:19.782862 kernel: thunder_bgx, ver 1.0
Jun 21 02:17:19.782870 kernel: nicpf, ver 1.0
Jun 21 02:17:19.782877 kernel: nicvf, ver 1.0
Jun 21 02:17:19.782943 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 21 02:17:19.782999 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-21T02:17:19 UTC (1750472239)
Jun 21 02:17:19.783008 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 21 02:17:19.783017 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jun 21 02:17:19.783025 kernel: watchdog: NMI not fully supported
Jun 21 02:17:19.783032 kernel: watchdog: Hard watchdog permanently disabled
Jun 21 02:17:19.783040 kernel: NET: Registered PF_INET6 protocol family
Jun 21 02:17:19.783047 kernel: Segment Routing with IPv6
Jun 21 02:17:19.783054 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 02:17:19.783062 kernel: NET: Registered PF_PACKET protocol family
Jun 21 02:17:19.783069 kernel: Key type dns_resolver registered
Jun 21 02:17:19.783076 kernel: registered taskstats version 1
Jun 21 02:17:19.783083 kernel: Loading compiled-in X.509 certificates
Jun 21 02:17:19.783092 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 0d4b619b81572779adc2f9dd5f1325c23c2a41ec'
Jun 21 02:17:19.783099 kernel: Demotion targets for Node 0: null
Jun 21 02:17:19.783106 kernel: Key type .fscrypt registered
Jun 21 02:17:19.783113 kernel: Key type fscrypt-provisioning registered
Jun 21 02:17:19.783121 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 02:17:19.783128 kernel: ima: Allocated hash algorithm: sha1
Jun 21 02:17:19.783135 kernel: ima: No architecture policies found
Jun 21 02:17:19.783143 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 21 02:17:19.783151 kernel: clk: Disabling unused clocks
Jun 21 02:17:19.783158 kernel: PM: genpd: Disabling unused power domains
Jun 21 02:17:19.783166 kernel: Warning: unable to open an initial console.
Jun 21 02:17:19.783173 kernel: Freeing unused kernel memory: 39488K
Jun 21 02:17:19.783180 kernel: Run /init as init process
Jun 21 02:17:19.783188 kernel: with arguments:
Jun 21 02:17:19.783195 kernel: /init
Jun 21 02:17:19.783202 kernel: with environment:
Jun 21 02:17:19.783269 kernel: HOME=/
Jun 21 02:17:19.783279 kernel: TERM=linux
Jun 21 02:17:19.783287 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 02:17:19.783295 systemd[1]: Successfully made /usr/ read-only.
Jun 21 02:17:19.783306 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 02:17:19.783314 systemd[1]: Detected virtualization kvm.
Jun 21 02:17:19.783322 systemd[1]: Detected architecture arm64.
Jun 21 02:17:19.783330 systemd[1]: Running in initrd.
Jun 21 02:17:19.783337 systemd[1]: No hostname configured, using default hostname.
Jun 21 02:17:19.783347 systemd[1]: Hostname set to .
Jun 21 02:17:19.783355 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 02:17:19.783363 systemd[1]: Queued start job for default target initrd.target.
Jun 21 02:17:19.783371 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 02:17:19.783378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 02:17:19.783387 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 02:17:19.783395 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 02:17:19.783403 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 21 02:17:19.783413 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 21 02:17:19.783422 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 21 02:17:19.783430 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 21 02:17:19.783438 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 02:17:19.783446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 02:17:19.783453 systemd[1]: Reached target paths.target - Path Units.
Jun 21 02:17:19.783462 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 02:17:19.783470 systemd[1]: Reached target swap.target - Swaps.
Jun 21 02:17:19.783478 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 02:17:19.783486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 02:17:19.783494 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 02:17:19.783502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 21 02:17:19.783510 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 21 02:17:19.783517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 02:17:19.783525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 02:17:19.783535 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 02:17:19.783543 systemd[1]: Reached target sockets.target - Socket Units.
Jun 21 02:17:19.783551 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 21 02:17:19.783559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 02:17:19.783567 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 21 02:17:19.783575 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 21 02:17:19.783583 systemd[1]: Starting systemd-fsck-usr.service...
Jun 21 02:17:19.783591 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 02:17:19.783599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 02:17:19.783607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 02:17:19.783615 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 21 02:17:19.783624 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 02:17:19.783631 systemd[1]: Finished systemd-fsck-usr.service.
Jun 21 02:17:19.783641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 02:17:19.783693 systemd-journald[242]: Collecting audit messages is disabled.
Jun 21 02:17:19.783714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 02:17:19.783724 systemd-journald[242]: Journal started
Jun 21 02:17:19.783745 systemd-journald[242]: Runtime Journal (/run/log/journal/ceff544ea4514b23b7a5db0089bb5c43) is 6M, max 48.5M, 42.4M free.
Jun 21 02:17:19.772672 systemd-modules-load[244]: Inserted module 'overlay'
Jun 21 02:17:19.789231 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 02:17:19.790388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 02:17:19.793029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 21 02:17:19.795908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 21 02:17:19.795573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 02:17:19.797970 systemd-modules-load[244]: Inserted module 'br_netfilter'
Jun 21 02:17:19.799604 kernel: Bridge firewalling registered
Jun 21 02:17:19.802053 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 21 02:17:19.803164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 02:17:19.808331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 02:17:19.810910 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 21 02:17:19.821455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 02:17:19.823576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 02:17:19.824594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 02:17:19.827365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 21 02:17:19.828299 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 02:17:19.836482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 21 02:17:19.847369 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db
Jun 21 02:17:19.865775 systemd-resolved[291]: Positive Trust Anchors:
Jun 21 02:17:19.865793 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 21 02:17:19.865824 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 21 02:17:19.870713 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jun 21 02:17:19.871667 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 21 02:17:19.872825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 21 02:17:19.928233 kernel: SCSI subsystem initialized
Jun 21 02:17:19.932232 kernel: Loading iSCSI transport class v2.0-870.
Jun 21 02:17:19.941221 kernel: iscsi: registered transport (tcp)
Jun 21 02:17:19.954226 kernel: iscsi: registered transport (qla4xxx)
Jun 21 02:17:19.954247 kernel: QLogic iSCSI HBA Driver
Jun 21 02:17:19.969850 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 02:17:19.983895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 02:17:19.985345 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 02:17:20.033882 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 21 02:17:20.036125 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 21 02:17:20.108235 kernel: raid6: neonx8 gen() 15793 MB/s Jun 21 02:17:20.125223 kernel: raid6: neonx4 gen() 15814 MB/s Jun 21 02:17:20.142230 kernel: raid6: neonx2 gen() 13224 MB/s Jun 21 02:17:20.159225 kernel: raid6: neonx1 gen() 10428 MB/s Jun 21 02:17:20.176234 kernel: raid6: int64x8 gen() 6900 MB/s Jun 21 02:17:20.193220 kernel: raid6: int64x4 gen() 7344 MB/s Jun 21 02:17:20.210223 kernel: raid6: int64x2 gen() 6098 MB/s Jun 21 02:17:20.227224 kernel: raid6: int64x1 gen() 5059 MB/s Jun 21 02:17:20.227238 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s Jun 21 02:17:20.244241 kernel: raid6: .... xor() 12326 MB/s, rmw enabled Jun 21 02:17:20.244265 kernel: raid6: using neon recovery algorithm Jun 21 02:17:20.249236 kernel: xor: measuring software checksum speed Jun 21 02:17:20.249273 kernel: 8regs : 20839 MB/sec Jun 21 02:17:20.250258 kernel: 32regs : 19533 MB/sec Jun 21 02:17:20.250271 kernel: arm64_neon : 27268 MB/sec Jun 21 02:17:20.250281 kernel: xor: using function: arm64_neon (27268 MB/sec) Jun 21 02:17:20.302242 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 02:17:20.310246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 02:17:20.314459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:17:20.350002 systemd-udevd[503]: Using default interface naming scheme 'v255'. Jun 21 02:17:20.354071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:17:20.355708 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 02:17:20.379626 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Jun 21 02:17:20.403148 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 02:17:20.405227 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:17:20.464832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jun 21 02:17:20.467397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 02:17:20.520233 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 21 02:17:20.522222 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 21 02:17:20.525475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 02:17:20.525511 kernel: GPT:9289727 != 19775487 Jun 21 02:17:20.525522 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 02:17:20.528470 kernel: GPT:9289727 != 19775487 Jun 21 02:17:20.528500 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 02:17:20.529255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:17:20.536124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:17:20.536287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:17:20.539145 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:17:20.541075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:17:20.560964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 02:17:20.568103 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 02:17:20.570051 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:17:20.571125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 02:17:20.587556 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 02:17:20.588573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 02:17:20.597406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 21 02:17:20.598359 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:17:20.599869 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:17:20.601459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:17:20.603768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 02:17:20.605414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 02:17:20.632274 disk-uuid[596]: Primary Header is updated. Jun 21 02:17:20.632274 disk-uuid[596]: Secondary Entries is updated. Jun 21 02:17:20.632274 disk-uuid[596]: Secondary Header is updated. Jun 21 02:17:20.636451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:17:20.636739 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:17:21.648243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:17:21.649676 disk-uuid[599]: The operation has completed successfully. Jun 21 02:17:21.673173 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 02:17:21.673295 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 02:17:21.699660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 02:17:21.718054 sh[616]: Success Jun 21 02:17:21.730751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 02:17:21.732258 kernel: device-mapper: uevent: version 1.0.3 Jun 21 02:17:21.732287 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 02:17:21.740232 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 21 02:17:21.764709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 02:17:21.767149 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 21 02:17:21.776956 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 02:17:21.782549 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 02:17:21.782585 kernel: BTRFS: device fsid 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 devid 1 transid 51 /dev/mapper/usr (253:0) scanned by mount (628) Jun 21 02:17:21.784659 kernel: BTRFS info (device dm-0): first mount of filesystem 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 Jun 21 02:17:21.784688 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:17:21.784699 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 02:17:21.789858 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 02:17:21.790885 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:17:21.791865 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 02:17:21.792573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 02:17:21.794971 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 02:17:21.816281 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (657) Jun 21 02:17:21.818660 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:17:21.818691 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:17:21.818702 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:17:21.824237 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:17:21.824393 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 02:17:21.826774 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 21 02:17:21.894476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:17:21.898768 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 02:17:21.940525 systemd-networkd[803]: lo: Link UP Jun 21 02:17:21.940539 systemd-networkd[803]: lo: Gained carrier Jun 21 02:17:21.941302 systemd-networkd[803]: Enumeration completed Jun 21 02:17:21.941574 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:17:21.941823 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:17:21.941827 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:17:21.942721 systemd-networkd[803]: eth0: Link UP Jun 21 02:17:21.942724 systemd-networkd[803]: eth0: Gained carrier Jun 21 02:17:21.942733 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:17:21.943303 systemd[1]: Reached target network.target - Network. 
Jun 21 02:17:21.962265 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:17:21.965681 ignition[705]: Ignition 2.21.0 Jun 21 02:17:21.965694 ignition[705]: Stage: fetch-offline Jun 21 02:17:21.965726 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:21.965734 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:21.965921 ignition[705]: parsed url from cmdline: "" Jun 21 02:17:21.965924 ignition[705]: no config URL provided Jun 21 02:17:21.965929 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 02:17:21.965935 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jun 21 02:17:21.965953 ignition[705]: op(1): [started] loading QEMU firmware config module Jun 21 02:17:21.965956 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 21 02:17:21.980184 ignition[705]: op(1): [finished] loading QEMU firmware config module Jun 21 02:17:22.020442 ignition[705]: parsing config with SHA512: 661b9ece1bb2cf1ebc2ab77276fe029ba37f4dbd0c1cef81bc43e2aed0b9af20796ed8149321af2d08f7589b542f47a3f62bb18ab55548735b6b023f9cb22931 Jun 21 02:17:22.025186 unknown[705]: fetched base config from "system" Jun 21 02:17:22.025202 unknown[705]: fetched user config from "qemu" Jun 21 02:17:22.025851 ignition[705]: fetch-offline: fetch-offline passed Jun 21 02:17:22.025448 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.75 Jun 21 02:17:22.025913 ignition[705]: Ignition finished successfully Jun 21 02:17:22.025456 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jun 21 02:17:22.027808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:17:22.029404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jun 21 02:17:22.030133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 02:17:22.064478 ignition[816]: Ignition 2.21.0 Jun 21 02:17:22.064495 ignition[816]: Stage: kargs Jun 21 02:17:22.064976 ignition[816]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:22.064987 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:22.066126 ignition[816]: kargs: kargs passed Jun 21 02:17:22.066419 ignition[816]: Ignition finished successfully Jun 21 02:17:22.068749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 02:17:22.070515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 02:17:22.100502 ignition[824]: Ignition 2.21.0 Jun 21 02:17:22.100521 ignition[824]: Stage: disks Jun 21 02:17:22.100677 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:22.100686 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:22.102591 ignition[824]: disks: disks passed Jun 21 02:17:22.102650 ignition[824]: Ignition finished successfully Jun 21 02:17:22.104270 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 02:17:22.105399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 02:17:22.106586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 02:17:22.108114 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:17:22.109677 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:17:22.110995 systemd[1]: Reached target basic.target - Basic System. Jun 21 02:17:22.113104 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 02:17:22.137136 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 02:17:22.141911 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jun 21 02:17:22.144055 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 02:17:22.211173 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 02:17:22.212412 kernel: EXT4-fs (vda9): mounted filesystem 9ad072e4-7680-4e5b-adc0-72c770c20c86 r/w with ordered data mode. Quota mode: none. Jun 21 02:17:22.212265 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 02:17:22.214270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 02:17:22.215681 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 02:17:22.216487 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 02:17:22.216524 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 02:17:22.216545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:17:22.228405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 02:17:22.230560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 02:17:22.233225 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (842) Jun 21 02:17:22.235349 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:17:22.235387 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:17:22.235399 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:17:22.238930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 02:17:22.275637 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 02:17:22.278638 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Jun 21 02:17:22.281938 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 02:17:22.284651 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 02:17:22.351275 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 02:17:22.352916 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 02:17:22.354199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 02:17:22.376393 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:17:22.390454 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 02:17:22.393121 ignition[956]: INFO : Ignition 2.21.0 Jun 21 02:17:22.393121 ignition[956]: INFO : Stage: mount Jun 21 02:17:22.394275 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:22.394275 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:22.394275 ignition[956]: INFO : mount: mount passed Jun 21 02:17:22.394275 ignition[956]: INFO : Ignition finished successfully Jun 21 02:17:22.395574 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 02:17:22.397730 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 02:17:22.781985 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 02:17:22.783562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 21 02:17:22.807841 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (968) Jun 21 02:17:22.807877 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:17:22.807888 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:17:22.808543 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:17:22.811335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 02:17:22.846321 ignition[985]: INFO : Ignition 2.21.0 Jun 21 02:17:22.846321 ignition[985]: INFO : Stage: files Jun 21 02:17:22.847572 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:22.847572 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:22.847572 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Jun 21 02:17:22.850036 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 02:17:22.850036 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 02:17:22.850036 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 02:17:22.850036 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 02:17:22.854154 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 02:17:22.854154 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 21 02:17:22.854154 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jun 21 02:17:22.850341 unknown[985]: wrote ssh authorized keys file for user: core Jun 21 02:17:22.893712 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Jun 21 02:17:23.093673 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 21 02:17:23.093673 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:17:23.097065 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 21 02:17:23.110945 ignition[985]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 21 02:17:23.110945 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 21 02:17:23.110945 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jun 21 02:17:23.575684 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 02:17:23.776581 systemd-networkd[803]: eth0: Gained IPv6LL Jun 21 02:17:30.784306 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 21 02:17:30.784306 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 02:17:30.787140 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jun 21 02:17:30.788484 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 21 02:17:30.804273 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:17:30.807969 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:17:30.809096 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 21 02:17:30.809096 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 21 02:17:30.809096 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 02:17:30.809096 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:17:30.809096 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:17:30.809096 ignition[985]: INFO : files: files passed Jun 21 02:17:30.809096 ignition[985]: INFO : Ignition finished successfully Jun 21 02:17:30.810713 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 02:17:30.814339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 02:17:30.823485 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 02:17:30.826077 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 02:17:30.826186 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 21 02:17:30.831110 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Jun 21 02:17:30.833803 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:17:30.833803 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:17:30.836029 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:17:30.835844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:17:30.837037 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 02:17:30.839353 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 02:17:30.868058 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 02:17:30.868183 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 02:17:30.869791 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 02:17:30.870507 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 02:17:30.871219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 02:17:30.871923 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 02:17:30.885679 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:17:30.887750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 02:17:30.905904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:17:30.906874 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:17:30.908312 systemd[1]: Stopped target timers.target - Timer Units. 
Jun 21 02:17:30.909576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 02:17:30.909692 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:17:30.911531 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 02:17:30.912964 systemd[1]: Stopped target basic.target - Basic System. Jun 21 02:17:30.914134 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 02:17:30.915335 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:17:30.916683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 02:17:30.918024 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:17:30.919380 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 02:17:30.920664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:17:30.922052 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 02:17:30.923476 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 02:17:30.924683 systemd[1]: Stopped target swap.target - Swaps. Jun 21 02:17:30.925758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 02:17:30.925875 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:17:30.927534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:17:30.928909 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:17:30.930260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 02:17:30.931303 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:17:30.932439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 02:17:30.932550 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jun 21 02:17:30.934561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 02:17:30.934672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:17:30.936044 systemd[1]: Stopped target paths.target - Path Units. Jun 21 02:17:30.937112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 02:17:30.940265 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:17:30.941176 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 02:17:30.942691 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 02:17:30.943810 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 02:17:30.943889 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 02:17:30.944947 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 02:17:30.945024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 02:17:30.946097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 02:17:30.946222 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:17:30.947442 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 02:17:30.947542 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 02:17:30.949344 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 02:17:30.951121 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 02:17:30.951805 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 02:17:30.951924 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:17:30.953188 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 02:17:30.953303 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 21 02:17:30.957647 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 02:17:30.962365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 02:17:30.970183 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 02:17:30.973735 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 02:17:30.973861 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 02:17:30.975546 ignition[1040]: INFO : Ignition 2.21.0 Jun 21 02:17:30.975546 ignition[1040]: INFO : Stage: umount Jun 21 02:17:30.975546 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:17:30.975546 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:17:30.978156 ignition[1040]: INFO : umount: umount passed Jun 21 02:17:30.978156 ignition[1040]: INFO : Ignition finished successfully Jun 21 02:17:30.978850 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 02:17:30.978936 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 02:17:30.980128 systemd[1]: Stopped target network.target - Network. Jun 21 02:17:30.981119 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 02:17:30.981164 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 02:17:30.982385 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 02:17:30.982422 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 02:17:30.983498 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 02:17:30.983540 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 02:17:30.984681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 02:17:30.984713 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 02:17:30.985923 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jun 21 02:17:30.985962 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 02:17:30.987264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 02:17:30.988477 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 02:17:30.995254 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 02:17:30.995357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 02:17:30.998824 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 02:17:30.999031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 02:17:30.999064 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:17:31.002004 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 02:17:31.004802 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 02:17:31.005541 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 02:17:31.007646 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 02:17:31.007801 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 02:17:31.009179 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 02:17:31.009254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:17:31.011192 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 02:17:31.012447 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 02:17:31.012493 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:17:31.013873 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 02:17:31.013910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jun 21 02:17:31.015992 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 02:17:31.016030 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 02:17:31.017329 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:17:31.019338 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 02:17:31.026745 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 02:17:31.026910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:17:31.029415 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 02:17:31.029518 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 02:17:31.031170 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 02:17:31.031249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 02:17:31.032010 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 02:17:31.032039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:17:31.033286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 02:17:31.033328 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 02:17:31.035178 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 02:17:31.035279 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 02:17:31.037075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 02:17:31.037126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 02:17:31.039786 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 02:17:31.041057 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jun 21 02:17:31.041108 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:17:31.043475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 02:17:31.043521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:17:31.045892 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 21 02:17:31.045932 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:17:31.047503 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 02:17:31.047540 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 02:17:31.049107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:17:31.049146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:17:31.052092 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 02:17:31.052167 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 02:17:31.054200 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 02:17:31.055858 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 02:17:31.070455 systemd[1]: Switching root. Jun 21 02:17:31.108047 systemd-journald[242]: Journal stopped Jun 21 02:17:31.817912 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). 
Jun 21 02:17:31.819716 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 02:17:31.819733 kernel: SELinux: policy capability open_perms=1 Jun 21 02:17:31.819744 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 02:17:31.819754 kernel: SELinux: policy capability always_check_network=0 Jun 21 02:17:31.819767 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 02:17:31.819780 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 02:17:31.819790 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 02:17:31.819799 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 02:17:31.819809 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 02:17:31.819818 kernel: audit: type=1403 audit(1750472251.237:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 02:17:31.819831 systemd[1]: Successfully loaded SELinux policy in 38.451ms. Jun 21 02:17:31.819848 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.333ms. Jun 21 02:17:31.819860 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 02:17:31.819870 systemd[1]: Detected virtualization kvm. Jun 21 02:17:31.819880 systemd[1]: Detected architecture arm64. Jun 21 02:17:31.819892 systemd[1]: Detected first boot. Jun 21 02:17:31.819902 systemd[1]: Initializing machine ID from VM UUID. Jun 21 02:17:31.819912 kernel: NET: Registered PF_VSOCK protocol family Jun 21 02:17:31.819921 zram_generator::config[1085]: No configuration found. Jun 21 02:17:31.819932 systemd[1]: Populated /etc with preset unit settings. Jun 21 02:17:31.819942 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jun 21 02:17:31.819952 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 02:17:31.819962 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 02:17:31.819973 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 02:17:31.819983 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 02:17:31.819992 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 02:17:31.820002 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 02:17:31.820011 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 02:17:31.820021 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 02:17:31.820031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 02:17:31.820042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 02:17:31.820052 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 02:17:31.820064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:17:31.820074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:17:31.820084 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 02:17:31.820094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 02:17:31.820104 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 02:17:31.820114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 02:17:31.820124 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jun 21 02:17:31.820134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:17:31.820146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:17:31.820156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 02:17:31.820166 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 02:17:31.820176 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 02:17:31.820186 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 02:17:31.820196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:17:31.820215 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:17:31.820228 systemd[1]: Reached target slices.target - Slice Units. Jun 21 02:17:31.820238 systemd[1]: Reached target swap.target - Swaps. Jun 21 02:17:31.820250 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 02:17:31.820260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 02:17:31.820269 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 02:17:31.820279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:17:31.820289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 02:17:31.820299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:17:31.820309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 02:17:31.820319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 02:17:31.820328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 02:17:31.820339 systemd[1]: Mounting media.mount - External Media Directory... 
Jun 21 02:17:31.820349 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 02:17:31.820359 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 02:17:31.820369 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 02:17:31.820379 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 02:17:31.820389 systemd[1]: Reached target machines.target - Containers. Jun 21 02:17:31.820399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 02:17:31.820409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:17:31.820420 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 02:17:31.820430 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 02:17:31.820440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:17:31.820450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:17:31.820459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:17:31.820469 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 02:17:31.820482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:17:31.820493 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 02:17:31.820504 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 02:17:31.820514 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 02:17:31.820525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jun 21 02:17:31.820534 kernel: fuse: init (API version 7.41) Jun 21 02:17:31.820544 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 02:17:31.820556 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:17:31.820566 kernel: loop: module loaded Jun 21 02:17:31.820575 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 02:17:31.820585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 02:17:31.820596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 02:17:31.820606 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 02:17:31.820616 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 02:17:31.820626 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:17:31.820636 kernel: ACPI: bus type drm_connector registered Jun 21 02:17:31.820647 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 02:17:31.820657 systemd[1]: Stopped verity-setup.service. Jun 21 02:17:31.820667 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 02:17:31.820677 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 02:17:31.820686 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 02:17:31.820696 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 02:17:31.820706 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 02:17:31.820721 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 02:17:31.820734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jun 21 02:17:31.820746 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 02:17:31.820756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 02:17:31.820766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 02:17:31.820801 systemd-journald[1153]: Collecting audit messages is disabled. Jun 21 02:17:31.820824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:17:31.820834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:17:31.820844 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 02:17:31.820853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:17:31.820863 systemd-journald[1153]: Journal started Jun 21 02:17:31.820883 systemd-journald[1153]: Runtime Journal (/run/log/journal/ceff544ea4514b23b7a5db0089bb5c43) is 6M, max 48.5M, 42.4M free. Jun 21 02:17:31.621991 systemd[1]: Queued start job for default target multi-user.target. Jun 21 02:17:31.822847 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 02:17:31.635000 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 02:17:31.635375 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 02:17:31.824307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:17:31.824470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:17:31.825636 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 02:17:31.825809 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 02:17:31.826849 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:17:31.826995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:17:31.828068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jun 21 02:17:31.829229 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:17:31.830510 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 02:17:31.831667 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 02:17:31.844042 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 02:17:31.846263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 02:17:31.847947 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 02:17:31.848887 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 02:17:31.848939 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:17:31.850553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 02:17:31.861267 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 02:17:31.862128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:17:31.863360 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 02:17:31.865153 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 02:17:31.866279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:17:31.868367 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 02:17:31.869327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 21 02:17:31.870416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 02:17:31.873422 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 02:17:31.881114 systemd-journald[1153]: Time spent on flushing to /var/log/journal/ceff544ea4514b23b7a5db0089bb5c43 is 26.901ms for 885 entries. Jun 21 02:17:31.881114 systemd-journald[1153]: System Journal (/var/log/journal/ceff544ea4514b23b7a5db0089bb5c43) is 8M, max 195.6M, 187.6M free. Jun 21 02:17:31.911934 systemd-journald[1153]: Received client request to flush runtime journal. Jun 21 02:17:31.911971 kernel: loop0: detected capacity change from 0 to 138376 Jun 21 02:17:31.886347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 02:17:31.888956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:17:31.890128 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 02:17:31.891316 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 02:17:31.892580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 02:17:31.895995 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 02:17:31.898752 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 02:17:31.917266 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 02:17:31.919304 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jun 21 02:17:31.919565 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jun 21 02:17:31.920198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 21 02:17:31.929843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 02:17:31.933372 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:17:31.937006 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 02:17:31.945441 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 02:17:31.953260 kernel: loop1: detected capacity change from 0 to 207008 Jun 21 02:17:31.963100 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 02:17:31.965510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 02:17:31.981300 kernel: loop2: detected capacity change from 0 to 107312 Jun 21 02:17:31.988841 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jun 21 02:17:31.988859 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jun 21 02:17:31.992833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:17:32.006241 kernel: loop3: detected capacity change from 0 to 138376 Jun 21 02:17:32.014231 kernel: loop4: detected capacity change from 0 to 207008 Jun 21 02:17:32.019228 kernel: loop5: detected capacity change from 0 to 107312 Jun 21 02:17:32.022348 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 21 02:17:32.022756 (sd-merge)[1227]: Merged extensions into '/usr'. Jun 21 02:17:32.026474 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 02:17:32.026495 systemd[1]: Reloading... Jun 21 02:17:32.095338 zram_generator::config[1257]: No configuration found. Jun 21 02:17:32.142190 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 21 02:17:32.171686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:17:32.234793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 02:17:32.234867 systemd[1]: Reloading finished in 208 ms. Jun 21 02:17:32.271939 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 02:17:32.273239 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 02:17:32.288708 systemd[1]: Starting ensure-sysext.service... Jun 21 02:17:32.290572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 02:17:32.301813 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Jun 21 02:17:32.301828 systemd[1]: Reloading... Jun 21 02:17:32.306815 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 02:17:32.306846 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 02:17:32.307074 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 02:17:32.307274 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 02:17:32.307872 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 02:17:32.308077 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jun 21 02:17:32.308123 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jun 21 02:17:32.310521 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. 
Jun 21 02:17:32.310535 systemd-tmpfiles[1289]: Skipping /boot Jun 21 02:17:32.318855 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 02:17:32.318870 systemd-tmpfiles[1289]: Skipping /boot Jun 21 02:17:32.341234 zram_generator::config[1316]: No configuration found. Jun 21 02:17:32.406540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:17:32.468074 systemd[1]: Reloading finished in 165 ms. Jun 21 02:17:32.492617 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 02:17:32.507266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:17:32.516507 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 02:17:32.518475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 02:17:32.535561 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 02:17:32.538278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 02:17:32.541413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:17:32.543516 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 02:17:32.546593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:17:32.560425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:17:32.562595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:17:32.564491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 21 02:17:32.566397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:17:32.566510 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:17:32.567370 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 02:17:32.569943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:17:32.571238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:17:32.574752 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:17:32.575092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:17:32.578910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:17:32.581425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:17:32.583183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:17:32.584029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:17:32.584127 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:17:32.587478 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 02:17:32.589576 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 02:17:32.592857 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jun 21 02:17:32.594302 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jun 21 02:17:32.596011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 02:17:32.597460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:17:32.597609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:17:32.599021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:17:32.599172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:17:32.600680 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:17:32.600836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:17:32.602307 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 02:17:32.610428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:17:32.613467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:17:32.615462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:17:32.620518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:17:32.625481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:17:32.626841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:17:32.626963 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jun 21 02:17:32.627076 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 02:17:32.627978 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:17:32.629878 augenrules[1396]: No rules Jun 21 02:17:32.630083 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 02:17:32.632272 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 02:17:32.634020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:17:32.634166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:17:32.637773 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 02:17:32.639776 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 02:17:32.639923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:17:32.641086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:17:32.641247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:17:32.642476 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:17:32.642627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:17:32.646570 systemd[1]: Finished ensure-sysext.service. Jun 21 02:17:32.658933 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 02:17:32.660517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:17:32.660581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 02:17:32.662281 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jun 21 02:17:32.678760 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 21 02:17:32.761185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 02:17:32.763820 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 02:17:32.802092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 02:17:32.804592 systemd-networkd[1439]: lo: Link UP Jun 21 02:17:32.804606 systemd-networkd[1439]: lo: Gained carrier Jun 21 02:17:32.805423 systemd-networkd[1439]: Enumeration completed Jun 21 02:17:32.805532 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:17:32.805834 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:17:32.805845 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:17:32.806614 systemd-networkd[1439]: eth0: Link UP Jun 21 02:17:32.806741 systemd-networkd[1439]: eth0: Gained carrier Jun 21 02:17:32.806761 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:17:32.810388 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 02:17:32.812655 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 02:17:32.813612 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 02:17:32.815452 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 02:17:32.821186 systemd-resolved[1355]: Positive Trust Anchors: Jun 21 02:17:32.821500 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 02:17:32.821577 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 02:17:32.824276 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:17:32.824807 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Jun 21 02:17:32.827535 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 21 02:17:32.827591 systemd-timesyncd[1441]: Initial clock synchronization to Sat 2025-06-21 02:17:32.591255 UTC. Jun 21 02:17:32.832488 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 02:17:32.833800 systemd-resolved[1355]: Defaulting to hostname 'linux'. Jun 21 02:17:32.836305 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 02:17:32.837222 systemd[1]: Reached target network.target - Network. Jun 21 02:17:32.837849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:17:32.839338 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:17:32.840705 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 02:17:32.842396 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 21 02:17:32.844428 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 02:17:32.845269 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 02:17:32.846113 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 02:17:32.848311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 02:17:32.848346 systemd[1]: Reached target paths.target - Path Units. Jun 21 02:17:32.848976 systemd[1]: Reached target timers.target - Timer Units. Jun 21 02:17:32.850533 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 02:17:32.853383 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 02:17:32.857179 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 02:17:32.861472 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 02:17:32.863373 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 02:17:32.875874 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 02:17:32.877580 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 02:17:32.878995 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 02:17:32.885462 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 02:17:32.886157 systemd[1]: Reached target basic.target - Basic System. Jun 21 02:17:32.886889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 02:17:32.886920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 02:17:32.887836 systemd[1]: Starting containerd.service - containerd container runtime... 
Jun 21 02:17:32.889498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 02:17:32.891031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 02:17:32.905995 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 02:17:32.907670 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 02:17:32.908417 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 02:17:32.909313 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 02:17:32.913238 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 02:17:32.915243 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 02:17:32.917857 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 02:17:32.920540 jq[1478]: false Jun 21 02:17:32.925905 extend-filesystems[1479]: Found /dev/vda6 Jun 21 02:17:32.928226 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 02:17:32.931329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:17:32.932993 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 02:17:32.933410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 02:17:32.934493 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 02:17:32.938192 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 21 02:17:32.943491 jq[1499]: true Jun 21 02:17:32.944563 extend-filesystems[1479]: Found /dev/vda9 Jun 21 02:17:32.947257 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 02:17:32.948788 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 02:17:32.948951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 02:17:32.949166 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 02:17:32.949363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 02:17:32.951068 extend-filesystems[1479]: Checking size of /dev/vda9 Jun 21 02:17:32.952485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 02:17:32.952653 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 02:17:32.977249 jq[1507]: true Jun 21 02:17:32.975640 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 02:17:32.982318 extend-filesystems[1479]: Resized partition /dev/vda9 Jun 21 02:17:32.991566 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 02:17:33.008221 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 21 02:17:33.015645 update_engine[1497]: I20250621 02:17:33.014892 1497 main.cc:92] Flatcar Update Engine starting Jun 21 02:17:33.021997 dbus-daemon[1476]: [system] SELinux support is enabled Jun 21 02:17:33.022167 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 02:17:33.024902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 02:17:33.024934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 21 02:17:33.025699 tar[1503]: linux-arm64/LICENSE Jun 21 02:17:33.025890 tar[1503]: linux-arm64/helm Jun 21 02:17:33.026069 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 02:17:33.026092 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 02:17:33.029388 systemd[1]: Started update-engine.service - Update Engine. Jun 21 02:17:33.031281 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 21 02:17:33.033421 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 02:17:33.042139 update_engine[1497]: I20250621 02:17:33.034365 1497 update_check_scheduler.cc:74] Next update check in 5m23s Jun 21 02:17:33.042373 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (Power Button) Jun 21 02:17:33.044354 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 02:17:33.044354 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 02:17:33.044354 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 21 02:17:33.042732 systemd-logind[1493]: New seat seat0. Jun 21 02:17:33.054151 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Jun 21 02:17:33.047850 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 02:17:33.048864 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 02:17:33.049162 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 02:17:33.051630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:17:33.055070 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jun 21 02:17:33.059556 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jun 21 02:17:33.061323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 02:17:33.062746 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 02:17:33.108455 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 02:17:33.205166 containerd[1518]: time="2025-06-21T02:17:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 02:17:33.206744 containerd[1518]: time="2025-06-21T02:17:33.206692732Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 02:17:33.215208 containerd[1518]: time="2025-06-21T02:17:33.215094202Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.046µs" Jun 21 02:17:33.215208 containerd[1518]: time="2025-06-21T02:17:33.215135001Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 02:17:33.215208 containerd[1518]: time="2025-06-21T02:17:33.215160660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 02:17:33.215355 containerd[1518]: time="2025-06-21T02:17:33.215331114Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 02:17:33.215392 containerd[1518]: time="2025-06-21T02:17:33.215357938Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 02:17:33.215392 containerd[1518]: time="2025-06-21T02:17:33.215386392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 
02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215439846Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215458984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215691859Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215707347Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215723729Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215736306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215807927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.215995501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.216026944Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.216037270Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 02:17:33.216394 containerd[1518]: time="2025-06-21T02:17:33.216076399Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 02:17:33.216853 containerd[1518]: time="2025-06-21T02:17:33.216471460Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 02:17:33.216853 containerd[1518]: time="2025-06-21T02:17:33.216577630Z" level=info msg="metadata content store policy set" policy=shared Jun 21 02:17:33.219594 containerd[1518]: time="2025-06-21T02:17:33.219565098Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 02:17:33.219663 containerd[1518]: time="2025-06-21T02:17:33.219613233Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 02:17:33.219663 containerd[1518]: time="2025-06-21T02:17:33.219634506Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 02:17:33.219663 containerd[1518]: time="2025-06-21T02:17:33.219646268Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 02:17:33.219663 containerd[1518]: time="2025-06-21T02:17:33.219657293Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219668511Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219682525Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 
containerd[1518]: time="2025-06-21T02:17:33.219693355Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219704147Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219714434Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219723130Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 02:17:33.219729 containerd[1518]: time="2025-06-21T02:17:33.219734232Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219849408Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219894244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219912294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219922659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219932325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 02:17:33.219946 containerd[1518]: time="2025-06-21T02:17:33.219941719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 02:17:33.220119 containerd[1518]: 
time="2025-06-21T02:17:33.219952666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 02:17:33.220119 containerd[1518]: time="2025-06-21T02:17:33.219963070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 02:17:33.220119 containerd[1518]: time="2025-06-21T02:17:33.219973240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 02:17:33.220119 containerd[1518]: time="2025-06-21T02:17:33.219983488Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 02:17:33.220119 containerd[1518]: time="2025-06-21T02:17:33.219992844Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 02:17:33.220266 containerd[1518]: time="2025-06-21T02:17:33.220187133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 02:17:33.220266 containerd[1518]: time="2025-06-21T02:17:33.220220323Z" level=info msg="Start snapshots syncer" Jun 21 02:17:33.220266 containerd[1518]: time="2025-06-21T02:17:33.220257007Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 02:17:33.220535 containerd[1518]: time="2025-06-21T02:17:33.220461738Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 02:17:33.220535 containerd[1518]: time="2025-06-21T02:17:33.220512513Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 02:17:33.220655 containerd[1518]: time="2025-06-21T02:17:33.220577923Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220676640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220711072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220722329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220735722Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220747407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 02:17:33.220763 containerd[1518]: time="2025-06-21T02:17:33.220757422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220767088Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220791699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220802491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220813942Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220847288Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220859787Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 02:17:33.220874 containerd[1518]: time="2025-06-21T02:17:33.220868250Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 02:17:33.220980 containerd[1518]: time="2025-06-21T02:17:33.220877994Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 02:17:33.220980 containerd[1518]: time="2025-06-21T02:17:33.220885641Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 02:17:33.220980 containerd[1518]: time="2025-06-21T02:17:33.220897869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 02:17:33.220980 containerd[1518]: time="2025-06-21T02:17:33.220908272Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 02:17:33.221061 containerd[1518]: time="2025-06-21T02:17:33.220981679Z" level=info msg="runtime interface created" Jun 21 02:17:33.221061 containerd[1518]: time="2025-06-21T02:17:33.220986648Z" level=info msg="created NRI interface" Jun 21 02:17:33.221061 containerd[1518]: time="2025-06-21T02:17:33.221003883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 02:17:33.221061 containerd[1518]: time="2025-06-21T02:17:33.221016111Z" level=info msg="Connect containerd service" Jun 21 02:17:33.221061 containerd[1518]: time="2025-06-21T02:17:33.221041305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 02:17:33.221827 
containerd[1518]: time="2025-06-21T02:17:33.221797964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 02:17:33.332935 containerd[1518]: time="2025-06-21T02:17:33.332887838Z" level=info msg="Start subscribing containerd event" Jun 21 02:17:33.333113 containerd[1518]: time="2025-06-21T02:17:33.333078051Z" level=info msg="Start recovering state" Jun 21 02:17:33.333289 containerd[1518]: time="2025-06-21T02:17:33.333242643Z" level=info msg="Start event monitor" Jun 21 02:17:33.333289 containerd[1518]: time="2025-06-21T02:17:33.333268924Z" level=info msg="Start cni network conf syncer for default" Jun 21 02:17:33.333340 containerd[1518]: time="2025-06-21T02:17:33.333315895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 02:17:33.333376 containerd[1518]: time="2025-06-21T02:17:33.333280065Z" level=info msg="Start streaming server" Jun 21 02:17:33.333481 containerd[1518]: time="2025-06-21T02:17:33.333422647Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 02:17:33.333481 containerd[1518]: time="2025-06-21T02:17:33.333432507Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 02:17:33.333481 containerd[1518]: time="2025-06-21T02:17:33.333434215Z" level=info msg="runtime interface starting up..." Jun 21 02:17:33.333481 containerd[1518]: time="2025-06-21T02:17:33.333459641Z" level=info msg="starting plugins..." Jun 21 02:17:33.333481 containerd[1518]: time="2025-06-21T02:17:33.333477770Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 02:17:33.334554 containerd[1518]: time="2025-06-21T02:17:33.333614257Z" level=info msg="containerd successfully booted in 0.128812s" Jun 21 02:17:33.333699 systemd[1]: Started containerd.service - containerd container runtime. 
Jun 21 02:17:33.407682 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 02:17:33.426247 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 02:17:33.429367 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 02:17:33.431131 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:40730.service - OpenSSH per-connection server daemon (10.0.0.1:40730). Jun 21 02:17:33.437341 tar[1503]: linux-arm64/README.md Jun 21 02:17:33.443916 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 02:17:33.444119 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 02:17:33.447427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 02:17:33.450344 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 02:17:33.476344 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 02:17:33.479042 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 02:17:33.480963 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 21 02:17:33.481942 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 02:17:33.502254 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 40730 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:17:33.504000 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:17:33.511875 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 02:17:33.513664 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 02:17:33.520165 systemd-logind[1493]: New session 1 of user core. Jun 21 02:17:33.534347 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 02:17:33.537405 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 21 02:17:33.555758 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 02:17:33.557832 systemd-logind[1493]: New session c1 of user core. Jun 21 02:17:33.648762 systemd[1597]: Queued start job for default target default.target. Jun 21 02:17:33.666087 systemd[1597]: Created slice app.slice - User Application Slice. Jun 21 02:17:33.666116 systemd[1597]: Reached target paths.target - Paths. Jun 21 02:17:33.666153 systemd[1597]: Reached target timers.target - Timers. Jun 21 02:17:33.667355 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 02:17:33.675945 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 02:17:33.676001 systemd[1597]: Reached target sockets.target - Sockets. Jun 21 02:17:33.676036 systemd[1597]: Reached target basic.target - Basic System. Jun 21 02:17:33.676060 systemd[1597]: Reached target default.target - Main User Target. Jun 21 02:17:33.676083 systemd[1597]: Startup finished in 113ms. Jun 21 02:17:33.676338 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 02:17:33.679568 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 02:17:33.740750 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:46310.service - OpenSSH per-connection server daemon (10.0.0.1:46310). Jun 21 02:17:33.782091 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 46310 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:17:33.783290 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:17:33.786992 systemd-logind[1493]: New session 2 of user core. Jun 21 02:17:33.798363 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 21 02:17:33.848615 sshd[1610]: Connection closed by 10.0.0.1 port 46310 Jun 21 02:17:33.848897 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Jun 21 02:17:33.861380 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:46310.service: Deactivated successfully. Jun 21 02:17:33.863530 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 02:17:33.864968 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. Jun 21 02:17:33.866266 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322). Jun 21 02:17:33.867961 systemd-logind[1493]: Removed session 2. Jun 21 02:17:33.913496 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:17:33.914819 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:17:33.918712 systemd-logind[1493]: New session 3 of user core. Jun 21 02:17:33.934391 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 02:17:33.984475 sshd[1618]: Connection closed by 10.0.0.1 port 46322 Jun 21 02:17:33.984740 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Jun 21 02:17:33.987811 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:46322.service: Deactivated successfully. Jun 21 02:17:33.990517 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 02:17:33.991111 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. Jun 21 02:17:33.992574 systemd-logind[1493]: Removed session 3. Jun 21 02:17:34.080360 systemd-networkd[1439]: eth0: Gained IPv6LL Jun 21 02:17:34.082543 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 02:17:34.083859 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 02:17:34.085938 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jun 21 02:17:34.088141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:17:34.089853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 21 02:17:34.110257 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 21 02:17:34.110473 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 21 02:17:34.112360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 21 02:17:34.118031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 21 02:17:34.626173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:17:34.627336 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 21 02:17:34.628288 systemd[1]: Startup finished in 2.097s (kernel) + 11.626s (initrd) + 3.431s (userspace) = 17.155s.
Jun 21 02:17:34.629416 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:17:35.031706 kubelet[1646]: E0621 02:17:35.031573 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:17:35.034062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:17:35.034196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:17:35.034483 systemd[1]: kubelet.service: Consumed 815ms CPU time, 256M memory peak.
Jun 21 02:17:43.844254 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:46056.service - OpenSSH per-connection server daemon (10.0.0.1:46056).
Jun 21 02:17:43.909220 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 46056 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:43.910357 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:43.914256 systemd-logind[1493]: New session 4 of user core.
Jun 21 02:17:43.928412 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 21 02:17:43.978397 sshd[1661]: Connection closed by 10.0.0.1 port 46056
Jun 21 02:17:43.978751 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Jun 21 02:17:43.989153 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:46056.service: Deactivated successfully.
Jun 21 02:17:43.992423 systemd[1]: session-4.scope: Deactivated successfully.
Jun 21 02:17:43.993155 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
Jun 21 02:17:43.995113 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072).
Jun 21 02:17:43.996311 systemd-logind[1493]: Removed session 4.
Jun 21 02:17:44.054164 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:44.055959 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:44.060725 systemd-logind[1493]: New session 5 of user core.
Jun 21 02:17:44.068370 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 21 02:17:44.116366 sshd[1669]: Connection closed by 10.0.0.1 port 46072
Jun 21 02:17:44.117627 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Jun 21 02:17:44.127323 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:46072.service: Deactivated successfully.
Jun 21 02:17:44.129539 systemd[1]: session-5.scope: Deactivated successfully.
Jun 21 02:17:44.133261 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
Jun 21 02:17:44.135422 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:46080.service - OpenSSH per-connection server daemon (10.0.0.1:46080).
Jun 21 02:17:44.138342 systemd-logind[1493]: Removed session 5.
Jun 21 02:17:44.192341 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 46080 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:44.193560 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:44.198252 systemd-logind[1493]: New session 6 of user core.
Jun 21 02:17:44.210429 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 21 02:17:44.261343 sshd[1677]: Connection closed by 10.0.0.1 port 46080
Jun 21 02:17:44.261274 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Jun 21 02:17:44.287092 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:46080.service: Deactivated successfully.
Jun 21 02:17:44.290478 systemd[1]: session-6.scope: Deactivated successfully.
Jun 21 02:17:44.291186 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
Jun 21 02:17:44.293476 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:46082.service - OpenSSH per-connection server daemon (10.0.0.1:46082).
Jun 21 02:17:44.294126 systemd-logind[1493]: Removed session 6.
Jun 21 02:17:44.354018 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 46082 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:44.354763 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:44.358701 systemd-logind[1493]: New session 7 of user core.
Jun 21 02:17:44.374349 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 21 02:17:44.432702 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 21 02:17:44.432968 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:17:44.443787 sudo[1686]: pam_unix(sudo:session): session closed for user root
Jun 21 02:17:44.445107 sshd[1685]: Connection closed by 10.0.0.1 port 46082
Jun 21 02:17:44.445595 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Jun 21 02:17:44.458436 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:46082.service: Deactivated successfully.
Jun 21 02:17:44.461416 systemd[1]: session-7.scope: Deactivated successfully.
Jun 21 02:17:44.462188 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit.
Jun 21 02:17:44.464533 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:46088.service - OpenSSH per-connection server daemon (10.0.0.1:46088).
Jun 21 02:17:44.465166 systemd-logind[1493]: Removed session 7.
Jun 21 02:17:44.531358 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 46088 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:44.529768 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:44.535419 systemd-logind[1493]: New session 8 of user core.
Jun 21 02:17:44.542359 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 21 02:17:44.592283 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 21 02:17:44.592606 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:17:44.599118 sudo[1696]: pam_unix(sudo:session): session closed for user root
Jun 21 02:17:44.603883 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 21 02:17:44.604189 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:17:44.613019 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 21 02:17:44.648073 augenrules[1718]: No rules
Jun 21 02:17:44.649388 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 21 02:17:44.650288 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 21 02:17:44.651221 sudo[1695]: pam_unix(sudo:session): session closed for user root
Jun 21 02:17:44.652361 sshd[1694]: Connection closed by 10.0.0.1 port 46088
Jun 21 02:17:44.652707 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
Jun 21 02:17:44.667070 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:46088.service: Deactivated successfully.
Jun 21 02:17:44.669734 systemd[1]: session-8.scope: Deactivated successfully.
Jun 21 02:17:44.673114 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit.
Jun 21 02:17:44.677392 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:46092.service - OpenSSH per-connection server daemon (10.0.0.1:46092).
Jun 21 02:17:44.678591 systemd-logind[1493]: Removed session 8.
Jun 21 02:17:44.729509 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 46092 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:17:44.732995 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:17:44.736857 systemd-logind[1493]: New session 9 of user core.
Jun 21 02:17:44.743362 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 21 02:17:44.792764 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 21 02:17:44.793025 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 02:17:45.173134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 21 02:17:45.174395 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 21 02:17:45.175397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:17:45.189510 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 21 02:17:45.339819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:17:45.351524 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:17:45.399821 kubelet[1765]: E0621 02:17:45.399754 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:17:45.402918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:17:45.403046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:17:45.403330 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108.2M memory peak.
Jun 21 02:17:45.456677 dockerd[1751]: time="2025-06-21T02:17:45.456559084Z" level=info msg="Starting up"
Jun 21 02:17:45.457856 dockerd[1751]: time="2025-06-21T02:17:45.457823997Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 21 02:17:45.569638 dockerd[1751]: time="2025-06-21T02:17:45.569457572Z" level=info msg="Loading containers: start."
Jun 21 02:17:45.579750 kernel: Initializing XFRM netlink socket
Jun 21 02:17:45.774765 systemd-networkd[1439]: docker0: Link UP
Jun 21 02:17:45.777648 dockerd[1751]: time="2025-06-21T02:17:45.777605457Z" level=info msg="Loading containers: done."
Jun 21 02:17:45.794910 dockerd[1751]: time="2025-06-21T02:17:45.794854462Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 21 02:17:45.795032 dockerd[1751]: time="2025-06-21T02:17:45.794946352Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 21 02:17:45.795058 dockerd[1751]: time="2025-06-21T02:17:45.795048183Z" level=info msg="Initializing buildkit"
Jun 21 02:17:45.817749 dockerd[1751]: time="2025-06-21T02:17:45.817700343Z" level=info msg="Completed buildkit initialization"
Jun 21 02:17:45.824985 dockerd[1751]: time="2025-06-21T02:17:45.824935021Z" level=info msg="Daemon has completed initialization"
Jun 21 02:17:45.825122 dockerd[1751]: time="2025-06-21T02:17:45.825036016Z" level=info msg="API listen on /run/docker.sock"
Jun 21 02:17:45.825227 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 21 02:17:46.604388 containerd[1518]: time="2025-06-21T02:17:46.604339959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jun 21 02:17:47.311745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282762202.mount: Deactivated successfully.
Jun 21 02:17:48.562773 containerd[1518]: time="2025-06-21T02:17:48.562724100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:48.563679 containerd[1518]: time="2025-06-21T02:17:48.563491630Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jun 21 02:17:48.564253 containerd[1518]: time="2025-06-21T02:17:48.564224499Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:48.567003 containerd[1518]: time="2025-06-21T02:17:48.566966212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:48.567901 containerd[1518]: time="2025-06-21T02:17:48.567867765Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.963484303s"
Jun 21 02:17:48.567999 containerd[1518]: time="2025-06-21T02:17:48.567984140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jun 21 02:17:48.568719 containerd[1518]: time="2025-06-21T02:17:48.568693941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 21 02:17:50.041871 containerd[1518]: time="2025-06-21T02:17:50.041818167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:50.043570 containerd[1518]: time="2025-06-21T02:17:50.043540816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jun 21 02:17:50.044141 containerd[1518]: time="2025-06-21T02:17:50.044098231Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:50.050289 containerd[1518]: time="2025-06-21T02:17:50.050237208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:50.051335 containerd[1518]: time="2025-06-21T02:17:50.051223590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.48246871s"
Jun 21 02:17:50.051335 containerd[1518]: time="2025-06-21T02:17:50.051254296Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jun 21 02:17:50.051723 containerd[1518]: time="2025-06-21T02:17:50.051696981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jun 21 02:17:51.189982 containerd[1518]: time="2025-06-21T02:17:51.189934999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:51.190878 containerd[1518]: time="2025-06-21T02:17:51.190684713Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jun 21 02:17:51.191491 containerd[1518]: time="2025-06-21T02:17:51.191462272Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:51.194010 containerd[1518]: time="2025-06-21T02:17:51.193948378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:51.195268 containerd[1518]: time="2025-06-21T02:17:51.195228034Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.143497353s"
Jun 21 02:17:51.195268 containerd[1518]: time="2025-06-21T02:17:51.195264496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jun 21 02:17:51.195688 containerd[1518]: time="2025-06-21T02:17:51.195637737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jun 21 02:17:52.214520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount909540446.mount: Deactivated successfully.
Jun 21 02:17:52.649469 containerd[1518]: time="2025-06-21T02:17:52.649350551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:52.650050 containerd[1518]: time="2025-06-21T02:17:52.650019984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jun 21 02:17:52.650909 containerd[1518]: time="2025-06-21T02:17:52.650864767Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:52.652977 containerd[1518]: time="2025-06-21T02:17:52.652940228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:52.653888 containerd[1518]: time="2025-06-21T02:17:52.653855725Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.458189302s"
Jun 21 02:17:52.653926 containerd[1518]: time="2025-06-21T02:17:52.653886453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jun 21 02:17:52.654446 containerd[1518]: time="2025-06-21T02:17:52.654405239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 21 02:17:53.346097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200986722.mount: Deactivated successfully.
Jun 21 02:17:54.270881 containerd[1518]: time="2025-06-21T02:17:54.270827574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:54.271523 containerd[1518]: time="2025-06-21T02:17:54.271492982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jun 21 02:17:54.272079 containerd[1518]: time="2025-06-21T02:17:54.272053099Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:54.275150 containerd[1518]: time="2025-06-21T02:17:54.275097286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:54.276084 containerd[1518]: time="2025-06-21T02:17:54.275992043Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.621552359s"
Jun 21 02:17:54.276084 containerd[1518]: time="2025-06-21T02:17:54.276025104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jun 21 02:17:54.276730 containerd[1518]: time="2025-06-21T02:17:54.276588814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 21 02:17:54.763995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519472241.mount: Deactivated successfully.
Jun 21 02:17:54.768307 containerd[1518]: time="2025-06-21T02:17:54.767670372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:17:54.768307 containerd[1518]: time="2025-06-21T02:17:54.768301681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jun 21 02:17:54.768810 containerd[1518]: time="2025-06-21T02:17:54.768770641Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:17:54.770648 containerd[1518]: time="2025-06-21T02:17:54.770621366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 21 02:17:54.771526 containerd[1518]: time="2025-06-21T02:17:54.771498315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 494.878956ms"
Jun 21 02:17:54.771632 containerd[1518]: time="2025-06-21T02:17:54.771613429Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 21 02:17:54.772359 containerd[1518]: time="2025-06-21T02:17:54.772333898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jun 21 02:17:55.330793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177189305.mount: Deactivated successfully.
Jun 21 02:17:55.641817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 21 02:17:55.645420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:17:55.801069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:17:55.809024 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 02:17:55.843783 kubelet[2163]: E0621 02:17:55.843717 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 02:17:55.846114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 02:17:55.846263 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 02:17:55.846572 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.3M memory peak.
Jun 21 02:17:57.605355 containerd[1518]: time="2025-06-21T02:17:57.605306968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:57.606049 containerd[1518]: time="2025-06-21T02:17:57.606018355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jun 21 02:17:57.607103 containerd[1518]: time="2025-06-21T02:17:57.607026905Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:57.609629 containerd[1518]: time="2025-06-21T02:17:57.609568218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:17:57.610766 containerd[1518]: time="2025-06-21T02:17:57.610724431Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.838359505s"
Jun 21 02:17:57.610766 containerd[1518]: time="2025-06-21T02:17:57.610753676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jun 21 02:18:03.164910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:18:03.165049 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.3M memory peak.
Jun 21 02:18:03.167026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:18:03.190136 systemd[1]: Reload requested from client PID 2209 ('systemctl') (unit session-9.scope)...
Jun 21 02:18:03.190157 systemd[1]: Reloading...
Jun 21 02:18:03.258273 zram_generator::config[2253]: No configuration found.
Jun 21 02:18:03.356543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 02:18:03.443371 systemd[1]: Reloading finished in 252 ms.
Jun 21 02:18:03.505738 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 21 02:18:03.505990 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 21 02:18:03.507303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:18:03.507359 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95M memory peak.
Jun 21 02:18:03.509310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 02:18:03.629645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 02:18:03.633796 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 21 02:18:03.666141 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 02:18:03.666141 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 21 02:18:03.666141 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 02:18:03.666495 kubelet[2297]: I0621 02:18:03.666233 2297 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 21 02:18:04.758955 kubelet[2297]: I0621 02:18:04.758916 2297 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 21 02:18:04.760229 kubelet[2297]: I0621 02:18:04.759316 2297 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 21 02:18:04.760229 kubelet[2297]: I0621 02:18:04.759627 2297 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 21 02:18:04.813460 kubelet[2297]: E0621 02:18:04.813385 2297 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError"
Jun 21 02:18:04.814791 kubelet[2297]: I0621 02:18:04.814550 2297 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 21 02:18:04.820057 kubelet[2297]: I0621 02:18:04.820017 2297 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 21 02:18:04.822751 kubelet[2297]: I0621 02:18:04.822720 2297 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 21 02:18:04.823362 kubelet[2297]: I0621 02:18:04.823314 2297 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 21 02:18:04.823541 kubelet[2297]: I0621 02:18:04.823356 2297 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 21 02:18:04.823625 kubelet[2297]: I0621 02:18:04.823609 2297 topology_manager.go:138] "Creating topology manager with none policy"
Jun 21 02:18:04.823625 kubelet[2297]: I0621 02:18:04.823620 2297 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 02:18:04.823830 kubelet[2297]: I0621 02:18:04.823803 2297 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:18:04.827861 kubelet[2297]: I0621 02:18:04.827830 2297 kubelet.go:446] "Attempting to sync node with API server" Jun 21 02:18:04.827861 kubelet[2297]: I0621 02:18:04.827860 2297 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 02:18:04.827908 kubelet[2297]: I0621 02:18:04.827883 2297 kubelet.go:352] "Adding apiserver pod source" Jun 21 02:18:04.827908 kubelet[2297]: I0621 02:18:04.827893 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 02:18:04.831457 kubelet[2297]: W0621 02:18:04.831386 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jun 21 02:18:04.831539 kubelet[2297]: E0621 02:18:04.831470 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:18:04.833217 kubelet[2297]: W0621 02:18:04.832658 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jun 21 02:18:04.833217 kubelet[2297]: E0621 02:18:04.832711 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:18:04.833217 kubelet[2297]: I0621 02:18:04.833090 2297 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 02:18:04.834049 kubelet[2297]: I0621 02:18:04.833799 2297 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 02:18:04.834049 kubelet[2297]: W0621 02:18:04.833916 2297 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 02:18:04.834919 kubelet[2297]: I0621 02:18:04.834869 2297 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 02:18:04.837676 kubelet[2297]: I0621 02:18:04.837624 2297 server.go:1287] "Started kubelet" Jun 21 02:18:04.838912 kubelet[2297]: I0621 02:18:04.838574 2297 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 02:18:04.839531 kubelet[2297]: I0621 02:18:04.839508 2297 server.go:479] "Adding debug handlers to kubelet server" Jun 21 02:18:04.839858 kubelet[2297]: I0621 02:18:04.839807 2297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 02:18:04.840135 kubelet[2297]: I0621 02:18:04.840114 2297 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 02:18:04.841333 kubelet[2297]: I0621 02:18:04.841302 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 02:18:04.842254 kubelet[2297]: I0621 02:18:04.841522 2297 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 02:18:04.842254 kubelet[2297]: E0621 02:18:04.841814 2297 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jun 21 02:18:04.842254 kubelet[2297]: I0621 02:18:04.841843 2297 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 02:18:04.842254 kubelet[2297]: I0621 02:18:04.842022 2297 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 02:18:04.842254 kubelet[2297]: I0621 02:18:04.842089 2297 reconciler.go:26] "Reconciler: start to sync state" Jun 21 02:18:04.843413 kubelet[2297]: W0621 02:18:04.842611 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jun 21 02:18:04.843413 kubelet[2297]: E0621 02:18:04.842941 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:18:04.843413 kubelet[2297]: I0621 02:18:04.843391 2297 factory.go:221] Registration of the systemd container factory successfully Jun 21 02:18:04.843529 kubelet[2297]: I0621 02:18:04.843477 2297 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 02:18:04.843715 kubelet[2297]: E0621 02:18:04.843699 2297 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 02:18:04.844059 kubelet[2297]: E0621 02:18:04.844028 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms" Jun 21 02:18:04.844112 kubelet[2297]: E0621 02:18:04.843767 2297 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184aed44c6e6358b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-21 02:18:04.837393803 +0000 UTC m=+1.200240269,LastTimestamp:2025-06-21 02:18:04.837393803 +0000 UTC m=+1.200240269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 21 02:18:04.844728 kubelet[2297]: I0621 02:18:04.844699 2297 factory.go:221] Registration of the containerd container factory successfully Jun 21 02:18:04.855764 kubelet[2297]: I0621 02:18:04.855495 2297 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 02:18:04.855764 kubelet[2297]: I0621 02:18:04.855515 2297 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 02:18:04.855764 kubelet[2297]: I0621 02:18:04.855535 2297 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:18:04.856234 kubelet[2297]: I0621 02:18:04.856177 2297 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 21 02:18:04.857210 kubelet[2297]: I0621 02:18:04.857174 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 02:18:04.857210 kubelet[2297]: I0621 02:18:04.857197 2297 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 02:18:04.857276 kubelet[2297]: I0621 02:18:04.857228 2297 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 02:18:04.857276 kubelet[2297]: I0621 02:18:04.857236 2297 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 02:18:04.857314 kubelet[2297]: E0621 02:18:04.857279 2297 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 02:18:04.889144 kubelet[2297]: I0621 02:18:04.889016 2297 policy_none.go:49] "None policy: Start" Jun 21 02:18:04.889144 kubelet[2297]: I0621 02:18:04.889057 2297 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 02:18:04.889144 kubelet[2297]: I0621 02:18:04.889071 2297 state_mem.go:35] "Initializing new in-memory state store" Jun 21 02:18:04.890110 kubelet[2297]: W0621 02:18:04.890060 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jun 21 02:18:04.890240 kubelet[2297]: E0621 02:18:04.890220 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:18:04.894299 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 21 02:18:04.910372 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 02:18:04.913340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 02:18:04.923264 kubelet[2297]: I0621 02:18:04.923201 2297 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 02:18:04.923508 kubelet[2297]: I0621 02:18:04.923481 2297 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:18:04.923556 kubelet[2297]: I0621 02:18:04.923498 2297 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:18:04.923793 kubelet[2297]: I0621 02:18:04.923775 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:18:04.925721 kubelet[2297]: E0621 02:18:04.925648 2297 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 02:18:04.925721 kubelet[2297]: E0621 02:18:04.925690 2297 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 21 02:18:04.966092 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jun 21 02:18:04.989870 kubelet[2297]: E0621 02:18:04.989837 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:04.994594 systemd[1]: Created slice kubepods-burstable-pod856677ffb957ad85ee1026e14b97d83d.slice - libcontainer container kubepods-burstable-pod856677ffb957ad85ee1026e14b97d83d.slice. 
Jun 21 02:18:05.016712 kubelet[2297]: E0621 02:18:05.016626 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:05.020487 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jun 21 02:18:05.022329 kubelet[2297]: E0621 02:18:05.022296 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:05.025245 kubelet[2297]: I0621 02:18:05.025177 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:18:05.025694 kubelet[2297]: E0621 02:18:05.025665 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Jun 21 02:18:05.045168 kubelet[2297]: E0621 02:18:05.045128 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" Jun 21 02:18:05.143657 kubelet[2297]: I0621 02:18:05.143615 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:05.143657 kubelet[2297]: I0621 02:18:05.143651 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:05.143735 kubelet[2297]: I0621 02:18:05.143672 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:05.143735 kubelet[2297]: I0621 02:18:05.143692 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:05.143735 kubelet[2297]: I0621 02:18:05.143709 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:05.143735 kubelet[2297]: I0621 02:18:05.143725 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:05.143817 kubelet[2297]: I0621 02:18:05.143739 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:18:05.143817 kubelet[2297]: I0621 02:18:05.143753 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:05.143817 kubelet[2297]: I0621 02:18:05.143768 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:05.227243 kubelet[2297]: I0621 02:18:05.227188 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:18:05.227563 kubelet[2297]: E0621 02:18:05.227538 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Jun 21 02:18:05.291522 kubelet[2297]: E0621 02:18:05.291413 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.292550 containerd[1518]: time="2025-06-21T02:18:05.292500734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:05.317616 kubelet[2297]: E0621 02:18:05.317574 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.318156 containerd[1518]: time="2025-06-21T02:18:05.318120209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:856677ffb957ad85ee1026e14b97d83d,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:05.323552 kubelet[2297]: E0621 02:18:05.323515 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.324029 containerd[1518]: time="2025-06-21T02:18:05.323910850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:05.445991 kubelet[2297]: E0621 02:18:05.445952 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" Jun 21 02:18:05.448724 containerd[1518]: time="2025-06-21T02:18:05.448601123Z" level=info msg="connecting to shim 0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a" address="unix:///run/containerd/s/f8c4f4927eaa836e04b9d94da010a16d38994c966fa79768797148470f1d44c4" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:05.449814 containerd[1518]: time="2025-06-21T02:18:05.449767492Z" level=info msg="connecting to shim c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3" address="unix:///run/containerd/s/87b0fcacd4f080fda631498212d66ec0bb25cee8d696e6c82163990b4c46f851" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:05.453424 containerd[1518]: time="2025-06-21T02:18:05.453369806Z" level=info msg="connecting to shim a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87" 
address="unix:///run/containerd/s/eed503d3ed7ba204fff66ce5828a0f3751deea69a94607622c7be567a6380312" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:05.479443 systemd[1]: Started cri-containerd-c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3.scope - libcontainer container c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3. Jun 21 02:18:05.483482 systemd[1]: Started cri-containerd-0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a.scope - libcontainer container 0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a. Jun 21 02:18:05.484725 systemd[1]: Started cri-containerd-a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87.scope - libcontainer container a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87. Jun 21 02:18:05.529003 containerd[1518]: time="2025-06-21T02:18:05.528954532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a\"" Jun 21 02:18:05.529982 kubelet[2297]: E0621 02:18:05.529956 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.531832 containerd[1518]: time="2025-06-21T02:18:05.531797789Z" level=info msg="CreateContainer within sandbox \"0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 02:18:05.539359 containerd[1518]: time="2025-06-21T02:18:05.539318283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:856677ffb957ad85ee1026e14b97d83d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3\"" Jun 21 02:18:05.540273 kubelet[2297]: E0621 
02:18:05.540246 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.541190 containerd[1518]: time="2025-06-21T02:18:05.541158423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87\"" Jun 21 02:18:05.541686 kubelet[2297]: E0621 02:18:05.541616 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.543045 containerd[1518]: time="2025-06-21T02:18:05.543011085Z" level=info msg="CreateContainer within sandbox \"c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 02:18:05.543593 containerd[1518]: time="2025-06-21T02:18:05.543570527Z" level=info msg="CreateContainer within sandbox \"a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 02:18:05.546982 containerd[1518]: time="2025-06-21T02:18:05.546944505Z" level=info msg="Container de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:05.556387 containerd[1518]: time="2025-06-21T02:18:05.556308779Z" level=info msg="Container 1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:05.558068 containerd[1518]: time="2025-06-21T02:18:05.558027950Z" level=info msg="Container 236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:05.560485 containerd[1518]: 
time="2025-06-21T02:18:05.560434374Z" level=info msg="CreateContainer within sandbox \"0d0863785bfd2dfc9515468fcce9e3b4f3e557665de8ee9f560a4da9c042a84a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188\"" Jun 21 02:18:05.561113 containerd[1518]: time="2025-06-21T02:18:05.561074743Z" level=info msg="StartContainer for \"de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188\"" Jun 21 02:18:05.562271 containerd[1518]: time="2025-06-21T02:18:05.562239152Z" level=info msg="connecting to shim de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188" address="unix:///run/containerd/s/f8c4f4927eaa836e04b9d94da010a16d38994c966fa79768797148470f1d44c4" protocol=ttrpc version=3 Jun 21 02:18:05.563218 containerd[1518]: time="2025-06-21T02:18:05.563100057Z" level=info msg="CreateContainer within sandbox \"a165594f297e11d7afa6d95ea2170ddd17ae4dfd4ca193550ec3ac124897db87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230\"" Jun 21 02:18:05.563735 containerd[1518]: time="2025-06-21T02:18:05.563694023Z" level=info msg="StartContainer for \"1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230\"" Jun 21 02:18:05.565006 containerd[1518]: time="2025-06-21T02:18:05.564969680Z" level=info msg="connecting to shim 1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230" address="unix:///run/containerd/s/eed503d3ed7ba204fff66ce5828a0f3751deea69a94607622c7be567a6380312" protocol=ttrpc version=3 Jun 21 02:18:05.566827 containerd[1518]: time="2025-06-21T02:18:05.566774338Z" level=info msg="CreateContainer within sandbox \"c9d6c963511007f1aded1f2ad3a96a369ad57c3adbabb39cc4290edfd27ca8c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e\"" Jun 21 02:18:05.567402 
containerd[1518]: time="2025-06-21T02:18:05.567360542Z" level=info msg="StartContainer for \"236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e\"" Jun 21 02:18:05.568737 containerd[1518]: time="2025-06-21T02:18:05.568687244Z" level=info msg="connecting to shim 236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e" address="unix:///run/containerd/s/87b0fcacd4f080fda631498212d66ec0bb25cee8d696e6c82163990b4c46f851" protocol=ttrpc version=3 Jun 21 02:18:05.584424 systemd[1]: Started cri-containerd-1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230.scope - libcontainer container 1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230. Jun 21 02:18:05.585836 systemd[1]: Started cri-containerd-de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188.scope - libcontainer container de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188. Jun 21 02:18:05.589353 systemd[1]: Started cri-containerd-236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e.scope - libcontainer container 236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e. 
Jun 21 02:18:05.629904 kubelet[2297]: I0621 02:18:05.629862 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:18:05.630724 kubelet[2297]: E0621 02:18:05.630555 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Jun 21 02:18:05.669498 containerd[1518]: time="2025-06-21T02:18:05.669259156Z" level=info msg="StartContainer for \"de699c55036be097d95ae52eb4c286e040beeac17f4a88319a8193758af3a188\" returns successfully" Jun 21 02:18:05.676945 containerd[1518]: time="2025-06-21T02:18:05.671036331Z" level=info msg="StartContainer for \"236f302c403d236ca6b16222d9db7648c9760817b07568561f456a00de1b969e\" returns successfully" Jun 21 02:18:05.676945 containerd[1518]: time="2025-06-21T02:18:05.671148020Z" level=info msg="StartContainer for \"1d980bc7d13900a735f4f0dc49c5b49630a2839cdc2a0940acd709100b53b230\" returns successfully" Jun 21 02:18:05.711756 kubelet[2297]: W0621 02:18:05.709302 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused Jun 21 02:18:05.711756 kubelet[2297]: E0621 02:18:05.709369 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:18:05.873341 kubelet[2297]: E0621 02:18:05.873235 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:05.873652 kubelet[2297]: E0621 
02:18:05.873396 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.875531 kubelet[2297]: E0621 02:18:05.875501 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:05.875647 kubelet[2297]: E0621 02:18:05.875628 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:05.878509 kubelet[2297]: E0621 02:18:05.878483 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:05.878625 kubelet[2297]: E0621 02:18:05.878607 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:06.433474 kubelet[2297]: I0621 02:18:06.433421 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:18:06.882197 kubelet[2297]: E0621 02:18:06.881989 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:06.882197 kubelet[2297]: E0621 02:18:06.882118 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 02:18:06.882197 kubelet[2297]: E0621 02:18:06.882145 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:06.882617 kubelet[2297]: E0621 02:18:06.882589 2297 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:07.452126 kubelet[2297]: E0621 02:18:07.452086 2297 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 21 02:18:07.488731 kubelet[2297]: I0621 02:18:07.488690 2297 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 02:18:07.488731 kubelet[2297]: E0621 02:18:07.488730 2297 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 21 02:18:07.543935 kubelet[2297]: I0621 02:18:07.543897 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 02:18:07.555500 kubelet[2297]: E0621 02:18:07.555466 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jun 21 02:18:07.555500 kubelet[2297]: I0621 02:18:07.555496 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:07.558300 kubelet[2297]: E0621 02:18:07.558273 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:07.558300 kubelet[2297]: I0621 02:18:07.558302 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:07.561066 kubelet[2297]: E0621 02:18:07.561043 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:07.830058 kubelet[2297]: I0621 02:18:07.829714 2297 apiserver.go:52] "Watching apiserver" Jun 21 02:18:07.842320 kubelet[2297]: I0621 02:18:07.842275 2297 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 02:18:08.895919 kubelet[2297]: I0621 02:18:08.895883 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:08.901154 kubelet[2297]: E0621 02:18:08.901125 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:09.608560 systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-9.scope)... Jun 21 02:18:09.608576 systemd[1]: Reloading... Jun 21 02:18:09.676238 zram_generator::config[2620]: No configuration found. Jun 21 02:18:09.808061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:18:09.885337 kubelet[2297]: E0621 02:18:09.885003 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:09.905501 systemd[1]: Reloading finished in 296 ms. Jun 21 02:18:09.935983 kubelet[2297]: I0621 02:18:09.935920 2297 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 02:18:09.936137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:18:09.959565 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 02:18:09.959844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 02:18:09.959926 systemd[1]: kubelet.service: Consumed 1.630s CPU time, 128.5M memory peak. Jun 21 02:18:09.962035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:18:10.112974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:18:10.126654 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 02:18:10.168298 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:18:10.168298 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 02:18:10.168298 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:18:10.168628 kubelet[2659]: I0621 02:18:10.168354 2659 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 02:18:10.174231 kubelet[2659]: I0621 02:18:10.174058 2659 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 02:18:10.174231 kubelet[2659]: I0621 02:18:10.174088 2659 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 02:18:10.174521 kubelet[2659]: I0621 02:18:10.174503 2659 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 02:18:10.175913 kubelet[2659]: I0621 02:18:10.175894 2659 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 21 02:18:10.178442 kubelet[2659]: I0621 02:18:10.178402 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 02:18:10.181944 kubelet[2659]: I0621 02:18:10.181915 2659 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 02:18:10.185029 kubelet[2659]: I0621 02:18:10.184776 2659 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 02:18:10.185029 kubelet[2659]: I0621 02:18:10.184960 2659 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 02:18:10.185421 kubelet[2659]: I0621 02:18:10.184982 2659 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 02:18:10.185517 kubelet[2659]: I0621 02:18:10.185438 2659 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 02:18:10.185517 kubelet[2659]: I0621 02:18:10.185450 2659 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 02:18:10.185517 kubelet[2659]: I0621 02:18:10.185502 2659 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:18:10.186031 kubelet[2659]: I0621 02:18:10.186014 2659 kubelet.go:446] "Attempting to sync node with API server" Jun 21 02:18:10.186074 kubelet[2659]: I0621 02:18:10.186036 2659 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 02:18:10.186074 kubelet[2659]: I0621 02:18:10.186060 2659 kubelet.go:352] "Adding apiserver pod source" Jun 21 02:18:10.186074 kubelet[2659]: I0621 02:18:10.186071 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 02:18:10.187044 kubelet[2659]: I0621 02:18:10.186990 2659 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 02:18:10.187481 kubelet[2659]: I0621 02:18:10.187466 2659 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 02:18:10.187936 kubelet[2659]: I0621 02:18:10.187871 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 02:18:10.187936 kubelet[2659]: I0621 02:18:10.187906 2659 server.go:1287] "Started kubelet" Jun 21 02:18:10.188297 kubelet[2659]: I0621 02:18:10.188252 2659 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 02:18:10.189072 kubelet[2659]: I0621 02:18:10.189034 
2659 server.go:479] "Adding debug handlers to kubelet server" Jun 21 02:18:10.189697 kubelet[2659]: I0621 02:18:10.189638 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 02:18:10.190309 kubelet[2659]: I0621 02:18:10.190201 2659 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 02:18:10.195226 kubelet[2659]: I0621 02:18:10.194746 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 02:18:10.195226 kubelet[2659]: I0621 02:18:10.194788 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 02:18:10.195375 kubelet[2659]: I0621 02:18:10.195363 2659 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 02:18:10.196963 kubelet[2659]: I0621 02:18:10.196903 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 02:18:10.197195 kubelet[2659]: E0621 02:18:10.197168 2659 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 02:18:10.197531 kubelet[2659]: I0621 02:18:10.197173 2659 reconciler.go:26] "Reconciler: start to sync state" Jun 21 02:18:10.199367 kubelet[2659]: E0621 02:18:10.199325 2659 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 02:18:10.209370 kubelet[2659]: I0621 02:18:10.209332 2659 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 02:18:10.214539 kubelet[2659]: I0621 02:18:10.214510 2659 factory.go:221] Registration of the containerd container factory successfully Jun 21 02:18:10.215519 kubelet[2659]: I0621 02:18:10.215496 2659 factory.go:221] Registration of the systemd container factory successfully Jun 21 02:18:10.219477 kubelet[2659]: I0621 02:18:10.219417 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 02:18:10.224629 kubelet[2659]: I0621 02:18:10.224553 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 02:18:10.225223 kubelet[2659]: I0621 02:18:10.225183 2659 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 02:18:10.225289 kubelet[2659]: I0621 02:18:10.225237 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 02:18:10.225289 kubelet[2659]: I0621 02:18:10.225246 2659 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 02:18:10.225334 kubelet[2659]: E0621 02:18:10.225295 2659 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 02:18:10.250590 kubelet[2659]: I0621 02:18:10.250546 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 02:18:10.250590 kubelet[2659]: I0621 02:18:10.250583 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 02:18:10.250738 kubelet[2659]: I0621 02:18:10.250604 2659 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:18:10.250873 kubelet[2659]: I0621 02:18:10.250854 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 02:18:10.250904 kubelet[2659]: I0621 02:18:10.250875 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 02:18:10.250904 kubelet[2659]: I0621 02:18:10.250896 2659 policy_none.go:49] "None policy: Start" Jun 21 02:18:10.250904 kubelet[2659]: I0621 02:18:10.250904 2659 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 02:18:10.250980 kubelet[2659]: I0621 02:18:10.250914 2659 state_mem.go:35] "Initializing new in-memory state store" Jun 21 02:18:10.251033 kubelet[2659]: I0621 02:18:10.251015 2659 state_mem.go:75] "Updated machine memory state" Jun 21 02:18:10.255569 kubelet[2659]: I0621 02:18:10.255546 2659 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 02:18:10.256074 kubelet[2659]: I0621 02:18:10.256055 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:18:10.256145 kubelet[2659]: I0621 02:18:10.256073 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:18:10.256620 kubelet[2659]: I0621 02:18:10.256387 2659 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:18:10.258955 kubelet[2659]: E0621 02:18:10.258437 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 02:18:10.326792 kubelet[2659]: I0621 02:18:10.326757 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.327190 kubelet[2659]: I0621 02:18:10.327059 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:10.327348 kubelet[2659]: I0621 02:18:10.327058 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 02:18:10.332669 kubelet[2659]: E0621 02:18:10.332623 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.360195 kubelet[2659]: I0621 02:18:10.360170 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 02:18:10.366171 kubelet[2659]: I0621 02:18:10.366137 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 21 02:18:10.366333 kubelet[2659]: I0621 02:18:10.366266 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 02:18:10.397888 kubelet[2659]: I0621 02:18:10.397835 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:10.397888 kubelet[2659]: I0621 02:18:10.397884 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.398036 kubelet[2659]: I0621 02:18:10.397906 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.398036 kubelet[2659]: I0621 02:18:10.397925 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.398036 kubelet[2659]: I0621 02:18:10.397941 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:10.398036 kubelet[2659]: I0621 02:18:10.397955 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/856677ffb957ad85ee1026e14b97d83d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"856677ffb957ad85ee1026e14b97d83d\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:10.398036 kubelet[2659]: I0621 02:18:10.397969 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.398136 kubelet[2659]: I0621 02:18:10.397984 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:18:10.398136 kubelet[2659]: I0621 02:18:10.398000 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:18:10.633105 kubelet[2659]: E0621 02:18:10.632987 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:10.633503 kubelet[2659]: E0621 02:18:10.633461 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:10.633975 kubelet[2659]: E0621 02:18:10.633637 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:11.187281 kubelet[2659]: I0621 02:18:11.187237 2659 apiserver.go:52] "Watching apiserver" Jun 21 02:18:11.197152 kubelet[2659]: I0621 02:18:11.197118 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of 
world" Jun 21 02:18:11.236478 kubelet[2659]: I0621 02:18:11.236376 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:11.236815 kubelet[2659]: E0621 02:18:11.236514 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:11.237063 kubelet[2659]: E0621 02:18:11.237040 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:11.246437 kubelet[2659]: E0621 02:18:11.246403 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 21 02:18:11.247270 kubelet[2659]: E0621 02:18:11.246577 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:11.287592 kubelet[2659]: I0621 02:18:11.287354 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.287318197 podStartE2EDuration="3.287318197s" podCreationTimestamp="2025-06-21 02:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:11.286036048 +0000 UTC m=+1.155212967" watchObservedRunningTime="2025-06-21 02:18:11.287318197 +0000 UTC m=+1.156495076" Jun 21 02:18:11.305322 kubelet[2659]: I0621 02:18:11.305265 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.305246415 podStartE2EDuration="1.305246415s" podCreationTimestamp="2025-06-21 02:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:11.296242404 +0000 UTC m=+1.165419323" watchObservedRunningTime="2025-06-21 02:18:11.305246415 +0000 UTC m=+1.174423334" Jun 21 02:18:11.315398 kubelet[2659]: I0621 02:18:11.315297 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.315279162 podStartE2EDuration="1.315279162s" podCreationTimestamp="2025-06-21 02:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:11.305760883 +0000 UTC m=+1.174937802" watchObservedRunningTime="2025-06-21 02:18:11.315279162 +0000 UTC m=+1.184456081" Jun 21 02:18:12.251995 kubelet[2659]: E0621 02:18:12.247948 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:12.251995 kubelet[2659]: E0621 02:18:12.248504 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:13.249429 kubelet[2659]: E0621 02:18:13.249387 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:16.407761 kubelet[2659]: E0621 02:18:16.407728 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:16.518035 kubelet[2659]: I0621 02:18:16.518009 2659 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 02:18:16.518562 containerd[1518]: time="2025-06-21T02:18:16.518527575Z" level=info msg="No cni 
config template is specified, wait for other system components to drop the config." Jun 21 02:18:16.519012 kubelet[2659]: I0621 02:18:16.518991 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 02:18:17.141917 kubelet[2659]: I0621 02:18:17.141875 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20092b10-28ca-4f9f-863c-00fb9729772d-xtables-lock\") pod \"kube-proxy-r5g8r\" (UID: \"20092b10-28ca-4f9f-863c-00fb9729772d\") " pod="kube-system/kube-proxy-r5g8r" Jun 21 02:18:17.141917 kubelet[2659]: I0621 02:18:17.141914 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20092b10-28ca-4f9f-863c-00fb9729772d-kube-proxy\") pod \"kube-proxy-r5g8r\" (UID: \"20092b10-28ca-4f9f-863c-00fb9729772d\") " pod="kube-system/kube-proxy-r5g8r" Jun 21 02:18:17.142353 kubelet[2659]: I0621 02:18:17.141933 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20092b10-28ca-4f9f-863c-00fb9729772d-lib-modules\") pod \"kube-proxy-r5g8r\" (UID: \"20092b10-28ca-4f9f-863c-00fb9729772d\") " pod="kube-system/kube-proxy-r5g8r" Jun 21 02:18:17.142353 kubelet[2659]: I0621 02:18:17.142077 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gx9d\" (UniqueName: \"kubernetes.io/projected/20092b10-28ca-4f9f-863c-00fb9729772d-kube-api-access-4gx9d\") pod \"kube-proxy-r5g8r\" (UID: \"20092b10-28ca-4f9f-863c-00fb9729772d\") " pod="kube-system/kube-proxy-r5g8r" Jun 21 02:18:17.142616 systemd[1]: Created slice kubepods-besteffort-pod20092b10_28ca_4f9f_863c_00fb9729772d.slice - libcontainer container kubepods-besteffort-pod20092b10_28ca_4f9f_863c_00fb9729772d.slice. 
Jun 21 02:18:17.257015 kubelet[2659]: E0621 02:18:17.256537 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:17.455283 kubelet[2659]: E0621 02:18:17.455163 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:17.457476 containerd[1518]: time="2025-06-21T02:18:17.457404647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5g8r,Uid:20092b10-28ca-4f9f-863c-00fb9729772d,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:17.475596 containerd[1518]: time="2025-06-21T02:18:17.475289318Z" level=info msg="connecting to shim 335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb" address="unix:///run/containerd/s/b801e344813683608558b6c363f1dea7195453647ac4a09f49fa91351975c7f9" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:17.505372 systemd[1]: Started cri-containerd-335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb.scope - libcontainer container 335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb. 
Jun 21 02:18:17.552018 kubelet[2659]: I0621 02:18:17.551899 2659 status_manager.go:890] "Failed to get status for pod" podUID="239fbf90-ed91-49b6-9465-32948ce01e78" pod="tigera-operator/tigera-operator-68f7c7984d-p7xhc" err="pods \"tigera-operator-68f7c7984d-p7xhc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Jun 21 02:18:17.552805 containerd[1518]: time="2025-06-21T02:18:17.552763356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5g8r,Uid:20092b10-28ca-4f9f-863c-00fb9729772d,Namespace:kube-system,Attempt:0,} returns sandbox id \"335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb\"" Jun 21 02:18:17.555105 kubelet[2659]: E0621 02:18:17.554804 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:17.559433 containerd[1518]: time="2025-06-21T02:18:17.559394580Z" level=info msg="CreateContainer within sandbox \"335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 02:18:17.561108 systemd[1]: Created slice kubepods-besteffort-pod239fbf90_ed91_49b6_9465_32948ce01e78.slice - libcontainer container kubepods-besteffort-pod239fbf90_ed91_49b6_9465_32948ce01e78.slice. 
Jun 21 02:18:17.574311 containerd[1518]: time="2025-06-21T02:18:17.574273131Z" level=info msg="Container 6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:17.584292 containerd[1518]: time="2025-06-21T02:18:17.584053080Z" level=info msg="CreateContainer within sandbox \"335657170200e32f0c4d3725feb3466845c50f62dc8e073509bd55bf18a4cdfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca\"" Jun 21 02:18:17.584621 containerd[1518]: time="2025-06-21T02:18:17.584595701Z" level=info msg="StartContainer for \"6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca\"" Jun 21 02:18:17.586137 containerd[1518]: time="2025-06-21T02:18:17.586104681Z" level=info msg="connecting to shim 6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca" address="unix:///run/containerd/s/b801e344813683608558b6c363f1dea7195453647ac4a09f49fa91351975c7f9" protocol=ttrpc version=3 Jun 21 02:18:17.606452 systemd[1]: Started cri-containerd-6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca.scope - libcontainer container 6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca. 
Jun 21 02:18:17.641925 containerd[1518]: time="2025-06-21T02:18:17.640504003Z" level=info msg="StartContainer for \"6234ffc3ec5e93b4a08c1ef766384a1588f857e509b46ab387d666ace99565ca\" returns successfully" Jun 21 02:18:17.644636 kubelet[2659]: I0621 02:18:17.644559 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blqp\" (UniqueName: \"kubernetes.io/projected/239fbf90-ed91-49b6-9465-32948ce01e78-kube-api-access-9blqp\") pod \"tigera-operator-68f7c7984d-p7xhc\" (UID: \"239fbf90-ed91-49b6-9465-32948ce01e78\") " pod="tigera-operator/tigera-operator-68f7c7984d-p7xhc" Jun 21 02:18:17.644636 kubelet[2659]: I0621 02:18:17.644597 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/239fbf90-ed91-49b6-9465-32948ce01e78-var-lib-calico\") pod \"tigera-operator-68f7c7984d-p7xhc\" (UID: \"239fbf90-ed91-49b6-9465-32948ce01e78\") " pod="tigera-operator/tigera-operator-68f7c7984d-p7xhc" Jun 21 02:18:17.868258 containerd[1518]: time="2025-06-21T02:18:17.868016323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-p7xhc,Uid:239fbf90-ed91-49b6-9465-32948ce01e78,Namespace:tigera-operator,Attempt:0,}" Jun 21 02:18:17.888509 containerd[1518]: time="2025-06-21T02:18:17.888462935Z" level=info msg="connecting to shim 264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1" address="unix:///run/containerd/s/1d53e569be24f0c4d071e5a0d26a53af04ae6ed2d576039ca017666b4f370216" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:17.911382 systemd[1]: Started cri-containerd-264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1.scope - libcontainer container 264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1. 
Jun 21 02:18:17.945309 containerd[1518]: time="2025-06-21T02:18:17.945264352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-p7xhc,Uid:239fbf90-ed91-49b6-9465-32948ce01e78,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1\"" Jun 21 02:18:17.947369 containerd[1518]: time="2025-06-21T02:18:17.947311834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 02:18:18.262925 kubelet[2659]: E0621 02:18:18.262804 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:18.274106 kubelet[2659]: I0621 02:18:18.274000 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5g8r" podStartSLOduration=1.273981641 podStartE2EDuration="1.273981641s" podCreationTimestamp="2025-06-21 02:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:18.273622187 +0000 UTC m=+8.142799106" watchObservedRunningTime="2025-06-21 02:18:18.273981641 +0000 UTC m=+8.143158560" Jun 21 02:18:18.291315 kubelet[2659]: E0621 02:18:18.291269 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:18.474193 update_engine[1497]: I20250621 02:18:18.473796 1497 update_attempter.cc:509] Updating boot flags... 
Jun 21 02:18:19.265316 kubelet[2659]: E0621 02:18:19.265284 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:19.267121 kubelet[2659]: E0621 02:18:19.265762 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:19.279363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583138313.mount: Deactivated successfully. Jun 21 02:18:19.571401 containerd[1518]: time="2025-06-21T02:18:19.571269050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:19.572263 containerd[1518]: time="2025-06-21T02:18:19.572192443Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 21 02:18:19.573450 containerd[1518]: time="2025-06-21T02:18:19.573415007Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:19.575627 containerd[1518]: time="2025-06-21T02:18:19.575593485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:19.576311 containerd[1518]: time="2025-06-21T02:18:19.576276470Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 1.628929955s" Jun 21 02:18:19.576365 
containerd[1518]: time="2025-06-21T02:18:19.576311911Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 21 02:18:19.579799 containerd[1518]: time="2025-06-21T02:18:19.579766316Z" level=info msg="CreateContainer within sandbox \"264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 02:18:19.589472 containerd[1518]: time="2025-06-21T02:18:19.589417582Z" level=info msg="Container 90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:19.591168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255722609.mount: Deactivated successfully. Jun 21 02:18:19.639604 containerd[1518]: time="2025-06-21T02:18:19.639487902Z" level=info msg="CreateContainer within sandbox \"264f8057e21b8c3ae588381dc760ebb7fe54cf742da557ce15fdc84ffc7886f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b\"" Jun 21 02:18:19.640246 containerd[1518]: time="2025-06-21T02:18:19.640191168Z" level=info msg="StartContainer for \"90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b\"" Jun 21 02:18:19.643784 containerd[1518]: time="2025-06-21T02:18:19.643686773Z" level=info msg="connecting to shim 90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b" address="unix:///run/containerd/s/1d53e569be24f0c4d071e5a0d26a53af04ae6ed2d576039ca017666b4f370216" protocol=ttrpc version=3 Jun 21 02:18:19.694509 systemd[1]: Started cri-containerd-90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b.scope - libcontainer container 90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b. 
Jun 21 02:18:19.729266 containerd[1518]: time="2025-06-21T02:18:19.729228169Z" level=info msg="StartContainer for \"90209ad9ad8a53f8aada555c9756e360318385da668da59aa9df932205ff287b\" returns successfully" Jun 21 02:18:20.290863 kubelet[2659]: I0621 02:18:20.290752 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-p7xhc" podStartSLOduration=1.659224487 podStartE2EDuration="3.290731057s" podCreationTimestamp="2025-06-21 02:18:17 +0000 UTC" firstStartedPulling="2025-06-21 02:18:17.946580285 +0000 UTC m=+7.815757164" lastFinishedPulling="2025-06-21 02:18:19.578086855 +0000 UTC m=+9.447263734" observedRunningTime="2025-06-21 02:18:20.290432167 +0000 UTC m=+10.159609086" watchObservedRunningTime="2025-06-21 02:18:20.290731057 +0000 UTC m=+10.159908016" Jun 21 02:18:22.211556 kubelet[2659]: E0621 02:18:22.211037 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:22.279994 kubelet[2659]: E0621 02:18:22.279912 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:25.231863 sudo[1730]: pam_unix(sudo:session): session closed for user root Jun 21 02:18:25.238905 sshd[1729]: Connection closed by 10.0.0.1 port 46092 Jun 21 02:18:25.239416 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jun 21 02:18:25.243908 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:46092.service: Deactivated successfully. Jun 21 02:18:25.245681 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 02:18:25.245850 systemd[1]: session-9.scope: Consumed 7.537s CPU time, 230.2M memory peak. Jun 21 02:18:25.247902 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. 
Jun 21 02:18:25.251504 systemd-logind[1493]: Removed session 9. Jun 21 02:18:30.618351 systemd[1]: Created slice kubepods-besteffort-pod8d7e5e1e_3981_45f1_8905_70635c4632ba.slice - libcontainer container kubepods-besteffort-pod8d7e5e1e_3981_45f1_8905_70635c4632ba.slice. Jun 21 02:18:30.629223 kubelet[2659]: I0621 02:18:30.629158 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvnz\" (UniqueName: \"kubernetes.io/projected/8d7e5e1e-3981-45f1-8905-70635c4632ba-kube-api-access-mvvnz\") pod \"calico-typha-cbd69ff67-mk8m9\" (UID: \"8d7e5e1e-3981-45f1-8905-70635c4632ba\") " pod="calico-system/calico-typha-cbd69ff67-mk8m9" Jun 21 02:18:30.629692 kubelet[2659]: I0621 02:18:30.629592 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d7e5e1e-3981-45f1-8905-70635c4632ba-tigera-ca-bundle\") pod \"calico-typha-cbd69ff67-mk8m9\" (UID: \"8d7e5e1e-3981-45f1-8905-70635c4632ba\") " pod="calico-system/calico-typha-cbd69ff67-mk8m9" Jun 21 02:18:30.629692 kubelet[2659]: I0621 02:18:30.629641 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8d7e5e1e-3981-45f1-8905-70635c4632ba-typha-certs\") pod \"calico-typha-cbd69ff67-mk8m9\" (UID: \"8d7e5e1e-3981-45f1-8905-70635c4632ba\") " pod="calico-system/calico-typha-cbd69ff67-mk8m9" Jun 21 02:18:30.828751 systemd[1]: Created slice kubepods-besteffort-pod68c53c20_bc0d_4239_b308_2cac0fa88253.slice - libcontainer container kubepods-besteffort-pod68c53c20_bc0d_4239_b308_2cac0fa88253.slice. 
Jun 21 02:18:30.923622 kubelet[2659]: E0621 02:18:30.923580 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:30.925219 containerd[1518]: time="2025-06-21T02:18:30.925102356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cbd69ff67-mk8m9,Uid:8d7e5e1e-3981-45f1-8905-70635c4632ba,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:30.931250 kubelet[2659]: I0621 02:18:30.931182 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68c53c20-bc0d-4239-b308-2cac0fa88253-node-certs\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931250 kubelet[2659]: I0621 02:18:30.931238 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-cni-log-dir\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931369 kubelet[2659]: I0621 02:18:30.931281 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-lib-modules\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931369 kubelet[2659]: I0621 02:18:30.931338 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-policysync\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" 
Jun 21 02:18:30.931369 kubelet[2659]: I0621 02:18:30.931362 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68c53c20-bc0d-4239-b308-2cac0fa88253-tigera-ca-bundle\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931436 kubelet[2659]: I0621 02:18:30.931377 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-var-lib-calico\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931436 kubelet[2659]: I0621 02:18:30.931421 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-xtables-lock\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931497 kubelet[2659]: I0621 02:18:30.931471 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-flexvol-driver-host\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931590 kubelet[2659]: I0621 02:18:30.931537 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-cni-net-dir\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931624 kubelet[2659]: I0621 02:18:30.931600 
2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-var-run-calico\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931624 kubelet[2659]: I0621 02:18:30.931620 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcjf\" (UniqueName: \"kubernetes.io/projected/68c53c20-bc0d-4239-b308-2cac0fa88253-kube-api-access-plcjf\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.931690 kubelet[2659]: I0621 02:18:30.931680 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68c53c20-bc0d-4239-b308-2cac0fa88253-cni-bin-dir\") pod \"calico-node-c6647\" (UID: \"68c53c20-bc0d-4239-b308-2cac0fa88253\") " pod="calico-system/calico-node-c6647" Jun 21 02:18:30.964778 containerd[1518]: time="2025-06-21T02:18:30.964732027Z" level=info msg="connecting to shim dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342" address="unix:///run/containerd/s/00e18912e510d0aa1df7363c27a0ca2b448910df76f7ba7ceb869e7acdbd6b13" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:31.027000 kubelet[2659]: E0621 02:18:31.026926 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2n26r" podUID="9eb38543-6254-4688-8d4b-892c4068ec20" Jun 21 02:18:31.033629 systemd[1]: Started cri-containerd-dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342.scope - libcontainer container 
dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342. Jun 21 02:18:31.037398 kubelet[2659]: E0621 02:18:31.037368 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.037500 kubelet[2659]: W0621 02:18:31.037406 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.040326 kubelet[2659]: E0621 02:18:31.040280 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.042157 kubelet[2659]: E0621 02:18:31.042126 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.042157 kubelet[2659]: W0621 02:18:31.042145 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.042157 kubelet[2659]: E0621 02:18:31.042163 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.046452 kubelet[2659]: E0621 02:18:31.046415 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.046452 kubelet[2659]: W0621 02:18:31.046437 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.046452 kubelet[2659]: E0621 02:18:31.046454 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.049117 kubelet[2659]: E0621 02:18:31.048606 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.049117 kubelet[2659]: W0621 02:18:31.048625 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.049117 kubelet[2659]: E0621 02:18:31.048640 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.052234 kubelet[2659]: E0621 02:18:31.052194 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.052234 kubelet[2659]: W0621 02:18:31.052226 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.052234 kubelet[2659]: E0621 02:18:31.052242 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.058140 kubelet[2659]: E0621 02:18:31.057922 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.058669 kubelet[2659]: W0621 02:18:31.058349 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.058669 kubelet[2659]: E0621 02:18:31.058468 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.114365 kubelet[2659]: E0621 02:18:31.114329 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.114609 kubelet[2659]: W0621 02:18:31.114495 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.114609 kubelet[2659]: E0621 02:18:31.114524 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.114759 kubelet[2659]: E0621 02:18:31.114744 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.114857 kubelet[2659]: W0621 02:18:31.114813 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.114918 kubelet[2659]: E0621 02:18:31.114906 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.115137 kubelet[2659]: E0621 02:18:31.115123 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.115469 kubelet[2659]: W0621 02:18:31.115187 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.115662 kubelet[2659]: E0621 02:18:31.115549 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.115957 kubelet[2659]: E0621 02:18:31.115942 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.116185 kubelet[2659]: W0621 02:18:31.115995 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.116185 kubelet[2659]: E0621 02:18:31.116011 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.117937 kubelet[2659]: E0621 02:18:31.117685 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.117937 kubelet[2659]: W0621 02:18:31.117817 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.117937 kubelet[2659]: E0621 02:18:31.117838 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.118221 kubelet[2659]: E0621 02:18:31.118127 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.118221 kubelet[2659]: W0621 02:18:31.118166 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.118221 kubelet[2659]: E0621 02:18:31.118182 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.118590 kubelet[2659]: E0621 02:18:31.118576 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.118789 kubelet[2659]: W0621 02:18:31.118634 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.118789 kubelet[2659]: E0621 02:18:31.118649 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.118936 kubelet[2659]: E0621 02:18:31.118901 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.118936 kubelet[2659]: W0621 02:18:31.118914 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.119107 kubelet[2659]: E0621 02:18:31.119023 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.119296 kubelet[2659]: E0621 02:18:31.119275 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.119390 kubelet[2659]: W0621 02:18:31.119375 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.119462 kubelet[2659]: E0621 02:18:31.119450 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.119713 kubelet[2659]: E0621 02:18:31.119698 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.119808 kubelet[2659]: W0621 02:18:31.119794 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.119945 kubelet[2659]: E0621 02:18:31.119856 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.120064 kubelet[2659]: E0621 02:18:31.120050 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.120172 kubelet[2659]: W0621 02:18:31.120128 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.120172 kubelet[2659]: E0621 02:18:31.120144 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.120439 kubelet[2659]: E0621 02:18:31.120403 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.120439 kubelet[2659]: W0621 02:18:31.120416 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.120631 kubelet[2659]: E0621 02:18:31.120528 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.120784 kubelet[2659]: E0621 02:18:31.120745 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.120784 kubelet[2659]: W0621 02:18:31.120757 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.120950 kubelet[2659]: E0621 02:18:31.120767 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.121087 kubelet[2659]: E0621 02:18:31.121046 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.121087 kubelet[2659]: W0621 02:18:31.121057 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.121087 kubelet[2659]: E0621 02:18:31.121067 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.121480 kubelet[2659]: E0621 02:18:31.121360 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.121480 kubelet[2659]: W0621 02:18:31.121374 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.121480 kubelet[2659]: E0621 02:18:31.121390 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.121679 kubelet[2659]: E0621 02:18:31.121643 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.121679 kubelet[2659]: W0621 02:18:31.121658 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.121864 kubelet[2659]: E0621 02:18:31.121760 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.122021 kubelet[2659]: E0621 02:18:31.121967 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.122021 kubelet[2659]: W0621 02:18:31.121983 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.122190 kubelet[2659]: E0621 02:18:31.122100 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.123235 kubelet[2659]: E0621 02:18:31.122576 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.123329 kubelet[2659]: W0621 02:18:31.123269 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.123329 kubelet[2659]: E0621 02:18:31.123293 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.123561 kubelet[2659]: E0621 02:18:31.123525 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.123561 kubelet[2659]: W0621 02:18:31.123541 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.123561 kubelet[2659]: E0621 02:18:31.123552 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.123709 kubelet[2659]: E0621 02:18:31.123695 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.123709 kubelet[2659]: W0621 02:18:31.123706 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.123763 kubelet[2659]: E0621 02:18:31.123715 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.132920 containerd[1518]: time="2025-06-21T02:18:31.132835052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c6647,Uid:68c53c20-bc0d-4239-b308-2cac0fa88253,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:31.133347 kubelet[2659]: E0621 02:18:31.133055 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.133347 kubelet[2659]: W0621 02:18:31.133070 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.133347 kubelet[2659]: E0621 02:18:31.133085 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.133347 kubelet[2659]: I0621 02:18:31.133112 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9eb38543-6254-4688-8d4b-892c4068ec20-registration-dir\") pod \"csi-node-driver-2n26r\" (UID: \"9eb38543-6254-4688-8d4b-892c4068ec20\") " pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:31.133758 kubelet[2659]: E0621 02:18:31.133721 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.133943 kubelet[2659]: W0621 02:18:31.133919 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.134135 kubelet[2659]: E0621 02:18:31.134110 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.134255 kubelet[2659]: I0621 02:18:31.134236 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9eb38543-6254-4688-8d4b-892c4068ec20-socket-dir\") pod \"csi-node-driver-2n26r\" (UID: \"9eb38543-6254-4688-8d4b-892c4068ec20\") " pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:31.135496 kubelet[2659]: E0621 02:18:31.135460 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.135496 kubelet[2659]: W0621 02:18:31.135482 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.135700 kubelet[2659]: E0621 02:18:31.135511 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.136725 kubelet[2659]: E0621 02:18:31.136577 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.136725 kubelet[2659]: W0621 02:18:31.136594 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.136725 kubelet[2659]: E0621 02:18:31.136611 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.137172 kubelet[2659]: E0621 02:18:31.137001 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.137359 kubelet[2659]: W0621 02:18:31.137284 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.137844 kubelet[2659]: E0621 02:18:31.137545 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.137844 kubelet[2659]: I0621 02:18:31.137680 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eb38543-6254-4688-8d4b-892c4068ec20-kubelet-dir\") pod \"csi-node-driver-2n26r\" (UID: \"9eb38543-6254-4688-8d4b-892c4068ec20\") " pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:31.138172 kubelet[2659]: E0621 02:18:31.138073 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.138172 kubelet[2659]: W0621 02:18:31.138089 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.138172 kubelet[2659]: E0621 02:18:31.138101 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.138454 kubelet[2659]: E0621 02:18:31.138325 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.138454 kubelet[2659]: W0621 02:18:31.138344 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.138454 kubelet[2659]: E0621 02:18:31.138426 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.138651 kubelet[2659]: E0621 02:18:31.138614 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.138651 kubelet[2659]: W0621 02:18:31.138628 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.138994 kubelet[2659]: E0621 02:18:31.138657 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.138994 kubelet[2659]: I0621 02:18:31.138679 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4sx\" (UniqueName: \"kubernetes.io/projected/9eb38543-6254-4688-8d4b-892c4068ec20-kube-api-access-6p4sx\") pod \"csi-node-driver-2n26r\" (UID: \"9eb38543-6254-4688-8d4b-892c4068ec20\") " pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:31.139282 kubelet[2659]: E0621 02:18:31.139263 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.139644 kubelet[2659]: W0621 02:18:31.139517 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.139644 kubelet[2659]: E0621 02:18:31.139548 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.140415 kubelet[2659]: E0621 02:18:31.140383 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.140615 kubelet[2659]: W0621 02:18:31.140494 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.140615 kubelet[2659]: E0621 02:18:31.140524 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.140813 kubelet[2659]: E0621 02:18:31.140772 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.140879 kubelet[2659]: W0621 02:18:31.140865 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.140943 kubelet[2659]: E0621 02:18:31.140931 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.141032 kubelet[2659]: I0621 02:18:31.141019 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9eb38543-6254-4688-8d4b-892c4068ec20-varrun\") pod \"csi-node-driver-2n26r\" (UID: \"9eb38543-6254-4688-8d4b-892c4068ec20\") " pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:31.141599 kubelet[2659]: E0621 02:18:31.141352 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.141723 kubelet[2659]: W0621 02:18:31.141669 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.141967 kubelet[2659]: E0621 02:18:31.141846 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.142277 kubelet[2659]: E0621 02:18:31.142111 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.142277 kubelet[2659]: W0621 02:18:31.142124 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.142277 kubelet[2659]: E0621 02:18:31.142143 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.143085 kubelet[2659]: E0621 02:18:31.143002 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.143677 kubelet[2659]: W0621 02:18:31.143184 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.143677 kubelet[2659]: E0621 02:18:31.143234 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.144746 kubelet[2659]: E0621 02:18:31.144643 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.145127 kubelet[2659]: W0621 02:18:31.144989 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.145127 kubelet[2659]: E0621 02:18:31.145017 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.152103 containerd[1518]: time="2025-06-21T02:18:31.152068458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cbd69ff67-mk8m9,Uid:8d7e5e1e-3981-45f1-8905-70635c4632ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342\"" Jun 21 02:18:31.153017 kubelet[2659]: E0621 02:18:31.152905 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:31.159989 containerd[1518]: time="2025-06-21T02:18:31.159951705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 02:18:31.171146 containerd[1518]: time="2025-06-21T02:18:31.171069140Z" level=info msg="connecting to shim 06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342" address="unix:///run/containerd/s/ef569fc67a59fff5ef387c7e826693625c0b7fe7acb68388bc55271593da362f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:31.200399 systemd[1]: Started cri-containerd-06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342.scope - libcontainer container 06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342. 
Jun 21 02:18:31.242242 kubelet[2659]: E0621 02:18:31.241979 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.242242 kubelet[2659]: W0621 02:18:31.242006 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.242242 kubelet[2659]: E0621 02:18:31.242027 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.242641 kubelet[2659]: E0621 02:18:31.242623 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.242641 kubelet[2659]: W0621 02:18:31.242639 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.242699 kubelet[2659]: E0621 02:18:31.242655 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.242923 kubelet[2659]: E0621 02:18:31.242865 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.242923 kubelet[2659]: W0621 02:18:31.242878 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.242923 kubelet[2659]: E0621 02:18:31.242893 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.243169 kubelet[2659]: E0621 02:18:31.243147 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.243169 kubelet[2659]: W0621 02:18:31.243166 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.243374 kubelet[2659]: E0621 02:18:31.243199 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.243474 kubelet[2659]: E0621 02:18:31.243457 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.243539 kubelet[2659]: W0621 02:18:31.243526 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.243608 kubelet[2659]: E0621 02:18:31.243595 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.243775 kubelet[2659]: E0621 02:18:31.243754 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.243775 kubelet[2659]: W0621 02:18:31.243769 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.243836 kubelet[2659]: E0621 02:18:31.243787 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.243960 kubelet[2659]: E0621 02:18:31.243949 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.243960 kubelet[2659]: W0621 02:18:31.243959 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244019 kubelet[2659]: E0621 02:18:31.243972 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.244147 kubelet[2659]: E0621 02:18:31.244135 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244147 kubelet[2659]: W0621 02:18:31.244146 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244262 kubelet[2659]: E0621 02:18:31.244177 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.244298 kubelet[2659]: E0621 02:18:31.244282 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244298 kubelet[2659]: W0621 02:18:31.244293 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244420 kubelet[2659]: E0621 02:18:31.244320 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.244486 kubelet[2659]: E0621 02:18:31.244423 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244486 kubelet[2659]: W0621 02:18:31.244431 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244486 kubelet[2659]: E0621 02:18:31.244461 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.244550 kubelet[2659]: E0621 02:18:31.244545 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244599 kubelet[2659]: W0621 02:18:31.244552 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244599 kubelet[2659]: E0621 02:18:31.244565 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.244690 kubelet[2659]: E0621 02:18:31.244678 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244690 kubelet[2659]: W0621 02:18:31.244688 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.244740 kubelet[2659]: E0621 02:18:31.244699 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.244888 kubelet[2659]: E0621 02:18:31.244873 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.244888 kubelet[2659]: W0621 02:18:31.244886 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.245059 kubelet[2659]: E0621 02:18:31.244902 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.245143 kubelet[2659]: E0621 02:18:31.245127 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.245226 kubelet[2659]: W0621 02:18:31.245200 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.245295 kubelet[2659]: E0621 02:18:31.245284 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.245513 kubelet[2659]: E0621 02:18:31.245498 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.245513 kubelet[2659]: W0621 02:18:31.245511 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.245576 kubelet[2659]: E0621 02:18:31.245529 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.245719 kubelet[2659]: E0621 02:18:31.245706 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.245719 kubelet[2659]: W0621 02:18:31.245716 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.245770 kubelet[2659]: E0621 02:18:31.245742 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.245864 kubelet[2659]: E0621 02:18:31.245852 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.245864 kubelet[2659]: W0621 02:18:31.245863 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.245986 kubelet[2659]: E0621 02:18:31.245893 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.246015 kubelet[2659]: E0621 02:18:31.245987 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.246015 kubelet[2659]: W0621 02:18:31.245995 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.246015 kubelet[2659]: E0621 02:18:31.246008 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.246158 kubelet[2659]: E0621 02:18:31.246146 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.246158 kubelet[2659]: W0621 02:18:31.246157 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.246248 kubelet[2659]: E0621 02:18:31.246170 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.246359 kubelet[2659]: E0621 02:18:31.246346 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.246395 kubelet[2659]: W0621 02:18:31.246360 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.246395 kubelet[2659]: E0621 02:18:31.246380 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.246612 kubelet[2659]: E0621 02:18:31.246597 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.246612 kubelet[2659]: W0621 02:18:31.246611 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.246675 kubelet[2659]: E0621 02:18:31.246627 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.247298 kubelet[2659]: E0621 02:18:31.247277 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.247298 kubelet[2659]: W0621 02:18:31.247295 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.247385 kubelet[2659]: E0621 02:18:31.247314 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.247694 kubelet[2659]: E0621 02:18:31.247674 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.247903 kubelet[2659]: W0621 02:18:31.247696 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.247903 kubelet[2659]: E0621 02:18:31.247728 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.248132 kubelet[2659]: E0621 02:18:31.247981 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.248132 kubelet[2659]: W0621 02:18:31.248013 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.248132 kubelet[2659]: E0621 02:18:31.248025 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:31.253082 kubelet[2659]: E0621 02:18:31.253052 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.253082 kubelet[2659]: W0621 02:18:31.253076 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.253190 kubelet[2659]: E0621 02:18:31.253089 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:31.254164 containerd[1518]: time="2025-06-21T02:18:31.254128136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c6647,Uid:68c53c20-bc0d-4239-b308-2cac0fa88253,Namespace:calico-system,Attempt:0,} returns sandbox id \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\"" Jun 21 02:18:31.265488 kubelet[2659]: E0621 02:18:31.265459 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:31.265488 kubelet[2659]: W0621 02:18:31.265480 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:31.265585 kubelet[2659]: E0621 02:18:31.265498 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:32.171457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980894336.mount: Deactivated successfully. 
Jun 21 02:18:33.226201 kubelet[2659]: E0621 02:18:33.226158 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2n26r" podUID="9eb38543-6254-4688-8d4b-892c4068ec20" Jun 21 02:18:33.456516 containerd[1518]: time="2025-06-21T02:18:33.456472845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:33.457062 containerd[1518]: time="2025-06-21T02:18:33.457028096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 21 02:18:33.458229 containerd[1518]: time="2025-06-21T02:18:33.457843512Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:33.460620 containerd[1518]: time="2025-06-21T02:18:33.460583806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:33.461325 containerd[1518]: time="2025-06-21T02:18:33.461294460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 2.301302914s" Jun 21 02:18:33.461325 containerd[1518]: time="2025-06-21T02:18:33.461324941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 21 02:18:33.463633 containerd[1518]: time="2025-06-21T02:18:33.463611225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 02:18:33.477553 containerd[1518]: time="2025-06-21T02:18:33.477452297Z" level=info msg="CreateContainer within sandbox \"dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 02:18:33.501979 containerd[1518]: time="2025-06-21T02:18:33.500953078Z" level=info msg="Container 64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:33.502618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304121555.mount: Deactivated successfully. Jun 21 02:18:33.508839 containerd[1518]: time="2025-06-21T02:18:33.508790911Z" level=info msg="CreateContainer within sandbox \"dca3ffe0afec76123b95a5e21fd52074f31275837b6056f6a11a28b2990d1342\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d\"" Jun 21 02:18:33.509496 containerd[1518]: time="2025-06-21T02:18:33.509469725Z" level=info msg="StartContainer for \"64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d\"" Jun 21 02:18:33.510835 containerd[1518]: time="2025-06-21T02:18:33.510808631Z" level=info msg="connecting to shim 64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d" address="unix:///run/containerd/s/00e18912e510d0aa1df7363c27a0ca2b448910df76f7ba7ceb869e7acdbd6b13" protocol=ttrpc version=3 Jun 21 02:18:33.539409 systemd[1]: Started cri-containerd-64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d.scope - libcontainer container 64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d. 
Jun 21 02:18:33.577009 containerd[1518]: time="2025-06-21T02:18:33.576972288Z" level=info msg="StartContainer for \"64b43c818db0df1d0a569c6f747a50b808abe2bcaecad9595dce1c947e4fe74d\" returns successfully" Jun 21 02:18:34.308303 kubelet[2659]: E0621 02:18:34.308241 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:34.320426 kubelet[2659]: I0621 02:18:34.320336 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cbd69ff67-mk8m9" podStartSLOduration=2.012652434 podStartE2EDuration="4.320316921s" podCreationTimestamp="2025-06-21 02:18:30 +0000 UTC" firstStartedPulling="2025-06-21 02:18:31.154906438 +0000 UTC m=+21.024083357" lastFinishedPulling="2025-06-21 02:18:33.462570925 +0000 UTC m=+23.331747844" observedRunningTime="2025-06-21 02:18:34.319590467 +0000 UTC m=+24.188767466" watchObservedRunningTime="2025-06-21 02:18:34.320316921 +0000 UTC m=+24.189493840" Jun 21 02:18:34.343680 kubelet[2659]: E0621 02:18:34.343641 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.343680 kubelet[2659]: W0621 02:18:34.343668 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.343872 kubelet[2659]: E0621 02:18:34.343752 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.344006 kubelet[2659]: E0621 02:18:34.343983 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.344051 kubelet[2659]: W0621 02:18:34.343997 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.344075 kubelet[2659]: E0621 02:18:34.344053 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.344228 kubelet[2659]: E0621 02:18:34.344200 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.344228 kubelet[2659]: W0621 02:18:34.344222 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.344273 kubelet[2659]: E0621 02:18:34.344230 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.344367 kubelet[2659]: E0621 02:18:34.344355 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.344367 kubelet[2659]: W0621 02:18:34.344366 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.344417 kubelet[2659]: E0621 02:18:34.344376 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.344662 kubelet[2659]: E0621 02:18:34.344647 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.344662 kubelet[2659]: W0621 02:18:34.344660 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.344725 kubelet[2659]: E0621 02:18:34.344672 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.344833 kubelet[2659]: E0621 02:18:34.344819 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.344833 kubelet[2659]: W0621 02:18:34.344830 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.344890 kubelet[2659]: E0621 02:18:34.344839 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.344999 kubelet[2659]: E0621 02:18:34.344987 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.345024 kubelet[2659]: W0621 02:18:34.344999 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.345024 kubelet[2659]: E0621 02:18:34.345006 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.345183 kubelet[2659]: E0621 02:18:34.345170 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.345217 kubelet[2659]: W0621 02:18:34.345183 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.345217 kubelet[2659]: E0621 02:18:34.345192 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.345453 kubelet[2659]: E0621 02:18:34.345440 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.345487 kubelet[2659]: W0621 02:18:34.345453 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.345487 kubelet[2659]: E0621 02:18:34.345464 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.345599 kubelet[2659]: E0621 02:18:34.345588 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.345599 kubelet[2659]: W0621 02:18:34.345598 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.345648 kubelet[2659]: E0621 02:18:34.345606 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.345730 kubelet[2659]: E0621 02:18:34.345719 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.345756 kubelet[2659]: W0621 02:18:34.345731 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.345756 kubelet[2659]: E0621 02:18:34.345739 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.346012 kubelet[2659]: E0621 02:18:34.345998 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.346045 kubelet[2659]: W0621 02:18:34.346012 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.346045 kubelet[2659]: E0621 02:18:34.346022 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.346179 kubelet[2659]: E0621 02:18:34.346167 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.346179 kubelet[2659]: W0621 02:18:34.346177 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.346246 kubelet[2659]: E0621 02:18:34.346186 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.346326 kubelet[2659]: E0621 02:18:34.346314 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.346326 kubelet[2659]: W0621 02:18:34.346325 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.346371 kubelet[2659]: E0621 02:18:34.346333 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.346478 kubelet[2659]: E0621 02:18:34.346467 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.346478 kubelet[2659]: W0621 02:18:34.346477 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.346526 kubelet[2659]: E0621 02:18:34.346484 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.362936 kubelet[2659]: E0621 02:18:34.362913 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.362936 kubelet[2659]: W0621 02:18:34.362933 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.363101 kubelet[2659]: E0621 02:18:34.362948 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.363183 kubelet[2659]: E0621 02:18:34.363171 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.363183 kubelet[2659]: W0621 02:18:34.363182 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.363340 kubelet[2659]: E0621 02:18:34.363196 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.363437 kubelet[2659]: E0621 02:18:34.363423 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.363437 kubelet[2659]: W0621 02:18:34.363435 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.363497 kubelet[2659]: E0621 02:18:34.363451 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.363635 kubelet[2659]: E0621 02:18:34.363624 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.363635 kubelet[2659]: W0621 02:18:34.363635 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.363694 kubelet[2659]: E0621 02:18:34.363649 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.363817 kubelet[2659]: E0621 02:18:34.363805 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.363817 kubelet[2659]: W0621 02:18:34.363816 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.363909 kubelet[2659]: E0621 02:18:34.363829 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.364025 kubelet[2659]: E0621 02:18:34.364012 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364025 kubelet[2659]: W0621 02:18:34.364023 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364086 kubelet[2659]: E0621 02:18:34.364039 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.364240 kubelet[2659]: E0621 02:18:34.364226 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364240 kubelet[2659]: W0621 02:18:34.364240 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364790 kubelet[2659]: E0621 02:18:34.364296 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.364790 kubelet[2659]: E0621 02:18:34.364364 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364790 kubelet[2659]: W0621 02:18:34.364372 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364790 kubelet[2659]: E0621 02:18:34.364408 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.364790 kubelet[2659]: E0621 02:18:34.364530 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364790 kubelet[2659]: W0621 02:18:34.364538 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364790 kubelet[2659]: E0621 02:18:34.364548 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.364934 kubelet[2659]: E0621 02:18:34.364821 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364934 kubelet[2659]: W0621 02:18:34.364829 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364934 kubelet[2659]: E0621 02:18:34.364837 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.364993 kubelet[2659]: E0621 02:18:34.364975 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.364993 kubelet[2659]: W0621 02:18:34.364982 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.364993 kubelet[2659]: E0621 02:18:34.364989 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.365141 kubelet[2659]: E0621 02:18:34.365127 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.365141 kubelet[2659]: W0621 02:18:34.365137 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.365196 kubelet[2659]: E0621 02:18:34.365146 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.365503 kubelet[2659]: E0621 02:18:34.365451 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.365503 kubelet[2659]: W0621 02:18:34.365472 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.365661 kubelet[2659]: E0621 02:18:34.365599 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.365877 kubelet[2659]: E0621 02:18:34.365865 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.365945 kubelet[2659]: W0621 02:18:34.365933 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.366034 kubelet[2659]: E0621 02:18:34.366023 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.366302 kubelet[2659]: E0621 02:18:34.366288 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.366368 kubelet[2659]: W0621 02:18:34.366356 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.366457 kubelet[2659]: E0621 02:18:34.366446 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.366704 kubelet[2659]: E0621 02:18:34.366691 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.366788 kubelet[2659]: W0621 02:18:34.366775 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.366915 kubelet[2659]: E0621 02:18:34.366843 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.367128 kubelet[2659]: E0621 02:18:34.367116 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.367471 kubelet[2659]: W0621 02:18:34.367187 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.367471 kubelet[2659]: E0621 02:18:34.367230 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:18:34.367751 kubelet[2659]: E0621 02:18:34.367737 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:18:34.367819 kubelet[2659]: W0621 02:18:34.367806 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:18:34.367888 kubelet[2659]: E0621 02:18:34.367876 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:18:34.793479 containerd[1518]: time="2025-06-21T02:18:34.793398428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:34.794415 containerd[1518]: time="2025-06-21T02:18:34.794354246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 21 02:18:34.795300 containerd[1518]: time="2025-06-21T02:18:34.795264823Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:34.797167 containerd[1518]: time="2025-06-21T02:18:34.797133538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:34.798158 containerd[1518]: time="2025-06-21T02:18:34.798128557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 1.334388329s" Jun 21 02:18:34.798381 containerd[1518]: time="2025-06-21T02:18:34.798340281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 21 02:18:34.801397 containerd[1518]: time="2025-06-21T02:18:34.801359658Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 02:18:34.827228 containerd[1518]: time="2025-06-21T02:18:34.827066304Z" level=info msg="Container cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:34.836334 containerd[1518]: time="2025-06-21T02:18:34.836301999Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\"" Jun 21 02:18:34.838053 containerd[1518]: time="2025-06-21T02:18:34.837100654Z" level=info msg="StartContainer for \"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\"" Jun 21 02:18:34.838612 containerd[1518]: time="2025-06-21T02:18:34.838587762Z" level=info msg="connecting to shim cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20" address="unix:///run/containerd/s/ef569fc67a59fff5ef387c7e826693625c0b7fe7acb68388bc55271593da362f" protocol=ttrpc version=3 Jun 21 02:18:34.865367 systemd[1]: Started cri-containerd-cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20.scope - libcontainer container cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20. Jun 21 02:18:34.900300 containerd[1518]: time="2025-06-21T02:18:34.900261769Z" level=info msg="StartContainer for \"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\" returns successfully" Jun 21 02:18:34.934306 systemd[1]: cri-containerd-cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20.scope: Deactivated successfully. 
Jun 21 02:18:34.947812 containerd[1518]: time="2025-06-21T02:18:34.947756947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\" id:\"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\" pid:3355 exited_at:{seconds:1750472314 nanos:945992274}" Jun 21 02:18:34.948622 containerd[1518]: time="2025-06-21T02:18:34.948586563Z" level=info msg="received exit event container_id:\"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\" id:\"cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20\" pid:3355 exited_at:{seconds:1750472314 nanos:945992274}" Jun 21 02:18:34.984437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4de7d2f5133bac89c38fc15e4ca0cb5bc5b88dcbb000def2266d23de480e20-rootfs.mount: Deactivated successfully. Jun 21 02:18:35.226503 kubelet[2659]: E0621 02:18:35.226450 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2n26r" podUID="9eb38543-6254-4688-8d4b-892c4068ec20" Jun 21 02:18:35.311863 kubelet[2659]: I0621 02:18:35.311821 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:18:35.312235 kubelet[2659]: E0621 02:18:35.312084 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:35.313941 containerd[1518]: time="2025-06-21T02:18:35.313885147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 02:18:37.225579 kubelet[2659]: E0621 02:18:37.225492 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2n26r" podUID="9eb38543-6254-4688-8d4b-892c4068ec20" Jun 21 02:18:37.883794 containerd[1518]: time="2025-06-21T02:18:37.883747294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:37.884592 containerd[1518]: time="2025-06-21T02:18:37.884437586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 21 02:18:37.885265 containerd[1518]: time="2025-06-21T02:18:37.885240800Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:37.887067 containerd[1518]: time="2025-06-21T02:18:37.887037311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:37.887921 containerd[1518]: time="2025-06-21T02:18:37.887888245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 2.573856055s" Jun 21 02:18:37.887921 containerd[1518]: time="2025-06-21T02:18:37.887918886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 21 02:18:37.889937 containerd[1518]: time="2025-06-21T02:18:37.889899559Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 02:18:37.896740 containerd[1518]: time="2025-06-21T02:18:37.896346269Z" level=info msg="Container f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:37.903186 containerd[1518]: time="2025-06-21T02:18:37.903126625Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\"" Jun 21 02:18:37.903621 containerd[1518]: time="2025-06-21T02:18:37.903583793Z" level=info msg="StartContainer for \"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\"" Jun 21 02:18:37.906152 containerd[1518]: time="2025-06-21T02:18:37.906115556Z" level=info msg="connecting to shim f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267" address="unix:///run/containerd/s/ef569fc67a59fff5ef387c7e826693625c0b7fe7acb68388bc55271593da362f" protocol=ttrpc version=3 Jun 21 02:18:37.926367 systemd[1]: Started cri-containerd-f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267.scope - libcontainer container f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267. 
Jun 21 02:18:37.958329 containerd[1518]: time="2025-06-21T02:18:37.958198286Z" level=info msg="StartContainer for \"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\" returns successfully" Jun 21 02:18:38.542157 containerd[1518]: time="2025-06-21T02:18:38.542106564Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 02:18:38.544428 systemd[1]: cri-containerd-f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267.scope: Deactivated successfully. Jun 21 02:18:38.544709 systemd[1]: cri-containerd-f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267.scope: Consumed 446ms CPU time, 174.5M memory peak, 2.5M read from disk, 165.8M written to disk. Jun 21 02:18:38.551243 containerd[1518]: time="2025-06-21T02:18:38.551191555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\" id:\"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\" pid:3415 exited_at:{seconds:1750472318 nanos:550868269}" Jun 21 02:18:38.559579 containerd[1518]: time="2025-06-21T02:18:38.559530333Z" level=info msg="received exit event container_id:\"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\" id:\"f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267\" pid:3415 exited_at:{seconds:1750472318 nanos:550868269}" Jun 21 02:18:38.577699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f668ef4989d7f7fa7d7e7af3b56abeed6f0d01b0625c674de20749f05243a267-rootfs.mount: Deactivated successfully. 
Jun 21 02:18:38.583588 kubelet[2659]: I0621 02:18:38.583558 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 02:18:38.692575 kubelet[2659]: I0621 02:18:38.692508 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eff7017-d74b-4f5f-ae18-414a858ebf5d-config-volume\") pod \"coredns-668d6bf9bc-zl99q\" (UID: \"4eff7017-d74b-4f5f-ae18-414a858ebf5d\") " pod="kube-system/coredns-668d6bf9bc-zl99q" Jun 21 02:18:38.692575 kubelet[2659]: I0621 02:18:38.692552 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdgw\" (UniqueName: \"kubernetes.io/projected/4eff7017-d74b-4f5f-ae18-414a858ebf5d-kube-api-access-6qdgw\") pod \"coredns-668d6bf9bc-zl99q\" (UID: \"4eff7017-d74b-4f5f-ae18-414a858ebf5d\") " pod="kube-system/coredns-668d6bf9bc-zl99q" Jun 21 02:18:38.694944 systemd[1]: Created slice kubepods-burstable-pod4eff7017_d74b_4f5f_ae18_414a858ebf5d.slice - libcontainer container kubepods-burstable-pod4eff7017_d74b_4f5f_ae18_414a858ebf5d.slice. Jun 21 02:18:38.702081 systemd[1]: Created slice kubepods-burstable-pod199308d2_e274_40e1_95da_3749d3ca34e0.slice - libcontainer container kubepods-burstable-pod199308d2_e274_40e1_95da_3749d3ca34e0.slice. Jun 21 02:18:38.708814 systemd[1]: Created slice kubepods-besteffort-pod43f29484_ed5e_42bf_89ce_fa67b3c94a12.slice - libcontainer container kubepods-besteffort-pod43f29484_ed5e_42bf_89ce_fa67b3c94a12.slice. Jun 21 02:18:38.714566 systemd[1]: Created slice kubepods-besteffort-pode5475b0a_595a_4d8e_8141_0513f82a526c.slice - libcontainer container kubepods-besteffort-pode5475b0a_595a_4d8e_8141_0513f82a526c.slice. Jun 21 02:18:38.721468 systemd[1]: Created slice kubepods-besteffort-pod1868a1e7_817a_401c_82f9_0302bdb55897.slice - libcontainer container kubepods-besteffort-pod1868a1e7_817a_401c_82f9_0302bdb55897.slice. 
Jun 21 02:18:38.726683 systemd[1]: Created slice kubepods-besteffort-pod90b0c939_56e9_4180_b51d_ba8dc6b8a7e5.slice - libcontainer container kubepods-besteffort-pod90b0c939_56e9_4180_b51d_ba8dc6b8a7e5.slice. Jun 21 02:18:38.733892 systemd[1]: Created slice kubepods-besteffort-podf361e498_d382_4c40_85d8_b8697e1e11f0.slice - libcontainer container kubepods-besteffort-podf361e498_d382_4c40_85d8_b8697e1e11f0.slice. Jun 21 02:18:38.747264 systemd[1]: Created slice kubepods-besteffort-pod13d8fede_0173_472e_b7e7_ee6276d45c03.slice - libcontainer container kubepods-besteffort-pod13d8fede_0173_472e_b7e7_ee6276d45c03.slice. Jun 21 02:18:38.793348 kubelet[2659]: I0621 02:18:38.793218 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e5475b0a-595a-4d8e-8141-0513f82a526c-goldmane-key-pair\") pod \"goldmane-5bd85449d4-j6rw9\" (UID: \"e5475b0a-595a-4d8e-8141-0513f82a526c\") " pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:38.793348 kubelet[2659]: I0621 02:18:38.793266 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f361e498-d382-4c40-85d8-b8697e1e11f0-calico-apiserver-certs\") pod \"calico-apiserver-5477c4f559-kt7q2\" (UID: \"f361e498-d382-4c40-85d8-b8697e1e11f0\") " pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" Jun 21 02:18:38.793348 kubelet[2659]: I0621 02:18:38.793286 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pjw6\" (UniqueName: \"kubernetes.io/projected/13d8fede-0173-472e-b7e7-ee6276d45c03-kube-api-access-9pjw6\") pod \"calico-apiserver-6c75966784-9qsx8\" (UID: \"13d8fede-0173-472e-b7e7-ee6276d45c03\") " pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" Jun 21 02:18:38.793348 kubelet[2659]: I0621 02:18:38.793325 2659 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcdk\" (UniqueName: \"kubernetes.io/projected/43f29484-ed5e-42bf-89ce-fa67b3c94a12-kube-api-access-xgcdk\") pod \"calico-apiserver-6c75966784-8tlmb\" (UID: \"43f29484-ed5e-42bf-89ce-fa67b3c94a12\") " pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" Jun 21 02:18:38.793348 kubelet[2659]: I0621 02:18:38.793345 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-ca-bundle\") pod \"whisker-6d66798546-9qrph\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " pod="calico-system/whisker-6d66798546-9qrph" Jun 21 02:18:38.793779 kubelet[2659]: I0621 02:18:38.793364 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199308d2-e274-40e1-95da-3749d3ca34e0-config-volume\") pod \"coredns-668d6bf9bc-8s44t\" (UID: \"199308d2-e274-40e1-95da-3749d3ca34e0\") " pod="kube-system/coredns-668d6bf9bc-8s44t" Jun 21 02:18:38.793779 kubelet[2659]: I0621 02:18:38.793382 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5475b0a-595a-4d8e-8141-0513f82a526c-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-j6rw9\" (UID: \"e5475b0a-595a-4d8e-8141-0513f82a526c\") " pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:38.793779 kubelet[2659]: I0621 02:18:38.793400 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8qm\" (UniqueName: \"kubernetes.io/projected/f361e498-d382-4c40-85d8-b8697e1e11f0-kube-api-access-5v8qm\") pod \"calico-apiserver-5477c4f559-kt7q2\" (UID: \"f361e498-d382-4c40-85d8-b8697e1e11f0\") " pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" Jun 
21 02:18:38.793779 kubelet[2659]: I0621 02:18:38.793427 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfr6\" (UniqueName: \"kubernetes.io/projected/90b0c939-56e9-4180-b51d-ba8dc6b8a7e5-kube-api-access-khfr6\") pod \"calico-kube-controllers-79c8dcd6dd-dk8jf\" (UID: \"90b0c939-56e9-4180-b51d-ba8dc6b8a7e5\") " pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" Jun 21 02:18:38.793779 kubelet[2659]: I0621 02:18:38.793456 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kdx\" (UniqueName: \"kubernetes.io/projected/199308d2-e274-40e1-95da-3749d3ca34e0-kube-api-access-f4kdx\") pod \"coredns-668d6bf9bc-8s44t\" (UID: \"199308d2-e274-40e1-95da-3749d3ca34e0\") " pod="kube-system/coredns-668d6bf9bc-8s44t" Jun 21 02:18:38.793980 kubelet[2659]: I0621 02:18:38.793479 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjdjj\" (UniqueName: \"kubernetes.io/projected/1868a1e7-817a-401c-82f9-0302bdb55897-kube-api-access-fjdjj\") pod \"whisker-6d66798546-9qrph\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " pod="calico-system/whisker-6d66798546-9qrph" Jun 21 02:18:38.793980 kubelet[2659]: I0621 02:18:38.793498 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-backend-key-pair\") pod \"whisker-6d66798546-9qrph\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " pod="calico-system/whisker-6d66798546-9qrph" Jun 21 02:18:38.793980 kubelet[2659]: I0621 02:18:38.793515 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43f29484-ed5e-42bf-89ce-fa67b3c94a12-calico-apiserver-certs\") pod 
\"calico-apiserver-6c75966784-8tlmb\" (UID: \"43f29484-ed5e-42bf-89ce-fa67b3c94a12\") " pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" Jun 21 02:18:38.793980 kubelet[2659]: I0621 02:18:38.793533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b0c939-56e9-4180-b51d-ba8dc6b8a7e5-tigera-ca-bundle\") pod \"calico-kube-controllers-79c8dcd6dd-dk8jf\" (UID: \"90b0c939-56e9-4180-b51d-ba8dc6b8a7e5\") " pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" Jun 21 02:18:38.793980 kubelet[2659]: I0621 02:18:38.793548 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wsf\" (UniqueName: \"kubernetes.io/projected/e5475b0a-595a-4d8e-8141-0513f82a526c-kube-api-access-q4wsf\") pod \"goldmane-5bd85449d4-j6rw9\" (UID: \"e5475b0a-595a-4d8e-8141-0513f82a526c\") " pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:38.794173 kubelet[2659]: I0621 02:18:38.793567 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13d8fede-0173-472e-b7e7-ee6276d45c03-calico-apiserver-certs\") pod \"calico-apiserver-6c75966784-9qsx8\" (UID: \"13d8fede-0173-472e-b7e7-ee6276d45c03\") " pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" Jun 21 02:18:38.794173 kubelet[2659]: I0621 02:18:38.793582 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5475b0a-595a-4d8e-8141-0513f82a526c-config\") pod \"goldmane-5bd85449d4-j6rw9\" (UID: \"e5475b0a-595a-4d8e-8141-0513f82a526c\") " pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:38.999383 kubelet[2659]: E0621 02:18:38.999020 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:38.999593 containerd[1518]: time="2025-06-21T02:18:38.999555170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl99q,Uid:4eff7017-d74b-4f5f-ae18-414a858ebf5d,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:39.006277 kubelet[2659]: E0621 02:18:39.006245 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:39.006745 containerd[1518]: time="2025-06-21T02:18:39.006687844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s44t,Uid:199308d2-e274-40e1-95da-3749d3ca34e0,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:39.011756 containerd[1518]: time="2025-06-21T02:18:39.011644764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-8tlmb,Uid:43f29484-ed5e-42bf-89ce-fa67b3c94a12,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:39.046363 containerd[1518]: time="2025-06-21T02:18:39.041751566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c8dcd6dd-dk8jf,Uid:90b0c939-56e9-4180-b51d-ba8dc6b8a7e5,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:39.046363 containerd[1518]: time="2025-06-21T02:18:39.042070092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-j6rw9,Uid:e5475b0a-595a-4d8e-8141-0513f82a526c,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:39.046363 containerd[1518]: time="2025-06-21T02:18:39.042220814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d66798546-9qrph,Uid:1868a1e7-817a-401c-82f9-0302bdb55897,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:39.047728 containerd[1518]: time="2025-06-21T02:18:39.047693462Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5477c4f559-kt7q2,Uid:f361e498-d382-4c40-85d8-b8697e1e11f0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:39.055937 containerd[1518]: time="2025-06-21T02:18:39.055906953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-9qsx8,Uid:13d8fede-0173-472e-b7e7-ee6276d45c03,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:39.241056 systemd[1]: Created slice kubepods-besteffort-pod9eb38543_6254_4688_8d4b_892c4068ec20.slice - libcontainer container kubepods-besteffort-pod9eb38543_6254_4688_8d4b_892c4068ec20.slice. Jun 21 02:18:39.256953 containerd[1518]: time="2025-06-21T02:18:39.250068146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2n26r,Uid:9eb38543-6254-4688-8d4b-892c4068ec20,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:39.346522 containerd[1518]: time="2025-06-21T02:18:39.336973620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 21 02:18:39.501411 containerd[1518]: time="2025-06-21T02:18:39.501363616Z" level=error msg="Failed to destroy network for sandbox \"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.503187 containerd[1518]: time="2025-06-21T02:18:39.503140564Z" level=error msg="Failed to destroy network for sandbox \"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.504675 containerd[1518]: time="2025-06-21T02:18:39.504637348Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d66798546-9qrph,Uid:1868a1e7-817a-401c-82f9-0302bdb55897,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.505329 containerd[1518]: time="2025-06-21T02:18:39.505295479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-8tlmb,Uid:43f29484-ed5e-42bf-89ce-fa67b3c94a12,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.505698 kubelet[2659]: E0621 02:18:39.505564 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.506264 kubelet[2659]: E0621 02:18:39.505897 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.508960 kubelet[2659]: E0621 02:18:39.507852 
2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" Jun 21 02:18:39.508960 kubelet[2659]: E0621 02:18:39.507897 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" Jun 21 02:18:39.508960 kubelet[2659]: E0621 02:18:39.507961 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c75966784-8tlmb_calico-apiserver(43f29484-ed5e-42bf-89ce-fa67b3c94a12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c75966784-8tlmb_calico-apiserver(43f29484-ed5e-42bf-89ce-fa67b3c94a12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da2bf6a9ac255bb15a4f3f58bdcfacb42336831f32c8d3728dd2d65e1a084808\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" podUID="43f29484-ed5e-42bf-89ce-fa67b3c94a12" Jun 21 02:18:39.509767 kubelet[2659]: E0621 02:18:39.508326 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d66798546-9qrph" Jun 21 02:18:39.509767 kubelet[2659]: E0621 02:18:39.508362 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d66798546-9qrph" Jun 21 02:18:39.509767 kubelet[2659]: E0621 02:18:39.508448 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d66798546-9qrph_calico-system(1868a1e7-817a-401c-82f9-0302bdb55897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d66798546-9qrph_calico-system(1868a1e7-817a-401c-82f9-0302bdb55897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30f9c4b08ea67df991e97d386493f79120be03ad48d0b7b7381b68523d720346\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d66798546-9qrph" podUID="1868a1e7-817a-401c-82f9-0302bdb55897" Jun 21 02:18:39.514903 containerd[1518]: time="2025-06-21T02:18:39.514860072Z" level=error msg="Failed to destroy network for sandbox \"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 
02:18:39.516399 containerd[1518]: time="2025-06-21T02:18:39.516338176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-j6rw9,Uid:e5475b0a-595a-4d8e-8141-0513f82a526c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.517522 kubelet[2659]: E0621 02:18:39.516533 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.517522 kubelet[2659]: E0621 02:18:39.516608 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:39.517522 kubelet[2659]: E0621 02:18:39.516628 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-5bd85449d4-j6rw9" Jun 21 02:18:39.517612 kubelet[2659]: E0621 02:18:39.516672 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-j6rw9_calico-system(e5475b0a-595a-4d8e-8141-0513f82a526c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-j6rw9_calico-system(e5475b0a-595a-4d8e-8141-0513f82a526c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c20541302c34d43581bae950a6ab7695341b9873476c40f7995ced13819d311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-j6rw9" podUID="e5475b0a-595a-4d8e-8141-0513f82a526c" Jun 21 02:18:39.520876 containerd[1518]: time="2025-06-21T02:18:39.520835808Z" level=error msg="Failed to destroy network for sandbox \"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.523625 containerd[1518]: time="2025-06-21T02:18:39.523555531Z" level=error msg="Failed to destroy network for sandbox \"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.524119 containerd[1518]: time="2025-06-21T02:18:39.524071140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c8dcd6dd-dk8jf,Uid:90b0c939-56e9-4180-b51d-ba8dc6b8a7e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.524516 kubelet[2659]: E0621 02:18:39.524301 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.524516 kubelet[2659]: E0621 02:18:39.524370 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" Jun 21 02:18:39.524516 kubelet[2659]: E0621 02:18:39.524387 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" Jun 21 02:18:39.524655 kubelet[2659]: E0621 02:18:39.524450 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79c8dcd6dd-dk8jf_calico-system(90b0c939-56e9-4180-b51d-ba8dc6b8a7e5)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-79c8dcd6dd-dk8jf_calico-system(90b0c939-56e9-4180-b51d-ba8dc6b8a7e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3841cc460abb3f52916f2cc97b7eb0e47634c6a1de10b178c371ba53bff24172\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" podUID="90b0c939-56e9-4180-b51d-ba8dc6b8a7e5" Jun 21 02:18:39.525073 containerd[1518]: time="2025-06-21T02:18:39.525030955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s44t,Uid:199308d2-e274-40e1-95da-3749d3ca34e0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.525467 kubelet[2659]: E0621 02:18:39.525237 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.525467 kubelet[2659]: E0621 02:18:39.525272 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8s44t" Jun 21 02:18:39.525467 kubelet[2659]: E0621 02:18:39.525287 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8s44t" Jun 21 02:18:39.525560 kubelet[2659]: E0621 02:18:39.525363 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8s44t_kube-system(199308d2-e274-40e1-95da-3749d3ca34e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8s44t_kube-system(199308d2-e274-40e1-95da-3749d3ca34e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3daa952da5c3bbbd6b12edb074cba297b110c6476b03ffa59a55dcf45318b9f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8s44t" podUID="199308d2-e274-40e1-95da-3749d3ca34e0" Jun 21 02:18:39.528392 containerd[1518]: time="2025-06-21T02:18:39.528345448Z" level=error msg="Failed to destroy network for sandbox \"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.529379 containerd[1518]: time="2025-06-21T02:18:39.529344064Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c75966784-9qsx8,Uid:13d8fede-0173-472e-b7e7-ee6276d45c03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.530299 kubelet[2659]: E0621 02:18:39.530186 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.530401 kubelet[2659]: E0621 02:18:39.530311 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" Jun 21 02:18:39.530432 kubelet[2659]: E0621 02:18:39.530403 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" Jun 21 02:18:39.530507 kubelet[2659]: E0621 02:18:39.530441 2659 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c75966784-9qsx8_calico-apiserver(13d8fede-0173-472e-b7e7-ee6276d45c03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c75966784-9qsx8_calico-apiserver(13d8fede-0173-472e-b7e7-ee6276d45c03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7857aa078e2ade58367f49056b9754479297391e3cceffe17035b7fa75f863b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" podUID="13d8fede-0173-472e-b7e7-ee6276d45c03" Jun 21 02:18:39.534518 containerd[1518]: time="2025-06-21T02:18:39.534475227Z" level=error msg="Failed to destroy network for sandbox \"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.535420 containerd[1518]: time="2025-06-21T02:18:39.535386721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl99q,Uid:4eff7017-d74b-4f5f-ae18-414a858ebf5d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.536002 containerd[1518]: time="2025-06-21T02:18:39.535973291Z" level=error msg="Failed to destroy network for sandbox \"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.536297 kubelet[2659]: E0621 02:18:39.536263 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.536349 kubelet[2659]: E0621 02:18:39.536303 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zl99q" Jun 21 02:18:39.536349 kubelet[2659]: E0621 02:18:39.536331 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zl99q" Jun 21 02:18:39.536406 kubelet[2659]: E0621 02:18:39.536364 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zl99q_kube-system(4eff7017-d74b-4f5f-ae18-414a858ebf5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zl99q_kube-system(4eff7017-d74b-4f5f-ae18-414a858ebf5d)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"ea72302060f65492025e4b1a4c903d2d5e08f8e3d755c3fe770c0f9290da0c19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zl99q" podUID="4eff7017-d74b-4f5f-ae18-414a858ebf5d" Jun 21 02:18:39.537132 containerd[1518]: time="2025-06-21T02:18:39.537080908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5477c4f559-kt7q2,Uid:f361e498-d382-4c40-85d8-b8697e1e11f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.537484 kubelet[2659]: E0621 02:18:39.537428 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.537597 kubelet[2659]: E0621 02:18:39.537576 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" Jun 21 02:18:39.537696 kubelet[2659]: E0621 02:18:39.537680 2659 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" Jun 21 02:18:39.537942 kubelet[2659]: E0621 02:18:39.537910 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5477c4f559-kt7q2_calico-apiserver(f361e498-d382-4c40-85d8-b8697e1e11f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5477c4f559-kt7q2_calico-apiserver(f361e498-d382-4c40-85d8-b8697e1e11f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"844b6e4b355e0498bf645893ed89e142cf1ec63352269d4fa32a8f0667d63a9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" podUID="f361e498-d382-4c40-85d8-b8697e1e11f0" Jun 21 02:18:39.538024 containerd[1518]: time="2025-06-21T02:18:39.537923242Z" level=error msg="Failed to destroy network for sandbox \"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.538901 containerd[1518]: time="2025-06-21T02:18:39.538866017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2n26r,Uid:9eb38543-6254-4688-8d4b-892c4068ec20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.539479 kubelet[2659]: E0621 02:18:39.539032 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:18:39.539479 kubelet[2659]: E0621 02:18:39.539066 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:39.539479 kubelet[2659]: E0621 02:18:39.539083 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2n26r" Jun 21 02:18:39.539579 kubelet[2659]: E0621 02:18:39.539112 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2n26r_calico-system(9eb38543-6254-4688-8d4b-892c4068ec20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2n26r_calico-system(9eb38543-6254-4688-8d4b-892c4068ec20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"609ca6170e2fc8d748cde716df1afa0e095f3caf65f357168b15b381ed115739\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2n26r" podUID="9eb38543-6254-4688-8d4b-892c4068ec20" Jun 21 02:18:39.904756 systemd[1]: run-netns-cni\x2ddf9aabf3\x2db05e\x2d99db\x2df978\x2db51db78e9a45.mount: Deactivated successfully. Jun 21 02:18:42.846300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152839023.mount: Deactivated successfully. Jun 21 02:18:43.045220 containerd[1518]: time="2025-06-21T02:18:43.030642153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 21 02:18:43.045723 containerd[1518]: time="2025-06-21T02:18:43.033934440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 3.687429507s" Jun 21 02:18:43.045723 containerd[1518]: time="2025-06-21T02:18:43.045260162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 21 02:18:43.045723 containerd[1518]: time="2025-06-21T02:18:43.037776295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:43.045957 containerd[1518]: time="2025-06-21T02:18:43.045920492Z" level=info msg="ImageCreate event 
name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:43.046541 containerd[1518]: time="2025-06-21T02:18:43.046500220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:43.054091 containerd[1518]: time="2025-06-21T02:18:43.054048528Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 02:18:43.063475 containerd[1518]: time="2025-06-21T02:18:43.061389273Z" level=info msg="Container e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:43.079951 containerd[1518]: time="2025-06-21T02:18:43.079911978Z" level=info msg="CreateContainer within sandbox \"06006037c5ff620c30b7605b348755d1e122245becfb34f893ee3427c5393342\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\"" Jun 21 02:18:43.080769 containerd[1518]: time="2025-06-21T02:18:43.080738990Z" level=info msg="StartContainer for \"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\"" Jun 21 02:18:43.082406 containerd[1518]: time="2025-06-21T02:18:43.082336493Z" level=info msg="connecting to shim e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df" address="unix:///run/containerd/s/ef569fc67a59fff5ef387c7e826693625c0b7fe7acb68388bc55271593da362f" protocol=ttrpc version=3 Jun 21 02:18:43.109410 systemd[1]: Started cri-containerd-e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df.scope - libcontainer container e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df. 
Jun 21 02:18:43.155217 containerd[1518]: time="2025-06-21T02:18:43.155172416Z" level=info msg="StartContainer for \"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\" returns successfully" Jun 21 02:18:43.378345 kubelet[2659]: I0621 02:18:43.376638 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c6647" podStartSLOduration=1.585359086 podStartE2EDuration="13.376621305s" podCreationTimestamp="2025-06-21 02:18:30 +0000 UTC" firstStartedPulling="2025-06-21 02:18:31.255513965 +0000 UTC m=+21.124690884" lastFinishedPulling="2025-06-21 02:18:43.046776184 +0000 UTC m=+32.915953103" observedRunningTime="2025-06-21 02:18:43.37418227 +0000 UTC m=+33.243359189" watchObservedRunningTime="2025-06-21 02:18:43.376621305 +0000 UTC m=+33.245798224" Jun 21 02:18:43.459917 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 21 02:18:43.460058 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 21 02:18:43.597769 containerd[1518]: time="2025-06-21T02:18:43.597700350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\" id:\"96e2c4e49b4e4059402e3a698c571a37c056438ea144085f709e76fc0be436cc\" pid:3816 exit_status:1 exited_at:{seconds:1750472323 nanos:597227863}" Jun 21 02:18:43.736716 kubelet[2659]: I0621 02:18:43.736677 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-backend-key-pair\") pod \"1868a1e7-817a-401c-82f9-0302bdb55897\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " Jun 21 02:18:43.736716 kubelet[2659]: I0621 02:18:43.736726 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjdjj\" (UniqueName: \"kubernetes.io/projected/1868a1e7-817a-401c-82f9-0302bdb55897-kube-api-access-fjdjj\") pod \"1868a1e7-817a-401c-82f9-0302bdb55897\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " Jun 21 02:18:43.736886 kubelet[2659]: I0621 02:18:43.736754 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-ca-bundle\") pod \"1868a1e7-817a-401c-82f9-0302bdb55897\" (UID: \"1868a1e7-817a-401c-82f9-0302bdb55897\") " Jun 21 02:18:43.737199 kubelet[2659]: I0621 02:18:43.737161 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1868a1e7-817a-401c-82f9-0302bdb55897" (UID: "1868a1e7-817a-401c-82f9-0302bdb55897"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 02:18:43.760810 kubelet[2659]: I0621 02:18:43.760667 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1868a1e7-817a-401c-82f9-0302bdb55897" (UID: "1868a1e7-817a-401c-82f9-0302bdb55897"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 02:18:43.760949 kubelet[2659]: I0621 02:18:43.760918 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1868a1e7-817a-401c-82f9-0302bdb55897-kube-api-access-fjdjj" (OuterVolumeSpecName: "kube-api-access-fjdjj") pod "1868a1e7-817a-401c-82f9-0302bdb55897" (UID: "1868a1e7-817a-401c-82f9-0302bdb55897"). InnerVolumeSpecName "kube-api-access-fjdjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 02:18:43.837357 kubelet[2659]: I0621 02:18:43.837306 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 21 02:18:43.837357 kubelet[2659]: I0621 02:18:43.837345 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjdjj\" (UniqueName: \"kubernetes.io/projected/1868a1e7-817a-401c-82f9-0302bdb55897-kube-api-access-fjdjj\") on node \"localhost\" DevicePath \"\"" Jun 21 02:18:43.837357 kubelet[2659]: I0621 02:18:43.837356 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1868a1e7-817a-401c-82f9-0302bdb55897-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 21 02:18:43.847639 systemd[1]: 
var-lib-kubelet-pods-1868a1e7\x2d817a\x2d401c\x2d82f9\x2d0302bdb55897-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjdjj.mount: Deactivated successfully. Jun 21 02:18:43.848375 systemd[1]: var-lib-kubelet-pods-1868a1e7\x2d817a\x2d401c\x2d82f9\x2d0302bdb55897-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 02:18:44.233692 systemd[1]: Removed slice kubepods-besteffort-pod1868a1e7_817a_401c_82f9_0302bdb55897.slice - libcontainer container kubepods-besteffort-pod1868a1e7_817a_401c_82f9_0302bdb55897.slice. Jun 21 02:18:44.409807 systemd[1]: Created slice kubepods-besteffort-poda3a14f7f_a8cf_434d_8ca8_5143ad5f203c.slice - libcontainer container kubepods-besteffort-poda3a14f7f_a8cf_434d_8ca8_5143ad5f203c.slice. Jun 21 02:18:44.440648 kubelet[2659]: I0621 02:18:44.440607 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3a14f7f-a8cf-434d-8ca8-5143ad5f203c-whisker-backend-key-pair\") pod \"whisker-96cf778f5-7t9mf\" (UID: \"a3a14f7f-a8cf-434d-8ca8-5143ad5f203c\") " pod="calico-system/whisker-96cf778f5-7t9mf" Jun 21 02:18:44.440648 kubelet[2659]: I0621 02:18:44.440651 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a14f7f-a8cf-434d-8ca8-5143ad5f203c-whisker-ca-bundle\") pod \"whisker-96cf778f5-7t9mf\" (UID: \"a3a14f7f-a8cf-434d-8ca8-5143ad5f203c\") " pod="calico-system/whisker-96cf778f5-7t9mf" Jun 21 02:18:44.440989 kubelet[2659]: I0621 02:18:44.440777 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvs86\" (UniqueName: \"kubernetes.io/projected/a3a14f7f-a8cf-434d-8ca8-5143ad5f203c-kube-api-access-pvs86\") pod \"whisker-96cf778f5-7t9mf\" (UID: \"a3a14f7f-a8cf-434d-8ca8-5143ad5f203c\") " 
pod="calico-system/whisker-96cf778f5-7t9mf" Jun 21 02:18:44.458357 containerd[1518]: time="2025-06-21T02:18:44.458311382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\" id:\"85810237d468529537c0fe5c79a0d4d8a39cf63269744b4481b2721ffaf1b383\" pid:3863 exit_status:1 exited_at:{seconds:1750472324 nanos:457756694}" Jun 21 02:18:44.733493 containerd[1518]: time="2025-06-21T02:18:44.733199936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96cf778f5-7t9mf,Uid:a3a14f7f-a8cf-434d-8ca8-5143ad5f203c,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:45.076329 systemd-networkd[1439]: calia0466a6c8d2: Link UP Jun 21 02:18:45.076561 systemd-networkd[1439]: calia0466a6c8d2: Gained carrier Jun 21 02:18:45.096434 containerd[1518]: 2025-06-21 02:18:44.759 [INFO][3878] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:45.096434 containerd[1518]: 2025-06-21 02:18:44.824 [INFO][3878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--96cf778f5--7t9mf-eth0 whisker-96cf778f5- calico-system a3a14f7f-a8cf-434d-8ca8-5143ad5f203c 894 0 2025-06-21 02:18:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:96cf778f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-96cf778f5-7t9mf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia0466a6c8d2 [] [] }} ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-" Jun 21 02:18:45.096434 containerd[1518]: 2025-06-21 02:18:44.826 [INFO][3878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.096434 containerd[1518]: 2025-06-21 02:18:44.958 [INFO][3892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" HandleID="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Workload="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.958 [INFO][3892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" HandleID="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Workload="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039c350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-96cf778f5-7t9mf", "timestamp":"2025-06-21 02:18:44.958302957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.958 [INFO][3892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.958 [INFO][3892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.962 [INFO][3892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.982 [INFO][3892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" host="localhost" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:44.996 [INFO][3892] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:45.002 [INFO][3892] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:45.004 [INFO][3892] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:45.006 [INFO][3892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:45.096660 containerd[1518]: 2025-06-21 02:18:45.006 [INFO][3892] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" host="localhost" Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.008 [INFO][3892] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759 Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.045 [INFO][3892] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" host="localhost" Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.055 [INFO][3892] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" host="localhost" Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.055 [INFO][3892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" host="localhost" Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.055 [INFO][3892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:45.096854 containerd[1518]: 2025-06-21 02:18:45.056 [INFO][3892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" HandleID="k8s-pod-network.da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Workload="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.096961 containerd[1518]: 2025-06-21 02:18:45.058 [INFO][3878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--96cf778f5--7t9mf-eth0", GenerateName:"whisker-96cf778f5-", Namespace:"calico-system", SelfLink:"", UID:"a3a14f7f-a8cf-434d-8ca8-5143ad5f203c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"96cf778f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-96cf778f5-7t9mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0466a6c8d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:45.096961 containerd[1518]: 2025-06-21 02:18:45.058 [INFO][3878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.097024 containerd[1518]: 2025-06-21 02:18:45.058 [INFO][3878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0466a6c8d2 ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.097024 containerd[1518]: 2025-06-21 02:18:45.076 [INFO][3878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.097064 containerd[1518]: 2025-06-21 02:18:45.076 [INFO][3878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" 
WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--96cf778f5--7t9mf-eth0", GenerateName:"whisker-96cf778f5-", Namespace:"calico-system", SelfLink:"", UID:"a3a14f7f-a8cf-434d-8ca8-5143ad5f203c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"96cf778f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759", Pod:"whisker-96cf778f5-7t9mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0466a6c8d2", MAC:"56:35:6c:3f:45:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:45.097107 containerd[1518]: 2025-06-21 02:18:45.091 [INFO][3878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" Namespace="calico-system" Pod="whisker-96cf778f5-7t9mf" WorkloadEndpoint="localhost-k8s-whisker--96cf778f5--7t9mf-eth0" Jun 21 02:18:45.138703 containerd[1518]: time="2025-06-21T02:18:45.138657865Z" level=info msg="connecting to shim 
da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759" address="unix:///run/containerd/s/522751882cca264d946ff5b546c53ad4e2b490500aed11ba191e81d3396f4d82" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:45.172725 systemd[1]: Started cri-containerd-da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759.scope - libcontainer container da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759. Jun 21 02:18:45.183417 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:45.201965 containerd[1518]: time="2025-06-21T02:18:45.201929566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96cf778f5-7t9mf,Uid:a3a14f7f-a8cf-434d-8ca8-5143ad5f203c,Namespace:calico-system,Attempt:0,} returns sandbox id \"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759\"" Jun 21 02:18:45.205659 containerd[1518]: time="2025-06-21T02:18:45.205630856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 02:18:46.082238 containerd[1518]: time="2025-06-21T02:18:46.081706271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:46.082627 containerd[1518]: time="2025-06-21T02:18:46.082576603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 21 02:18:46.083130 containerd[1518]: time="2025-06-21T02:18:46.083101330Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:46.085193 containerd[1518]: time="2025-06-21T02:18:46.085139677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 21 02:18:46.088271 containerd[1518]: time="2025-06-21T02:18:46.088237038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 882.45482ms" Jun 21 02:18:46.088271 containerd[1518]: time="2025-06-21T02:18:46.088268999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 21 02:18:46.092163 containerd[1518]: time="2025-06-21T02:18:46.091285959Z" level=info msg="CreateContainer within sandbox \"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 02:18:46.112188 containerd[1518]: time="2025-06-21T02:18:46.112142116Z" level=info msg="Container 58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:46.122623 containerd[1518]: time="2025-06-21T02:18:46.122576694Z" level=info msg="CreateContainer within sandbox \"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37\"" Jun 21 02:18:46.123621 containerd[1518]: time="2025-06-21T02:18:46.123323584Z" level=info msg="StartContainer for \"58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37\"" Jun 21 02:18:46.126344 containerd[1518]: time="2025-06-21T02:18:46.126086421Z" level=info msg="connecting to shim 58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37" address="unix:///run/containerd/s/522751882cca264d946ff5b546c53ad4e2b490500aed11ba191e81d3396f4d82" protocol=ttrpc 
version=3 Jun 21 02:18:46.149378 systemd[1]: Started cri-containerd-58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37.scope - libcontainer container 58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37. Jun 21 02:18:46.208147 containerd[1518]: time="2025-06-21T02:18:46.207074457Z" level=info msg="StartContainer for \"58512b8d31a68b15b3758f144c1a5a318297f98598f6ed2d50037fa90e8e8c37\" returns successfully" Jun 21 02:18:46.208147 containerd[1518]: time="2025-06-21T02:18:46.208126391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 02:18:46.229065 kubelet[2659]: I0621 02:18:46.229020 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1868a1e7-817a-401c-82f9-0302bdb55897" path="/var/lib/kubelet/pods/1868a1e7-817a-401c-82f9-0302bdb55897/volumes" Jun 21 02:18:46.848378 systemd-networkd[1439]: calia0466a6c8d2: Gained IPv6LL Jun 21 02:18:47.966594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213798140.mount: Deactivated successfully. 
Jun 21 02:18:48.019782 containerd[1518]: time="2025-06-21T02:18:48.019731710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:48.020252 containerd[1518]: time="2025-06-21T02:18:48.020220796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 21 02:18:48.021033 containerd[1518]: time="2025-06-21T02:18:48.021008126Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:48.023380 containerd[1518]: time="2025-06-21T02:18:48.023355196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:48.024059 containerd[1518]: time="2025-06-21T02:18:48.024021284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 1.815865973s" Jun 21 02:18:48.024059 containerd[1518]: time="2025-06-21T02:18:48.024051044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 21 02:18:48.027179 containerd[1518]: time="2025-06-21T02:18:48.027144564Z" level=info msg="CreateContainer within sandbox \"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 02:18:48.033237 
containerd[1518]: time="2025-06-21T02:18:48.033148520Z" level=info msg="Container 68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:48.040673 containerd[1518]: time="2025-06-21T02:18:48.040627495Z" level=info msg="CreateContainer within sandbox \"da0daf732c2ba695384efe3380a878d0b44a701e4dce9d1b51fbb4d22af2a759\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf\"" Jun 21 02:18:48.041093 containerd[1518]: time="2025-06-21T02:18:48.041059381Z" level=info msg="StartContainer for \"68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf\"" Jun 21 02:18:48.043917 containerd[1518]: time="2025-06-21T02:18:48.043876376Z" level=info msg="connecting to shim 68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf" address="unix:///run/containerd/s/522751882cca264d946ff5b546c53ad4e2b490500aed11ba191e81d3396f4d82" protocol=ttrpc version=3 Jun 21 02:18:48.060358 systemd[1]: Started cri-containerd-68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf.scope - libcontainer container 68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf. 
Jun 21 02:18:48.091441 containerd[1518]: time="2025-06-21T02:18:48.091409140Z" level=info msg="StartContainer for \"68a1cc3710c4650e226128e92ab0148761e8a2c215c9e1f464b1d4eef2007edf\" returns successfully" Jun 21 02:18:50.226959 containerd[1518]: time="2025-06-21T02:18:50.226898290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c8dcd6dd-dk8jf,Uid:90b0c939-56e9-4180-b51d-ba8dc6b8a7e5,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:50.227345 containerd[1518]: time="2025-06-21T02:18:50.227317455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5477c4f559-kt7q2,Uid:f361e498-d382-4c40-85d8-b8697e1e11f0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:50.369083 systemd-networkd[1439]: calie9f6eadf01e: Link UP Jun 21 02:18:50.369566 systemd-networkd[1439]: calie9f6eadf01e: Gained carrier Jun 21 02:18:50.394233 kubelet[2659]: I0621 02:18:50.393645 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-96cf778f5-7t9mf" podStartSLOduration=3.574250442 podStartE2EDuration="6.393625123s" podCreationTimestamp="2025-06-21 02:18:44 +0000 UTC" firstStartedPulling="2025-06-21 02:18:45.205407973 +0000 UTC m=+35.074584892" lastFinishedPulling="2025-06-21 02:18:48.024782654 +0000 UTC m=+37.893959573" observedRunningTime="2025-06-21 02:18:48.38737446 +0000 UTC m=+38.256551379" watchObservedRunningTime="2025-06-21 02:18:50.393625123 +0000 UTC m=+40.262802042" Jun 21 02:18:50.398583 containerd[1518]: 2025-06-21 02:18:50.258 [INFO][4243] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:50.398583 containerd[1518]: 2025-06-21 02:18:50.276 [INFO][4243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0 calico-apiserver-5477c4f559- calico-apiserver f361e498-d382-4c40-85d8-b8697e1e11f0 825 0 2025-06-21 02:18:27 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5477c4f559 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5477c4f559-kt7q2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9f6eadf01e [] [] }} ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-" Jun 21 02:18:50.398583 containerd[1518]: 2025-06-21 02:18:50.276 [INFO][4243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.398583 containerd[1518]: 2025-06-21 02:18:50.321 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" HandleID="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Workload="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.322 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" HandleID="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Workload="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5477c4f559-kt7q2", "timestamp":"2025-06-21 02:18:50.321957169 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.322 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.322 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.322 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.332 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" host="localhost" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.335 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.340 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.342 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.345 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:50.398763 containerd[1518]: 2025-06-21 02:18:50.347 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" host="localhost" Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.349 [INFO][4263] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5 Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.354 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" host="localhost" Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.359 [INFO][4263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" host="localhost" Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.359 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" host="localhost" Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.361 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:18:50.398957 containerd[1518]: 2025-06-21 02:18:50.361 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" HandleID="k8s-pod-network.9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Workload="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.399084 containerd[1518]: 2025-06-21 02:18:50.366 [INFO][4243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0", GenerateName:"calico-apiserver-5477c4f559-", Namespace:"calico-apiserver", SelfLink:"", UID:"f361e498-d382-4c40-85d8-b8697e1e11f0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5477c4f559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5477c4f559-kt7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f6eadf01e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:50.399131 containerd[1518]: 2025-06-21 02:18:50.366 [INFO][4243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.399131 containerd[1518]: 2025-06-21 02:18:50.366 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9f6eadf01e ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.399131 containerd[1518]: 2025-06-21 02:18:50.370 [INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.399185 containerd[1518]: 2025-06-21 02:18:50.371 [INFO][4243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0", GenerateName:"calico-apiserver-5477c4f559-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"f361e498-d382-4c40-85d8-b8697e1e11f0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5477c4f559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5", Pod:"calico-apiserver-5477c4f559-kt7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f6eadf01e", MAC:"9e:02:9f:6f:6c:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:50.399248 containerd[1518]: 2025-06-21 02:18:50.393 [INFO][4243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" Namespace="calico-apiserver" Pod="calico-apiserver-5477c4f559-kt7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5477c4f559--kt7q2-eth0" Jun 21 02:18:50.431776 containerd[1518]: time="2025-06-21T02:18:50.431727347Z" level=info msg="connecting to shim 9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5" address="unix:///run/containerd/s/35de6a1502b4c291e10d7d6ae3c0a14aae6703e192170bae5c29d8d541fdaeb7" namespace=k8s.io protocol=ttrpc 
version=3 Jun 21 02:18:50.474409 systemd[1]: Started cri-containerd-9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5.scope - libcontainer container 9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5. Jun 21 02:18:50.477953 systemd-networkd[1439]: cali07fca302f25: Link UP Jun 21 02:18:50.480767 systemd-networkd[1439]: cali07fca302f25: Gained carrier Jun 21 02:18:50.494472 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:50.499301 containerd[1518]: 2025-06-21 02:18:50.249 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:50.499301 containerd[1518]: 2025-06-21 02:18:50.274 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0 calico-kube-controllers-79c8dcd6dd- calico-system 90b0c939-56e9-4180-b51d-ba8dc6b8a7e5 829 0 2025-06-21 02:18:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79c8dcd6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79c8dcd6dd-dk8jf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali07fca302f25 [] [] }} ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-" Jun 21 02:18:50.499301 containerd[1518]: 2025-06-21 02:18:50.274 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499301 containerd[1518]: 2025-06-21 02:18:50.323 [INFO][4261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" HandleID="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Workload="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.323 [INFO][4261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" HandleID="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Workload="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79c8dcd6dd-dk8jf", "timestamp":"2025-06-21 02:18:50.323176104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.323 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.360 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.360 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.434 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" host="localhost" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.448 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.452 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.456 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.459 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:50.499473 containerd[1518]: 2025-06-21 02:18:50.459 [INFO][4261] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" host="localhost" Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.461 [INFO][4261] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.466 [INFO][4261] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" host="localhost" Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.471 [INFO][4261] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" host="localhost" Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.471 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" host="localhost" Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.471 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:50.499673 containerd[1518]: 2025-06-21 02:18:50.471 [INFO][4261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" HandleID="k8s-pod-network.6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Workload="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499780 containerd[1518]: 2025-06-21 02:18:50.473 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0", GenerateName:"calico-kube-controllers-79c8dcd6dd-", Namespace:"calico-system", SelfLink:"", UID:"90b0c939-56e9-4180-b51d-ba8dc6b8a7e5", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c8dcd6dd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79c8dcd6dd-dk8jf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali07fca302f25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:50.499828 containerd[1518]: 2025-06-21 02:18:50.473 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499828 containerd[1518]: 2025-06-21 02:18:50.473 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07fca302f25 ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499828 containerd[1518]: 2025-06-21 02:18:50.480 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.499882 containerd[1518]: 
2025-06-21 02:18:50.480 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0", GenerateName:"calico-kube-controllers-79c8dcd6dd-", Namespace:"calico-system", SelfLink:"", UID:"90b0c939-56e9-4180-b51d-ba8dc6b8a7e5", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c8dcd6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f", Pod:"calico-kube-controllers-79c8dcd6dd-dk8jf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali07fca302f25", MAC:"22:6f:53:50:a8:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:50.499927 containerd[1518]: 
2025-06-21 02:18:50.490 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" Namespace="calico-system" Pod="calico-kube-controllers-79c8dcd6dd-dk8jf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79c8dcd6dd--dk8jf-eth0" Jun 21 02:18:50.521600 containerd[1518]: time="2025-06-21T02:18:50.521553642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5477c4f559-kt7q2,Uid:f361e498-d382-4c40-85d8-b8697e1e11f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5\"" Jun 21 02:18:50.523371 containerd[1518]: time="2025-06-21T02:18:50.523338624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 02:18:50.535539 containerd[1518]: time="2025-06-21T02:18:50.535491852Z" level=info msg="connecting to shim 6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f" address="unix:///run/containerd/s/6e3be3bdba4203e7f461589d18275524396de9d0ad9d71222ed48024ddc4761c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:50.558353 systemd[1]: Started cri-containerd-6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f.scope - libcontainer container 6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f. 
Jun 21 02:18:50.569258 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:50.593676 containerd[1518]: time="2025-06-21T02:18:50.593600641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c8dcd6dd-dk8jf,Uid:90b0c939-56e9-4180-b51d-ba8dc6b8a7e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f\"" Jun 21 02:18:51.226320 kubelet[2659]: E0621 02:18:51.226255 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:51.227275 containerd[1518]: time="2025-06-21T02:18:51.227230074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2n26r,Uid:9eb38543-6254-4688-8d4b-892c4068ec20,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:51.227865 containerd[1518]: time="2025-06-21T02:18:51.227287554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl99q,Uid:4eff7017-d74b-4f5f-ae18-414a858ebf5d,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:51.227865 containerd[1518]: time="2025-06-21T02:18:51.227237354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-9qsx8,Uid:13d8fede-0173-472e-b7e7-ee6276d45c03,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:51.355856 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:35642.service - OpenSSH per-connection server daemon (10.0.0.1:35642). 
Jun 21 02:18:51.385913 systemd-networkd[1439]: cali03efbc120f6: Link UP Jun 21 02:18:51.389169 systemd-networkd[1439]: cali03efbc120f6: Gained carrier Jun 21 02:18:51.406829 containerd[1518]: 2025-06-21 02:18:51.268 [INFO][4409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:51.406829 containerd[1518]: 2025-06-21 02:18:51.297 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2n26r-eth0 csi-node-driver- calico-system 9eb38543-6254-4688-8d4b-892c4068ec20 700 0 2025-06-21 02:18:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2n26r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali03efbc120f6 [] [] }} ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-" Jun 21 02:18:51.406829 containerd[1518]: 2025-06-21 02:18:51.297 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.406829 containerd[1518]: 2025-06-21 02:18:51.340 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" HandleID="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Workload="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.407678 containerd[1518]: 
2025-06-21 02:18:51.341 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" HandleID="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Workload="localhost-k8s-csi--node--driver--2n26r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2n26r", "timestamp":"2025-06-21 02:18:51.340943394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.341 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.341 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.341 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.351 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" host="localhost" Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.359 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.363 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.365 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.367 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.407678 containerd[1518]: 2025-06-21 02:18:51.367 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" host="localhost" Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.369 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135 Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.373 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" host="localhost" Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.380 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" host="localhost" Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.380 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" host="localhost" Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.380 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:51.407930 containerd[1518]: 2025-06-21 02:18:51.380 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" HandleID="k8s-pod-network.321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Workload="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.408037 containerd[1518]: 2025-06-21 02:18:51.382 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2n26r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9eb38543-6254-4688-8d4b-892c4068ec20", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2n26r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03efbc120f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.408087 containerd[1518]: 2025-06-21 02:18:51.382 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.408087 containerd[1518]: 2025-06-21 02:18:51.382 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03efbc120f6 ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.408087 containerd[1518]: 2025-06-21 02:18:51.390 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.408142 containerd[1518]: 2025-06-21 02:18:51.390 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" 
Namespace="calico-system" Pod="csi-node-driver-2n26r" WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2n26r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9eb38543-6254-4688-8d4b-892c4068ec20", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135", Pod:"csi-node-driver-2n26r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03efbc120f6", MAC:"7a:fe:5d:10:b0:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.408187 containerd[1518]: 2025-06-21 02:18:51.405 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" Namespace="calico-system" Pod="csi-node-driver-2n26r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2n26r-eth0" Jun 21 02:18:51.426115 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w Jun 21 02:18:51.438430 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:18:51.443454 systemd-logind[1493]: New session 10 of user core. Jun 21 02:18:51.452434 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 02:18:51.521755 systemd-networkd[1439]: cali07fca302f25: Gained IPv6LL Jun 21 02:18:51.527713 systemd-networkd[1439]: cali7198c1761f2: Link UP Jun 21 02:18:51.528629 systemd-networkd[1439]: cali7198c1761f2: Gained carrier Jun 21 02:18:51.555042 containerd[1518]: 2025-06-21 02:18:51.265 [INFO][4425] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:51.555042 containerd[1518]: 2025-06-21 02:18:51.296 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0 calico-apiserver-6c75966784- calico-apiserver 13d8fede-0173-472e-b7e7-ee6276d45c03 830 0 2025-06-21 02:18:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c75966784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c75966784-9qsx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7198c1761f2 [] [] }} ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-" Jun 21 02:18:51.555042 containerd[1518]: 2025-06-21 02:18:51.296 [INFO][4425] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.555042 containerd[1518]: 2025-06-21 02:18:51.344 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.344 [INFO][4453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c75966784-9qsx8", "timestamp":"2025-06-21 02:18:51.34403091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.344 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.380 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.381 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.452 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" host="localhost" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.459 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.478 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.480 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.483 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.555400 containerd[1518]: 2025-06-21 02:18:51.483 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" host="localhost" Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.487 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432 Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.491 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" host="localhost" Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.513 [INFO][4453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" host="localhost" Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.513 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" host="localhost" Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.515 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:51.556154 containerd[1518]: 2025-06-21 02:18:51.516 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.556518 containerd[1518]: 2025-06-21 02:18:51.520 [INFO][4425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0", GenerateName:"calico-apiserver-6c75966784-", Namespace:"calico-apiserver", SelfLink:"", UID:"13d8fede-0173-472e-b7e7-ee6276d45c03", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c75966784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c75966784-9qsx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198c1761f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.556726 containerd[1518]: 2025-06-21 02:18:51.520 [INFO][4425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.556726 containerd[1518]: 2025-06-21 02:18:51.520 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7198c1761f2 ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.556726 containerd[1518]: 2025-06-21 02:18:51.528 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.556862 containerd[1518]: 2025-06-21 02:18:51.529 [INFO][4425] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0", GenerateName:"calico-apiserver-6c75966784-", Namespace:"calico-apiserver", SelfLink:"", UID:"13d8fede-0173-472e-b7e7-ee6276d45c03", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c75966784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432", Pod:"calico-apiserver-6c75966784-9qsx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198c1761f2", MAC:"b6:89:d2:04:9f:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.557026 containerd[1518]: 2025-06-21 02:18:51.549 [INFO][4425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-9qsx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0" Jun 21 02:18:51.581842 containerd[1518]: time="2025-06-21T02:18:51.581795914Z" level=info msg="connecting to shim 321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135" address="unix:///run/containerd/s/471e0135ca57898b492409850f20df855d994f1520b9ee08852be15c9b96843b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:51.626363 systemd[1]: Started cri-containerd-321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135.scope - libcontainer container 321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135. Jun 21 02:18:51.638748 systemd-networkd[1439]: caliaa96e7730db: Link UP Jun 21 02:18:51.640631 systemd-networkd[1439]: caliaa96e7730db: Gained carrier Jun 21 02:18:51.648314 systemd-networkd[1439]: calie9f6eadf01e: Gained IPv6LL Jun 21 02:18:51.659414 containerd[1518]: time="2025-06-21T02:18:51.659372442Z" level=info msg="connecting to shim 1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" address="unix:///run/containerd/s/1dfa4c062ece7dc187aee978bb1a704598f76c02f24f72442a696f0c782dda74" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:51.660583 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:51.661384 containerd[1518]: 2025-06-21 02:18:51.285 [INFO][4415] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:51.661384 containerd[1518]: 2025-06-21 02:18:51.312 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--zl99q-eth0 coredns-668d6bf9bc- kube-system 4eff7017-d74b-4f5f-ae18-414a858ebf5d 819 0 2025-06-21 02:18:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-zl99q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa96e7730db [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-" Jun 21 02:18:51.661384 containerd[1518]: 2025-06-21 02:18:51.312 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.661384 containerd[1518]: 2025-06-21 02:18:51.353 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" HandleID="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Workload="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.353 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" HandleID="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Workload="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058ca50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-zl99q", "timestamp":"2025-06-21 02:18:51.353238741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.353 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.515 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.516 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.552 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" host="localhost" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.563 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.581 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.584 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.588 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:51.661549 containerd[1518]: 2025-06-21 02:18:51.588 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" host="localhost" Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.591 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.606 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" host="localhost" Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.628 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" host="localhost" Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.628 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" host="localhost" Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.628 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:51.662444 containerd[1518]: 2025-06-21 02:18:51.628 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" HandleID="k8s-pod-network.39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Workload="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.662596 containerd[1518]: 2025-06-21 02:18:51.635 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zl99q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4eff7017-d74b-4f5f-ae18-414a858ebf5d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-zl99q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96e7730db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.662680 containerd[1518]: 2025-06-21 02:18:51.635 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.662680 containerd[1518]: 2025-06-21 02:18:51.635 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa96e7730db ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 
02:18:51.662680 containerd[1518]: 2025-06-21 02:18:51.641 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.662981 containerd[1518]: 2025-06-21 02:18:51.642 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zl99q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4eff7017-d74b-4f5f-ae18-414a858ebf5d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e", Pod:"coredns-668d6bf9bc-zl99q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96e7730db", MAC:"2a:46:c2:f3:46:87", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:51.662981 containerd[1518]: 2025-06-21 02:18:51.654 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl99q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zl99q-eth0" Jun 21 02:18:51.690463 systemd[1]: Started cri-containerd-1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432.scope - libcontainer container 1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432. Jun 21 02:18:51.705496 containerd[1518]: time="2025-06-21T02:18:51.705396752Z" level=info msg="connecting to shim 39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e" address="unix:///run/containerd/s/220bcd296faed9446cb62bb4ba05be456ed6815dc1410d34c35c6337bafcf4ac" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:51.713359 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:51.736398 systemd[1]: Started cri-containerd-39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e.scope - libcontainer container 39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e. 
Jun 21 02:18:51.747219 containerd[1518]: time="2025-06-21T02:18:51.747147652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2n26r,Uid:9eb38543-6254-4688-8d4b-892c4068ec20,Namespace:calico-system,Attempt:0,} returns sandbox id \"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135\"" Jun 21 02:18:51.756768 containerd[1518]: time="2025-06-21T02:18:51.756733726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-9qsx8,Uid:13d8fede-0173-472e-b7e7-ee6276d45c03,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\"" Jun 21 02:18:51.757391 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:51.783541 sshd[4497]: Connection closed by 10.0.0.1 port 35642 Jun 21 02:18:51.783841 containerd[1518]: time="2025-06-21T02:18:51.783808050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl99q,Uid:4eff7017-d74b-4f5f-ae18-414a858ebf5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e\"" Jun 21 02:18:51.784073 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Jun 21 02:18:51.785941 kubelet[2659]: E0621 02:18:51.785193 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:51.789768 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:35642.service: Deactivated successfully. Jun 21 02:18:51.791694 containerd[1518]: time="2025-06-21T02:18:51.791652424Z" level=info msg="CreateContainer within sandbox \"39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 02:18:51.793125 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 21 02:18:51.794588 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jun 21 02:18:51.795976 systemd-logind[1493]: Removed session 10. Jun 21 02:18:51.799621 containerd[1518]: time="2025-06-21T02:18:51.799577599Z" level=info msg="Container 950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:51.804322 containerd[1518]: time="2025-06-21T02:18:51.804287335Z" level=info msg="CreateContainer within sandbox \"39a13f7105e79ea60aec0b2f159a501323bcd109e817703c60eec999b2c00e4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7\"" Jun 21 02:18:51.804788 containerd[1518]: time="2025-06-21T02:18:51.804761701Z" level=info msg="StartContainer for \"950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7\"" Jun 21 02:18:51.806192 containerd[1518]: time="2025-06-21T02:18:51.806163038Z" level=info msg="connecting to shim 950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7" address="unix:///run/containerd/s/220bcd296faed9446cb62bb4ba05be456ed6815dc1410d34c35c6337bafcf4ac" protocol=ttrpc version=3 Jun 21 02:18:51.834381 systemd[1]: Started cri-containerd-950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7.scope - libcontainer container 950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7. 
Jun 21 02:18:51.873486 containerd[1518]: time="2025-06-21T02:18:51.873451322Z" level=info msg="StartContainer for \"950fc69daca70be8cc0bcff212cadfc581e05706fb9988f71a12f3bad1f1e8c7\" returns successfully" Jun 21 02:18:52.229519 containerd[1518]: time="2025-06-21T02:18:52.229427730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-j6rw9,Uid:e5475b0a-595a-4d8e-8141-0513f82a526c,Namespace:calico-system,Attempt:0,}" Jun 21 02:18:52.357164 systemd-networkd[1439]: cali49048b6924e: Link UP Jun 21 02:18:52.357635 systemd-networkd[1439]: cali49048b6924e: Gained carrier Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.265 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.278 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0 goldmane-5bd85449d4- calico-system e5475b0a-595a-4d8e-8141-0513f82a526c 827 0 2025-06-21 02:18:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5bd85449d4-j6rw9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali49048b6924e [] [] }} ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.278 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.373318 
containerd[1518]: 2025-06-21 02:18:52.316 [INFO][4737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" HandleID="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Workload="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.316 [INFO][4737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" HandleID="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Workload="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5bd85449d4-j6rw9", "timestamp":"2025-06-21 02:18:52.316468952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.316 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.316 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.316 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.327 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.332 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.336 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.337 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.340 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.340 [INFO][4737] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.342 [INFO][4737] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8 Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.345 [INFO][4737] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.352 [INFO][4737] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.352 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" host="localhost" Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.352 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:52.373318 containerd[1518]: 2025-06-21 02:18:52.352 [INFO][4737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" HandleID="k8s-pod-network.2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Workload="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.355 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"e5475b0a-595a-4d8e-8141-0513f82a526c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5bd85449d4-j6rw9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali49048b6924e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.355 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.355 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49048b6924e ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.356 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.358 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" 
WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"e5475b0a-595a-4d8e-8141-0513f82a526c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8", Pod:"goldmane-5bd85449d4-j6rw9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali49048b6924e", MAC:"72:05:c1:dc:49:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:52.374104 containerd[1518]: 2025-06-21 02:18:52.371 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" Namespace="calico-system" Pod="goldmane-5bd85449d4-j6rw9" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--j6rw9-eth0" Jun 21 02:18:52.392527 kubelet[2659]: E0621 02:18:52.392500 2659 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:52.405244 containerd[1518]: time="2025-06-21T02:18:52.404687588Z" level=info msg="connecting to shim 2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8" address="unix:///run/containerd/s/5243e08a88736ec752438d4522d55057347408cc351eb2467b66365982008f49" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:52.409161 kubelet[2659]: I0621 02:18:52.409105 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zl99q" podStartSLOduration=35.40908628 podStartE2EDuration="35.40908628s" podCreationTimestamp="2025-06-21 02:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:52.409027919 +0000 UTC m=+42.278204838" watchObservedRunningTime="2025-06-21 02:18:52.40908628 +0000 UTC m=+42.278263199" Jun 21 02:18:52.458426 systemd[1]: Started cri-containerd-2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8.scope - libcontainer container 2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8. 
Jun 21 02:18:52.474258 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:52.517297 containerd[1518]: time="2025-06-21T02:18:52.517173669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-j6rw9,Uid:e5475b0a-595a-4d8e-8141-0513f82a526c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8\"" Jun 21 02:18:52.608342 systemd-networkd[1439]: cali7198c1761f2: Gained IPv6LL Jun 21 02:18:52.666258 containerd[1518]: time="2025-06-21T02:18:52.665812574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:52.667110 containerd[1518]: time="2025-06-21T02:18:52.667080029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 21 02:18:52.668271 containerd[1518]: time="2025-06-21T02:18:52.668236443Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:52.672361 systemd-networkd[1439]: cali03efbc120f6: Gained IPv6LL Jun 21 02:18:52.672463 containerd[1518]: time="2025-06-21T02:18:52.672353611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:52.673151 containerd[1518]: time="2025-06-21T02:18:52.672953618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 2.149578874s" Jun 21 02:18:52.673151 containerd[1518]: time="2025-06-21T02:18:52.672985619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 21 02:18:52.675101 containerd[1518]: time="2025-06-21T02:18:52.675067403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 02:18:52.677326 containerd[1518]: time="2025-06-21T02:18:52.677282509Z" level=info msg="CreateContainer within sandbox \"9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:18:52.688566 containerd[1518]: time="2025-06-21T02:18:52.687811033Z" level=info msg="Container 37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:52.696721 containerd[1518]: time="2025-06-21T02:18:52.696676937Z" level=info msg="CreateContainer within sandbox \"9692d504bd84281a1b1c0e1b7139fb9e0193e33e09a28b3bacf9dcc3d43f60b5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1\"" Jun 21 02:18:52.697405 containerd[1518]: time="2025-06-21T02:18:52.697373985Z" level=info msg="StartContainer for \"37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1\"" Jun 21 02:18:52.700606 containerd[1518]: time="2025-06-21T02:18:52.700563222Z" level=info msg="connecting to shim 37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1" address="unix:///run/containerd/s/35de6a1502b4c291e10d7d6ae3c0a14aae6703e192170bae5c29d8d541fdaeb7" protocol=ttrpc version=3 Jun 21 02:18:52.725370 systemd[1]: Started cri-containerd-37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1.scope - 
libcontainer container 37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1. Jun 21 02:18:52.814694 containerd[1518]: time="2025-06-21T02:18:52.814591841Z" level=info msg="StartContainer for \"37d8e49d17eac78669bfd1a44ca15ba2c37991146210bda63d2e996f385c32e1\" returns successfully" Jun 21 02:18:53.414171 kubelet[2659]: E0621 02:18:53.411221 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:53.435794 kubelet[2659]: I0621 02:18:53.433311 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5477c4f559-kt7q2" podStartSLOduration=24.281371596 podStartE2EDuration="26.433292138s" podCreationTimestamp="2025-06-21 02:18:27 +0000 UTC" firstStartedPulling="2025-06-21 02:18:50.522949579 +0000 UTC m=+40.392126498" lastFinishedPulling="2025-06-21 02:18:52.674870081 +0000 UTC m=+42.544047040" observedRunningTime="2025-06-21 02:18:53.432198486 +0000 UTC m=+43.301375405" watchObservedRunningTime="2025-06-21 02:18:53.433292138 +0000 UTC m=+43.302469057" Jun 21 02:18:53.696459 systemd-networkd[1439]: caliaa96e7730db: Gained IPv6LL Jun 21 02:18:54.252150 containerd[1518]: time="2025-06-21T02:18:54.252083818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:54.253077 containerd[1518]: time="2025-06-21T02:18:54.253024348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 21 02:18:54.254054 containerd[1518]: time="2025-06-21T02:18:54.254012120Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:54.255960 containerd[1518]: 
time="2025-06-21T02:18:54.255921061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:54.256556 containerd[1518]: time="2025-06-21T02:18:54.256522068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 1.581380424s" Jun 21 02:18:54.256607 containerd[1518]: time="2025-06-21T02:18:54.256554548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 21 02:18:54.260770 containerd[1518]: time="2025-06-21T02:18:54.260505153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 02:18:54.268011 containerd[1518]: time="2025-06-21T02:18:54.267956198Z" level=info msg="CreateContainer within sandbox \"6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 02:18:54.279215 containerd[1518]: time="2025-06-21T02:18:54.279154765Z" level=info msg="Container 3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:54.286768 containerd[1518]: time="2025-06-21T02:18:54.286716051Z" level=info msg="CreateContainer within sandbox \"6a3ae7e5ee25d968a6936cb80b74956c031c2a3ed34601e066d589a3b46ede5f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\"" Jun 21 02:18:54.287576 
containerd[1518]: time="2025-06-21T02:18:54.287361498Z" level=info msg="StartContainer for \"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\"" Jun 21 02:18:54.289121 containerd[1518]: time="2025-06-21T02:18:54.288715873Z" level=info msg="connecting to shim 3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3" address="unix:///run/containerd/s/6e3be3bdba4203e7f461589d18275524396de9d0ad9d71222ed48024ddc4761c" protocol=ttrpc version=3 Jun 21 02:18:54.318419 systemd[1]: Started cri-containerd-3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3.scope - libcontainer container 3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3. Jun 21 02:18:54.358050 containerd[1518]: time="2025-06-21T02:18:54.357936619Z" level=info msg="StartContainer for \"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\" returns successfully" Jun 21 02:18:54.400342 systemd-networkd[1439]: cali49048b6924e: Gained IPv6LL Jun 21 02:18:54.421053 kubelet[2659]: E0621 02:18:54.420291 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:54.429537 kubelet[2659]: I0621 02:18:54.428104 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:18:55.194528 containerd[1518]: time="2025-06-21T02:18:55.194469316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:55.195784 containerd[1518]: time="2025-06-21T02:18:55.195746291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 21 02:18:55.196497 containerd[1518]: time="2025-06-21T02:18:55.196458979Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 21 02:18:55.198877 containerd[1518]: time="2025-06-21T02:18:55.198837805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:55.200029 containerd[1518]: time="2025-06-21T02:18:55.199989698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 939.441944ms" Jun 21 02:18:55.200071 containerd[1518]: time="2025-06-21T02:18:55.200024658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 21 02:18:55.201058 containerd[1518]: time="2025-06-21T02:18:55.200936189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 02:18:55.202364 containerd[1518]: time="2025-06-21T02:18:55.202331684Z" level=info msg="CreateContainer within sandbox \"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 02:18:55.209977 containerd[1518]: time="2025-06-21T02:18:55.209941609Z" level=info msg="Container a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:55.220055 containerd[1518]: time="2025-06-21T02:18:55.219864440Z" level=info msg="CreateContainer within sandbox \"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8\"" Jun 21 02:18:55.220467 containerd[1518]: 
time="2025-06-21T02:18:55.220444126Z" level=info msg="StartContainer for \"a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8\"" Jun 21 02:18:55.221835 containerd[1518]: time="2025-06-21T02:18:55.221806782Z" level=info msg="connecting to shim a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8" address="unix:///run/containerd/s/471e0135ca57898b492409850f20df855d994f1520b9ee08852be15c9b96843b" protocol=ttrpc version=3 Jun 21 02:18:55.225902 kubelet[2659]: E0621 02:18:55.225737 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 02:18:55.227227 containerd[1518]: time="2025-06-21T02:18:55.226984999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-8tlmb,Uid:43f29484-ed5e-42bf-89ce-fa67b3c94a12,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:18:55.227227 containerd[1518]: time="2025-06-21T02:18:55.227196202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s44t,Uid:199308d2-e274-40e1-95da-3749d3ca34e0,Namespace:kube-system,Attempt:0,}" Jun 21 02:18:55.252386 systemd[1]: Started cri-containerd-a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8.scope - libcontainer container a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8. 
Jun 21 02:18:55.316046 containerd[1518]: time="2025-06-21T02:18:55.315993833Z" level=info msg="StartContainer for \"a59e1c8493a2b53279145d34ff214e270cf5da15acaa3506fcffb361f14791e8\" returns successfully" Jun 21 02:18:55.425751 kubelet[2659]: I0621 02:18:55.425363 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:18:55.439343 systemd-networkd[1439]: cali44c2bcfc259: Link UP Jun 21 02:18:55.439724 systemd-networkd[1439]: cali44c2bcfc259: Gained carrier Jun 21 02:18:55.448818 containerd[1518]: time="2025-06-21T02:18:55.448699115Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:18:55.451827 containerd[1518]: time="2025-06-21T02:18:55.451031622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 21 02:18:55.453700 kubelet[2659]: I0621 02:18:55.453509 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79c8dcd6dd-dk8jf" podStartSLOduration=20.788862844 podStartE2EDuration="24.453477649s" podCreationTimestamp="2025-06-21 02:18:31 +0000 UTC" firstStartedPulling="2025-06-21 02:18:50.59517814 +0000 UTC m=+40.464355059" lastFinishedPulling="2025-06-21 02:18:54.259792945 +0000 UTC m=+44.128969864" observedRunningTime="2025-06-21 02:18:54.440174512 +0000 UTC m=+44.309351431" watchObservedRunningTime="2025-06-21 02:18:55.453477649 +0000 UTC m=+45.322654568" Jun 21 02:18:55.454545 containerd[1518]: time="2025-06-21T02:18:55.454509500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 253.538031ms" Jun 
21 02:18:55.454618 containerd[1518]: time="2025-06-21T02:18:55.454549701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 21 02:18:55.455752 containerd[1518]: time="2025-06-21T02:18:55.455722074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 02:18:55.457639 containerd[1518]: time="2025-06-21T02:18:55.457608815Z" level=info msg="CreateContainer within sandbox \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.253 [INFO][4977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.272 [INFO][4977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0 calico-apiserver-6c75966784- calico-apiserver 43f29484-ed5e-42bf-89ce-fa67b3c94a12 826 0 2025-06-21 02:18:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c75966784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c75966784-8tlmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44c2bcfc259 [] [] }} ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.272 [INFO][4977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.313 [INFO][5017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" HandleID="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Workload="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.313 [INFO][5017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" HandleID="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Workload="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c75966784-8tlmb", "timestamp":"2025-06-21 02:18:55.313266203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.313 [INFO][5017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.313 [INFO][5017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.313 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.331 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.403 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.407 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.409 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.411 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.411 [INFO][5017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.413 [INFO][5017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9 Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.417 [INFO][5017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" host="localhost" Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:55.458410 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" HandleID="k8s-pod-network.2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Workload="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.433 [INFO][4977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0", GenerateName:"calico-apiserver-6c75966784-", Namespace:"calico-apiserver", SelfLink:"", UID:"43f29484-ed5e-42bf-89ce-fa67b3c94a12", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c75966784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c75966784-8tlmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44c2bcfc259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.433 [INFO][4977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.433 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44c2bcfc259 ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.440 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.441 [INFO][4977] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0", GenerateName:"calico-apiserver-6c75966784-", Namespace:"calico-apiserver", SelfLink:"", UID:"43f29484-ed5e-42bf-89ce-fa67b3c94a12", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c75966784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9", Pod:"calico-apiserver-6c75966784-8tlmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44c2bcfc259", MAC:"fa:4d:1b:6f:c2:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:55.458876 containerd[1518]: 2025-06-21 02:18:55.453 [INFO][4977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" Namespace="calico-apiserver" Pod="calico-apiserver-6c75966784-8tlmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c75966784--8tlmb-eth0" Jun 21 02:18:55.466960 containerd[1518]: time="2025-06-21T02:18:55.466915959Z" level=info msg="Container 86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:55.482971 containerd[1518]: time="2025-06-21T02:18:55.482917018Z" level=info msg="CreateContainer within sandbox \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\"" Jun 21 02:18:55.483602 containerd[1518]: time="2025-06-21T02:18:55.483557505Z" level=info msg="StartContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\"" Jun 21 02:18:55.485354 containerd[1518]: time="2025-06-21T02:18:55.485317764Z" level=info msg="connecting to shim 86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21" address="unix:///run/containerd/s/1dfa4c062ece7dc187aee978bb1a704598f76c02f24f72442a696f0c782dda74" protocol=ttrpc version=3 Jun 21 02:18:55.497634 containerd[1518]: time="2025-06-21T02:18:55.497575341Z" level=info msg="connecting to shim 2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9" address="unix:///run/containerd/s/b4b14f8578d1239f63d7facef85e00d01d3f940334608732f5e1c35bb66169a8" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:55.507379 systemd[1]: Started cri-containerd-86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21.scope - libcontainer container 86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21. 
Jun 21 02:18:55.525357 systemd[1]: Started cri-containerd-2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9.scope - libcontainer container 2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9. Jun 21 02:18:55.539003 systemd-networkd[1439]: calibdb4e38efbf: Link UP Jun 21 02:18:55.540214 systemd-networkd[1439]: calibdb4e38efbf: Gained carrier Jun 21 02:18:55.547724 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.262 [INFO][4991] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.278 [INFO][4991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8s44t-eth0 coredns-668d6bf9bc- kube-system 199308d2-e274-40e1-95da-3749d3ca34e0 823 0 2025-06-21 02:18:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8s44t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibdb4e38efbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.279 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.319 [INFO][5024] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" HandleID="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Workload="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.319 [INFO][5024] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" HandleID="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Workload="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a24c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8s44t", "timestamp":"2025-06-21 02:18:55.319721755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.320 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.426 [INFO][5024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.439 [INFO][5024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.504 [INFO][5024] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.510 [INFO][5024] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.512 [INFO][5024] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.515 [INFO][5024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.515 [INFO][5024] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.519 [INFO][5024] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540 Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.523 [INFO][5024] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.531 [INFO][5024] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.531 [INFO][5024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" host="localhost" Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.531 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 02:18:55.559321 containerd[1518]: 2025-06-21 02:18:55.531 [INFO][5024] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" HandleID="k8s-pod-network.d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Workload="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.533 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8s44t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"199308d2-e274-40e1-95da-3749d3ca34e0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8s44t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdb4e38efbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.533 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.533 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdb4e38efbf ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.540 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.540 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8s44t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"199308d2-e274-40e1-95da-3749d3ca34e0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540", Pod:"coredns-668d6bf9bc-8s44t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdb4e38efbf", MAC:"72:05:dc:82:94:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:18:55.559954 containerd[1518]: 2025-06-21 02:18:55.555 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" Namespace="kube-system" Pod="coredns-668d6bf9bc-8s44t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8s44t-eth0" Jun 21 02:18:55.576548 containerd[1518]: time="2025-06-21T02:18:55.576512543Z" level=info msg="StartContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" returns successfully" Jun 21 02:18:55.587998 containerd[1518]: time="2025-06-21T02:18:55.587761108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c75966784-8tlmb,Uid:43f29484-ed5e-42bf-89ce-fa67b3c94a12,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9\"" Jun 21 02:18:55.588990 containerd[1518]: time="2025-06-21T02:18:55.588959362Z" level=info msg="connecting to shim d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540" address="unix:///run/containerd/s/69ab1ede8fe845027e9943f6c4f6640d8a8ce936dfe3aa203e0f94cf4f3acb05" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:18:55.590856 containerd[1518]: time="2025-06-21T02:18:55.590807702Z" level=info msg="CreateContainer within sandbox \"2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:18:55.602438 containerd[1518]: time="2025-06-21T02:18:55.602397872Z" level=info msg="Container de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:18:55.618685 containerd[1518]: 
time="2025-06-21T02:18:55.618410291Z" level=info msg="CreateContainer within sandbox \"2aebdf04744136efb76c8d3eeff626c2c21dfbbd8b8117f1a72903697cc25af9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7\"" Jun 21 02:18:55.619338 containerd[1518]: time="2025-06-21T02:18:55.619306621Z" level=info msg="StartContainer for \"de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7\"" Jun 21 02:18:55.620857 containerd[1518]: time="2025-06-21T02:18:55.620823438Z" level=info msg="connecting to shim de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7" address="unix:///run/containerd/s/b4b14f8578d1239f63d7facef85e00d01d3f940334608732f5e1c35bb66169a8" protocol=ttrpc version=3 Jun 21 02:18:55.628373 systemd[1]: Started cri-containerd-d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540.scope - libcontainer container d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540. Jun 21 02:18:55.643382 systemd[1]: Started cri-containerd-de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7.scope - libcontainer container de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7. 
Jun 21 02:18:55.648752 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 21 02:18:55.685835 containerd[1518]: time="2025-06-21T02:18:55.685798443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8s44t,Uid:199308d2-e274-40e1-95da-3749d3ca34e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540\""
Jun 21 02:18:55.686737 kubelet[2659]: E0621 02:18:55.686516 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:55.689037 containerd[1518]: time="2025-06-21T02:18:55.688974199Z" level=info msg="CreateContainer within sandbox \"d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 21 02:18:55.704034 containerd[1518]: time="2025-06-21T02:18:55.702930195Z" level=info msg="Container 7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03: CDI devices from CRI Config.CDIDevices: []"
Jun 21 02:18:55.704308 containerd[1518]: time="2025-06-21T02:18:55.704185049Z" level=info msg="StartContainer for \"de422bb8adea188b4b8a50826634668f38c10c694fc446beda0692a605e638e7\" returns successfully"
Jun 21 02:18:55.712840 containerd[1518]: time="2025-06-21T02:18:55.712777025Z" level=info msg="CreateContainer within sandbox \"d3864c3ce216a106ff5de7733850ecee03fa9bb980f75756bc86f37f83d34540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03\""
Jun 21 02:18:55.713721 containerd[1518]: time="2025-06-21T02:18:55.713674475Z" level=info msg="StartContainer for \"7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03\""
Jun 21 02:18:55.716489 containerd[1518]: time="2025-06-21T02:18:55.716454826Z" level=info msg="connecting to shim 7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03" address="unix:///run/containerd/s/69ab1ede8fe845027e9943f6c4f6640d8a8ce936dfe3aa203e0f94cf4f3acb05" protocol=ttrpc version=3
Jun 21 02:18:55.762380 systemd[1]: Started cri-containerd-7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03.scope - libcontainer container 7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03.
Jun 21 02:18:55.792079 containerd[1518]: time="2025-06-21T02:18:55.792040790Z" level=info msg="StartContainer for \"7a3b0399c8184236b672a08bc8de1a50ec3c6590c45b34f6345c97b731b76a03\" returns successfully"
Jun 21 02:18:56.431050 kubelet[2659]: E0621 02:18:56.431020 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:56.463001 kubelet[2659]: I0621 02:18:56.462515 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8s44t" podStartSLOduration=39.462286037 podStartE2EDuration="39.462286037s" podCreationTimestamp="2025-06-21 02:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:56.455609884 +0000 UTC m=+46.324786843" watchObservedRunningTime="2025-06-21 02:18:56.462286037 +0000 UTC m=+46.331462956"
Jun 21 02:18:56.493756 kubelet[2659]: I0621 02:18:56.493676 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c75966784-8tlmb" podStartSLOduration=29.493659062 podStartE2EDuration="29.493659062s" podCreationTimestamp="2025-06-21 02:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:18:56.491817002 +0000 UTC m=+46.360993921" watchObservedRunningTime="2025-06-21 02:18:56.493659062 +0000 UTC m=+46.362836061"
Jun 21 02:18:56.512136 kubelet[2659]: I0621 02:18:56.512071 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c75966784-9qsx8" podStartSLOduration=25.815438626 podStartE2EDuration="29.512052625s" podCreationTimestamp="2025-06-21 02:18:27 +0000 UTC" firstStartedPulling="2025-06-21 02:18:51.75867667 +0000 UTC m=+41.627853549" lastFinishedPulling="2025-06-21 02:18:55.455290629 +0000 UTC m=+45.324467548" observedRunningTime="2025-06-21 02:18:56.510495328 +0000 UTC m=+46.379672247" watchObservedRunningTime="2025-06-21 02:18:56.512052625 +0000 UTC m=+46.381229544"
Jun 21 02:18:56.804173 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:50296.service - OpenSSH per-connection server daemon (10.0.0.1:50296).
Jun 21 02:18:56.833108 systemd-networkd[1439]: cali44c2bcfc259: Gained IPv6LL
Jun 21 02:18:56.888931 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 50296 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:18:56.891096 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:18:56.898183 systemd-logind[1493]: New session 11 of user core.
Jun 21 02:18:56.906409 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 21 02:18:57.120619 kubelet[2659]: I0621 02:18:57.120434 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:18:57.155106 systemd-networkd[1439]: calibdb4e38efbf: Gained IPv6LL
Jun 21 02:18:57.207402 containerd[1518]: time="2025-06-21T02:18:57.207360560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\" id:\"65a55d56c876aa3481c9d27d3807377b999252e5f8d4ab55386ae6cf21475550\" pid:5340 exited_at:{seconds:1750472337 nanos:205262017}"
Jun 21 02:18:57.226483 sshd[5291]: Connection closed by 10.0.0.1 port 50296
Jun 21 02:18:57.226799 sshd-session[5289]: pam_unix(sshd:session): session closed for user core
Jun 21 02:18:57.231176 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:50296.service: Deactivated successfully.
Jun 21 02:18:57.237377 systemd[1]: session-11.scope: Deactivated successfully.
Jun 21 02:18:57.243628 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit.
Jun 21 02:18:57.247705 systemd-logind[1493]: Removed session 11.
Jun 21 02:18:57.287974 containerd[1518]: time="2025-06-21T02:18:57.287940434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\" id:\"dbb3e9c0184b27f6748ab9148af8b19ef608553d3f1662a026d2b25dcee039f1\" pid:5366 exited_at:{seconds:1750472337 nanos:287469789}"
Jun 21 02:18:57.443324 kubelet[2659]: E0621 02:18:57.443286 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:57.919617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227992373.mount: Deactivated successfully.
Jun 21 02:18:58.445089 kubelet[2659]: I0621 02:18:58.445046 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:18:58.445668 kubelet[2659]: E0621 02:18:58.445643 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:58.507390 containerd[1518]: time="2025-06-21T02:18:58.507269339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:18:58.508226 containerd[1518]: time="2025-06-21T02:18:58.508157868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718"
Jun 21 02:18:58.509309 containerd[1518]: time="2025-06-21T02:18:58.509269480Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:18:58.511260 containerd[1518]: time="2025-06-21T02:18:58.511226141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:18:58.512307 containerd[1518]: time="2025-06-21T02:18:58.512279792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 3.056522918s"
Jun 21 02:18:58.512342 containerd[1518]: time="2025-06-21T02:18:58.512309353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\""
Jun 21 02:18:58.515022 containerd[1518]: time="2025-06-21T02:18:58.514995101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\""
Jun 21 02:18:58.516259 containerd[1518]: time="2025-06-21T02:18:58.516227155Z" level=info msg="CreateContainer within sandbox \"2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jun 21 02:18:58.523095 containerd[1518]: time="2025-06-21T02:18:58.523055588Z" level=info msg="Container ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73: CDI devices from CRI Config.CDIDevices: []"
Jun 21 02:18:58.530715 containerd[1518]: time="2025-06-21T02:18:58.530604668Z" level=info msg="CreateContainer within sandbox \"2caaba5119545c3ca3fde33feb809c3bda34f845431c8260b8d179fb7aa4c3c8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\""
Jun 21 02:18:58.531333 containerd[1518]: time="2025-06-21T02:18:58.531308676Z" level=info msg="StartContainer for \"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\""
Jun 21 02:18:58.532414 containerd[1518]: time="2025-06-21T02:18:58.532384087Z" level=info msg="connecting to shim ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73" address="unix:///run/containerd/s/5243e08a88736ec752438d4522d55057347408cc351eb2467b66365982008f49" protocol=ttrpc version=3
Jun 21 02:18:58.551389 systemd[1]: Started cri-containerd-ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73.scope - libcontainer container ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73.
Jun 21 02:18:58.615706 containerd[1518]: time="2025-06-21T02:18:58.615666418Z" level=info msg="StartContainer for \"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\" returns successfully"
Jun 21 02:18:58.837140 kubelet[2659]: I0621 02:18:58.836351 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:18:58.837140 kubelet[2659]: E0621 02:18:58.836699 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:59.452190 kubelet[2659]: E0621 02:18:59.452147 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:18:59.470294 kubelet[2659]: I0621 02:18:59.470225 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-j6rw9" podStartSLOduration=22.474674365 podStartE2EDuration="28.47011077s" podCreationTimestamp="2025-06-21 02:18:31 +0000 UTC" firstStartedPulling="2025-06-21 02:18:52.519436295 +0000 UTC m=+42.388613214" lastFinishedPulling="2025-06-21 02:18:58.5148727 +0000 UTC m=+48.384049619" observedRunningTime="2025-06-21 02:18:59.468954358 +0000 UTC m=+49.338131277" watchObservedRunningTime="2025-06-21 02:18:59.47011077 +0000 UTC m=+49.339287689"
Jun 21 02:18:59.834643 systemd-networkd[1439]: vxlan.calico: Link UP
Jun 21 02:18:59.834652 systemd-networkd[1439]: vxlan.calico: Gained carrier
Jun 21 02:19:00.454230 kubelet[2659]: I0621 02:19:00.453772 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:19:00.767420 containerd[1518]: time="2025-06-21T02:19:00.767213362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:19:00.767973 containerd[1518]: time="2025-06-21T02:19:00.767719928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925"
Jun 21 02:19:00.768665 containerd[1518]: time="2025-06-21T02:19:00.768609057Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:19:00.770673 containerd[1518]: time="2025-06-21T02:19:00.770640238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 02:19:00.772189 containerd[1518]: time="2025-06-21T02:19:00.771316965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 2.256290343s"
Jun 21 02:19:00.772189 containerd[1518]: time="2025-06-21T02:19:00.771351965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\""
Jun 21 02:19:00.797456 containerd[1518]: time="2025-06-21T02:19:00.797408757Z" level=info msg="CreateContainer within sandbox \"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 21 02:19:00.807477 containerd[1518]: time="2025-06-21T02:19:00.807433102Z" level=info msg="Container 2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd: CDI devices from CRI Config.CDIDevices: []"
Jun 21 02:19:00.816961 containerd[1518]: time="2025-06-21T02:19:00.816920161Z" level=info msg="CreateContainer within sandbox \"321d2aa1612e054ac57c5fe9268938de6ea1f5b00611ac4c7fc0c464e1a7a135\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd\""
Jun 21 02:19:00.817923 containerd[1518]: time="2025-06-21T02:19:00.817894491Z" level=info msg="StartContainer for \"2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd\""
Jun 21 02:19:00.819682 containerd[1518]: time="2025-06-21T02:19:00.819647349Z" level=info msg="connecting to shim 2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd" address="unix:///run/containerd/s/471e0135ca57898b492409850f20df855d994f1520b9ee08852be15c9b96843b" protocol=ttrpc version=3
Jun 21 02:19:00.847399 systemd[1]: Started cri-containerd-2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd.scope - libcontainer container 2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd.
Jun 21 02:19:00.905198 containerd[1518]: time="2025-06-21T02:19:00.905147960Z" level=info msg="StartContainer for \"2867f6414492f2271072de4dfa698d4a3dc1e223757b27762b2cd49687aa2fbd\" returns successfully"
Jun 21 02:19:01.184897 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL
Jun 21 02:19:01.307440 kubelet[2659]: I0621 02:19:01.307398 2659 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 21 02:19:01.307440 kubelet[2659]: I0621 02:19:01.307439 2659 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 21 02:19:01.480432 kubelet[2659]: I0621 02:19:01.480298 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2n26r" podStartSLOduration=21.443606496 podStartE2EDuration="30.480279818s" podCreationTimestamp="2025-06-21 02:18:31 +0000 UTC" firstStartedPulling="2025-06-21 02:18:51.748245185 +0000 UTC m=+41.617422104" lastFinishedPulling="2025-06-21 02:19:00.784918507 +0000 UTC m=+50.654095426" observedRunningTime="2025-06-21 02:19:01.479350208 +0000 UTC m=+51.348527167" watchObservedRunningTime="2025-06-21 02:19:01.480279818 +0000 UTC m=+51.349456737"
Jun 21 02:19:02.242662 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:50308.service - OpenSSH per-connection server daemon (10.0.0.1:50308).
Jun 21 02:19:02.312015 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 50308 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:02.313824 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:02.318882 systemd-logind[1493]: New session 12 of user core.
Jun 21 02:19:02.331417 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 21 02:19:02.597696 sshd[5644]: Connection closed by 10.0.0.1 port 50308
Jun 21 02:19:02.598054 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:02.610407 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:50308.service: Deactivated successfully.
Jun 21 02:19:02.614304 systemd[1]: session-12.scope: Deactivated successfully.
Jun 21 02:19:02.616315 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit.
Jun 21 02:19:02.618989 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:46942.service - OpenSSH per-connection server daemon (10.0.0.1:46942).
Jun 21 02:19:02.619678 systemd-logind[1493]: Removed session 12.
Jun 21 02:19:02.674970 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:02.676529 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:02.683742 systemd-logind[1493]: New session 13 of user core.
Jun 21 02:19:02.693393 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 21 02:19:02.898241 sshd[5661]: Connection closed by 10.0.0.1 port 46942
Jun 21 02:19:02.898651 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:02.916624 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:46942.service: Deactivated successfully.
Jun 21 02:19:02.922104 systemd[1]: session-13.scope: Deactivated successfully.
Jun 21 02:19:02.926273 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit.
Jun 21 02:19:02.930290 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:46948.service - OpenSSH per-connection server daemon (10.0.0.1:46948).
Jun 21 02:19:02.931944 systemd-logind[1493]: Removed session 13.
Jun 21 02:19:02.988271 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 46948 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:02.990278 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:02.997278 systemd-logind[1493]: New session 14 of user core.
Jun 21 02:19:03.006361 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 21 02:19:03.200748 sshd[5675]: Connection closed by 10.0.0.1 port 46948
Jun 21 02:19:03.200998 sshd-session[5673]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:03.206357 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:46948.service: Deactivated successfully.
Jun 21 02:19:03.208410 systemd[1]: session-14.scope: Deactivated successfully.
Jun 21 02:19:03.209381 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit.
Jun 21 02:19:03.210547 systemd-logind[1493]: Removed session 14.
Jun 21 02:19:06.498029 kubelet[2659]: I0621 02:19:06.497901 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:19:06.594912 containerd[1518]: time="2025-06-21T02:19:06.594865585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\" id:\"12beb231b5d53269efcfbc6a060347e88ffb63813f795d944ae524a6f20a7b6d\" pid:5709 exited_at:{seconds:1750472346 nanos:594520902}"
Jun 21 02:19:06.695102 containerd[1518]: time="2025-06-21T02:19:06.695061607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\" id:\"9b5bafdf3e517bbca92196e57547fe3162177acae5014c4466a1776b5872d466\" pid:5733 exited_at:{seconds:1750472346 nanos:694758404}"
Jun 21 02:19:08.042308 containerd[1518]: time="2025-06-21T02:19:08.039576495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\" id:\"669c744e311919960070a46a72eb7714a8befb7708d73361ac43829f2ecca2ca\" pid:5759 exited_at:{seconds:1750472348 nanos:38903528}"
Jun 21 02:19:08.218184 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:46956.service - OpenSSH per-connection server daemon (10.0.0.1:46956).
Jun 21 02:19:08.293311 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 46956 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:08.293897 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:08.299767 systemd-logind[1493]: New session 15 of user core.
Jun 21 02:19:08.308398 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 21 02:19:08.458258 sshd[5772]: Connection closed by 10.0.0.1 port 46956
Jun 21 02:19:08.458849 sshd-session[5770]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:08.462520 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:46956.service: Deactivated successfully.
Jun 21 02:19:08.464399 systemd[1]: session-15.scope: Deactivated successfully.
Jun 21 02:19:08.466932 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit.
Jun 21 02:19:08.468766 systemd-logind[1493]: Removed session 15.
Jun 21 02:19:13.472839 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:42122.service - OpenSSH per-connection server daemon (10.0.0.1:42122).
Jun 21 02:19:13.544223 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 42122 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:13.545515 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:13.550859 systemd-logind[1493]: New session 16 of user core.
Jun 21 02:19:13.563372 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 21 02:19:13.723687 sshd[5802]: Connection closed by 10.0.0.1 port 42122
Jun 21 02:19:13.725021 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:13.728616 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:42122.service: Deactivated successfully.
Jun 21 02:19:13.730329 systemd[1]: session-16.scope: Deactivated successfully.
Jun 21 02:19:13.733663 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit.
Jun 21 02:19:13.735330 systemd-logind[1493]: Removed session 16.
Jun 21 02:19:14.442504 containerd[1518]: time="2025-06-21T02:19:14.442447214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e40b9571be6d1b687082ece36b9449274b1b621fe5e262760b06c4c8e1c972df\" id:\"588b73b925433051c54a6b23afb0fb640c6a601d4267098ea606d0e702035787\" pid:5826 exit_status:1 exited_at:{seconds:1750472354 nanos:440846850}"
Jun 21 02:19:18.739512 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:42128.service - OpenSSH per-connection server daemon (10.0.0.1:42128).
Jun 21 02:19:18.813041 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 42128 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:18.814349 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:18.818251 systemd-logind[1493]: New session 17 of user core.
Jun 21 02:19:18.829382 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 21 02:19:19.036481 sshd[5844]: Connection closed by 10.0.0.1 port 42128
Jun 21 02:19:19.037252 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:19.042014 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:42128.service: Deactivated successfully.
Jun 21 02:19:19.044761 systemd[1]: session-17.scope: Deactivated successfully.
Jun 21 02:19:19.045632 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit.
Jun 21 02:19:19.047961 systemd-logind[1493]: Removed session 17.
Jun 21 02:19:24.051884 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174).
Jun 21 02:19:24.118228 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:24.119658 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:24.124507 systemd-logind[1493]: New session 18 of user core.
Jun 21 02:19:24.135442 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 21 02:19:24.304076 sshd[5869]: Connection closed by 10.0.0.1 port 59174
Jun 21 02:19:24.304520 sshd-session[5867]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:24.317644 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:59174.service: Deactivated successfully.
Jun 21 02:19:24.319583 systemd[1]: session-18.scope: Deactivated successfully.
Jun 21 02:19:24.320375 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit.
Jun 21 02:19:24.323556 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:59182.service - OpenSSH per-connection server daemon (10.0.0.1:59182).
Jun 21 02:19:24.324362 systemd-logind[1493]: Removed session 18.
Jun 21 02:19:24.381282 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 59182 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:24.382841 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:24.387665 systemd-logind[1493]: New session 19 of user core.
Jun 21 02:19:24.401454 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 21 02:19:24.636733 sshd[5884]: Connection closed by 10.0.0.1 port 59182
Jun 21 02:19:24.636741 sshd-session[5882]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:24.651909 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:59182.service: Deactivated successfully.
Jun 21 02:19:24.654571 systemd[1]: session-19.scope: Deactivated successfully.
Jun 21 02:19:24.655298 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit.
Jun 21 02:19:24.657959 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:59192.service - OpenSSH per-connection server daemon (10.0.0.1:59192).
Jun 21 02:19:24.659073 systemd-logind[1493]: Removed session 19.
Jun 21 02:19:24.720380 sshd[5896]: Accepted publickey for core from 10.0.0.1 port 59192 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:24.721754 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:24.725834 systemd-logind[1493]: New session 20 of user core.
Jun 21 02:19:24.736421 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 21 02:19:25.500755 sshd[5898]: Connection closed by 10.0.0.1 port 59192
Jun 21 02:19:25.501714 sshd-session[5896]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:25.512608 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:59192.service: Deactivated successfully.
Jun 21 02:19:25.514422 systemd[1]: session-20.scope: Deactivated successfully.
Jun 21 02:19:25.516553 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
Jun 21 02:19:25.522608 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:59200.service - OpenSSH per-connection server daemon (10.0.0.1:59200).
Jun 21 02:19:25.526686 systemd-logind[1493]: Removed session 20.
Jun 21 02:19:25.579835 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 59200 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:25.581185 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:25.585420 systemd-logind[1493]: New session 21 of user core.
Jun 21 02:19:25.595393 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 21 02:19:25.919960 sshd[5922]: Connection closed by 10.0.0.1 port 59200
Jun 21 02:19:25.920755 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:25.932773 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:59200.service: Deactivated successfully.
Jun 21 02:19:25.935069 systemd[1]: session-21.scope: Deactivated successfully.
Jun 21 02:19:25.936962 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
Jun 21 02:19:25.941385 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:59204.service - OpenSSH per-connection server daemon (10.0.0.1:59204).
Jun 21 02:19:25.944158 systemd-logind[1493]: Removed session 21.
Jun 21 02:19:25.959153 containerd[1518]: time="2025-06-21T02:19:25.958736254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\" id:\"36699040336076b6b9573ec01ed75cedfe38591e4073b5717a9550ed4cbc592c\" pid:5942 exited_at:{seconds:1750472365 nanos:958383253}"
Jun 21 02:19:26.000255 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 59204 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:26.001863 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:26.006526 systemd-logind[1493]: New session 22 of user core.
Jun 21 02:19:26.015373 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 21 02:19:26.167221 sshd[5960]: Connection closed by 10.0.0.1 port 59204
Jun 21 02:19:26.166718 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:26.170594 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:59204.service: Deactivated successfully.
Jun 21 02:19:26.173941 systemd[1]: session-22.scope: Deactivated successfully.
Jun 21 02:19:26.174692 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit.
Jun 21 02:19:26.175769 systemd-logind[1493]: Removed session 22.
Jun 21 02:19:27.244225 containerd[1518]: time="2025-06-21T02:19:27.244143413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bead4b52a41beed6709c1cd9e1f7d80f644b375f9f3635201cac821f932f3f3\" id:\"2d95a1b48fd450d0558faea6614d9c6d0bf5e92782f3da88fff9f7f60a1704a1\" pid:5984 exited_at:{seconds:1750472367 nanos:243865052}"
Jun 21 02:19:31.178317 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:59208.service - OpenSSH per-connection server daemon (10.0.0.1:59208).
Jun 21 02:19:31.232202 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 59208 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:31.233630 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:31.241514 systemd-logind[1493]: New session 23 of user core.
Jun 21 02:19:31.247043 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 21 02:19:31.381572 sshd[5999]: Connection closed by 10.0.0.1 port 59208
Jun 21 02:19:31.381921 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:31.386433 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:59208.service: Deactivated successfully.
Jun 21 02:19:31.388139 systemd[1]: session-23.scope: Deactivated successfully.
Jun 21 02:19:31.391280 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit.
Jun 21 02:19:31.392590 systemd-logind[1493]: Removed session 23.
Jun 21 02:19:36.397491 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426).
Jun 21 02:19:36.481847 sshd[6013]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:36.482376 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:36.490058 systemd-logind[1493]: New session 24 of user core.
Jun 21 02:19:36.500449 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 21 02:19:36.717354 containerd[1518]: time="2025-06-21T02:19:36.716924537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba0c5b12e767f8801de2fa27c2957a32deb25ded4d3fa5a8ef8ae74efbd9dc73\" id:\"c09d25379cb14e18972944aca4023187a12706e909d1f53c1276ed9737203c5d\" pid:6038 exited_at:{seconds:1750472376 nanos:716581295}"
Jun 21 02:19:36.769264 sshd[6016]: Connection closed by 10.0.0.1 port 52426
Jun 21 02:19:36.769592 sshd-session[6013]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:36.773976 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit.
Jun 21 02:19:36.774114 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:52426.service: Deactivated successfully.
Jun 21 02:19:36.776377 systemd[1]: session-24.scope: Deactivated successfully.
Jun 21 02:19:36.777974 systemd-logind[1493]: Removed session 24.
Jun 21 02:19:37.226162 kubelet[2659]: E0621 02:19:37.226064 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:19:38.728800 kubelet[2659]: I0621 02:19:38.728756 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 02:19:38.795794 containerd[1518]: time="2025-06-21T02:19:38.795745765Z" level=info msg="StopContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" with timeout 30 (s)"
Jun 21 02:19:38.796909 containerd[1518]: time="2025-06-21T02:19:38.796691890Z" level=info msg="Stop container \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" with signal terminated"
Jun 21 02:19:38.815375 systemd[1]: cri-containerd-86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21.scope: Deactivated successfully.
Jun 21 02:19:38.815763 systemd[1]: cri-containerd-86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21.scope: Consumed 2.095s CPU time, 40.3M memory peak, 1.3M read from disk.
Jun 21 02:19:38.817855 containerd[1518]: time="2025-06-21T02:19:38.817441562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" id:\"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" pid:5097 exit_status:1 exited_at:{seconds:1750472378 nanos:816678398}"
Jun 21 02:19:38.829100 containerd[1518]: time="2025-06-21T02:19:38.828985785Z" level=info msg="received exit event container_id:\"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" id:\"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" pid:5097 exit_status:1 exited_at:{seconds:1750472378 nanos:816678398}"
Jun 21 02:19:38.865398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21-rootfs.mount: Deactivated successfully.
Jun 21 02:19:38.895297 containerd[1518]: time="2025-06-21T02:19:38.895141502Z" level=info msg="StopContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" returns successfully"
Jun 21 02:19:38.900267 containerd[1518]: time="2025-06-21T02:19:38.900188449Z" level=info msg="StopPodSandbox for \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\""
Jun 21 02:19:38.900338 containerd[1518]: time="2025-06-21T02:19:38.900319770Z" level=info msg="Container to stop \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 02:19:38.913565 systemd[1]: cri-containerd-1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432.scope: Deactivated successfully.
Jun 21 02:19:38.918001 containerd[1518]: time="2025-06-21T02:19:38.917890145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" id:\"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" pid:4617 exit_status:137 exited_at:{seconds:1750472378 nanos:917453023}"
Jun 21 02:19:38.947102 containerd[1518]: time="2025-06-21T02:19:38.942649199Z" level=info msg="shim disconnected" id=1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432 namespace=k8s.io
Jun 21 02:19:38.947102 containerd[1518]: time="2025-06-21T02:19:38.942693279Z" level=warning msg="cleaning up after shim disconnected" id=1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432 namespace=k8s.io
Jun 21 02:19:38.947102 containerd[1518]: time="2025-06-21T02:19:38.942729039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 02:19:38.946270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432-rootfs.mount: Deactivated successfully.
Jun 21 02:19:38.966323 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432-shm.mount: Deactivated successfully.
Jun 21 02:19:38.970227 containerd[1518]: time="2025-06-21T02:19:38.970175547Z" level=info msg="received exit event sandbox_id:\"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" exit_status:137 exited_at:{seconds:1750472378 nanos:917453023}"
Jun 21 02:19:39.028229 systemd-networkd[1439]: cali7198c1761f2: Link DOWN
Jun 21 02:19:39.028237 systemd-networkd[1439]: cali7198c1761f2: Lost carrier
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.026 [INFO][6129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.026 [INFO][6129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" iface="eth0" netns="/var/run/netns/cni-4d26c776-32f5-1a34-6012-957a1e8d500f"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.027 [INFO][6129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" iface="eth0" netns="/var/run/netns/cni-4d26c776-32f5-1a34-6012-957a1e8d500f"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.038 [INFO][6129] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" after=11.666864ms iface="eth0" netns="/var/run/netns/cni-4d26c776-32f5-1a34-6012-957a1e8d500f"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.038 [INFO][6129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.038 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.058 [INFO][6143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.058 [INFO][6143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.058 [INFO][6143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.092 [INFO][6143] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.093 [INFO][6143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" HandleID="k8s-pod-network.1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432" Workload="localhost-k8s-calico--apiserver--6c75966784--9qsx8-eth0"
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.094 [INFO][6143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 21 02:19:39.098441 containerd[1518]: 2025-06-21 02:19:39.096 [INFO][6129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432"
Jun 21 02:19:39.098831 containerd[1518]: time="2025-06-21T02:19:39.098711409Z" level=info msg="TearDown network for sandbox \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" successfully"
Jun 21 02:19:39.098831 containerd[1518]: time="2025-06-21T02:19:39.098737529Z" level=info msg="StopPodSandbox for \"1b3ecff9ec1ad357ae68860e225683d37595d05e91284c9a388ba2a64a975432\" returns successfully"
Jun 21 02:19:39.102306 systemd[1]: run-netns-cni\x2d4d26c776\x2d32f5\x2d1a34\x2d6012\x2d957a1e8d500f.mount: Deactivated successfully.
Jun 21 02:19:39.226332 kubelet[2659]: I0621 02:19:39.226283    2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pjw6\" (UniqueName: \"kubernetes.io/projected/13d8fede-0173-472e-b7e7-ee6276d45c03-kube-api-access-9pjw6\") pod \"13d8fede-0173-472e-b7e7-ee6276d45c03\" (UID: \"13d8fede-0173-472e-b7e7-ee6276d45c03\") "
Jun 21 02:19:39.226332 kubelet[2659]: I0621 02:19:39.226330    2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13d8fede-0173-472e-b7e7-ee6276d45c03-calico-apiserver-certs\") pod \"13d8fede-0173-472e-b7e7-ee6276d45c03\" (UID: \"13d8fede-0173-472e-b7e7-ee6276d45c03\") "
Jun 21 02:19:39.243353 systemd[1]: var-lib-kubelet-pods-13d8fede\x2d0173\x2d472e\x2db7e7\x2dee6276d45c03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9pjw6.mount: Deactivated successfully.
Jun 21 02:19:39.245838 kubelet[2659]: I0621 02:19:39.245776    2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d8fede-0173-472e-b7e7-ee6276d45c03-kube-api-access-9pjw6" (OuterVolumeSpecName: "kube-api-access-9pjw6") pod "13d8fede-0173-472e-b7e7-ee6276d45c03" (UID: "13d8fede-0173-472e-b7e7-ee6276d45c03"). InnerVolumeSpecName "kube-api-access-9pjw6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 21 02:19:39.245945 kubelet[2659]: I0621 02:19:39.245914    2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d8fede-0173-472e-b7e7-ee6276d45c03-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "13d8fede-0173-472e-b7e7-ee6276d45c03" (UID: "13d8fede-0173-472e-b7e7-ee6276d45c03"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 21 02:19:39.327571 kubelet[2659]: I0621 02:19:39.327456    2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9pjw6\" (UniqueName: \"kubernetes.io/projected/13d8fede-0173-472e-b7e7-ee6276d45c03-kube-api-access-9pjw6\") on node \"localhost\" DevicePath \"\""
Jun 21 02:19:39.327571 kubelet[2659]: I0621 02:19:39.327492    2659 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13d8fede-0173-472e-b7e7-ee6276d45c03-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jun 21 02:19:39.561941 kubelet[2659]: I0621 02:19:39.561906    2659 scope.go:117] "RemoveContainer" containerID="86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21"
Jun 21 02:19:39.565115 systemd[1]: Removed slice kubepods-besteffort-pod13d8fede_0173_472e_b7e7_ee6276d45c03.slice - libcontainer container kubepods-besteffort-pod13d8fede_0173_472e_b7e7_ee6276d45c03.slice.
Jun 21 02:19:39.565225 systemd[1]: kubepods-besteffort-pod13d8fede_0173_472e_b7e7_ee6276d45c03.slice: Consumed 2.113s CPU time, 40.6M memory peak, 1.3M read from disk.
Jun 21 02:19:39.566029 containerd[1518]: time="2025-06-21T02:19:39.565994607Z" level=info msg="RemoveContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\""
Jun 21 02:19:39.600056 containerd[1518]: time="2025-06-21T02:19:39.599958153Z" level=info msg="RemoveContainer for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" returns successfully"
Jun 21 02:19:39.600332 kubelet[2659]: I0621 02:19:39.600296    2659 scope.go:117] "RemoveContainer" containerID="86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21"
Jun 21 02:19:39.600646 containerd[1518]: time="2025-06-21T02:19:39.600597077Z" level=error msg="ContainerStatus for \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\": not found"
Jun 21 02:19:39.603642 kubelet[2659]: E0621 02:19:39.602958    2659 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\": not found" containerID="86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21"
Jun 21 02:19:39.607933 kubelet[2659]: I0621 02:19:39.607833    2659 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21"} err="failed to get container status \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\": rpc error: code = NotFound desc = an error occurred when try to find container \"86c9a90e8f4cad5cc8b1a708b34dd8c25ee7155e6c7c811ce1f2ae25a6743c21\": not found"
Jun 21 02:19:39.864729 systemd[1]: var-lib-kubelet-pods-13d8fede\x2d0173\x2d472e\x2db7e7\x2dee6276d45c03-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jun 21 02:19:40.227068 kubelet[2659]: E0621 02:19:40.227031    2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 02:19:40.230251 kubelet[2659]: I0621 02:19:40.229586    2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d8fede-0173-472e-b7e7-ee6276d45c03" path="/var/lib/kubelet/pods/13d8fede-0173-472e-b7e7-ee6276d45c03/volumes"
Jun 21 02:19:41.783760 systemd[1]: Started sshd@24-10.0.0.75:22-10.0.0.1:52434.service - OpenSSH per-connection server daemon (10.0.0.1:52434).
Jun 21 02:19:41.856134 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 52434 ssh2: RSA SHA256:7C6jsm4BYxuBnpyGUGr+M8fttrfZ+ALeZqEnZbBdE/w
Jun 21 02:19:41.857856 sshd-session[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 02:19:41.861934 systemd-logind[1493]: New session 25 of user core.
Jun 21 02:19:41.879357 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 21 02:19:42.081097 sshd[6165]: Connection closed by 10.0.0.1 port 52434
Jun 21 02:19:42.081382 sshd-session[6163]: pam_unix(sshd:session): session closed for user core
Jun 21 02:19:42.086275 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit.
Jun 21 02:19:42.086398 systemd[1]: sshd@24-10.0.0.75:22-10.0.0.1:52434.service: Deactivated successfully.
Jun 21 02:19:42.088884 systemd[1]: session-25.scope: Deactivated successfully.
Jun 21 02:19:42.090386 systemd-logind[1493]: Removed session 25.