Jul 10 04:56:07.777066 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 04:56:07.777087 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Jul 10 03:48:30 -00 2025 Jul 10 04:56:07.777096 kernel: KASLR enabled Jul 10 04:56:07.777102 kernel: efi: EFI v2.7 by EDK II Jul 10 04:56:07.777107 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Jul 10 04:56:07.777112 kernel: random: crng init done Jul 10 04:56:07.777119 kernel: secureboot: Secure boot disabled Jul 10 04:56:07.777125 kernel: ACPI: Early table checksum verification disabled Jul 10 04:56:07.777130 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Jul 10 04:56:07.777137 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 04:56:07.777143 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777148 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777154 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777160 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777167 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777174 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777180 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777186 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777192 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 04:56:07.777198 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 04:56:07.777205 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 10 04:56:07.777211 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 04:56:07.777217 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Jul 10 04:56:07.777223 kernel: Zone ranges: Jul 10 04:56:07.777229 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 04:56:07.777236 kernel: DMA32 empty Jul 10 04:56:07.777242 kernel: Normal empty Jul 10 04:56:07.777247 kernel: Device empty Jul 10 04:56:07.777253 kernel: Movable zone start for each node Jul 10 04:56:07.777259 kernel: Early memory node ranges Jul 10 04:56:07.777265 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Jul 10 04:56:07.777271 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Jul 10 04:56:07.777277 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Jul 10 04:56:07.777283 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Jul 10 04:56:07.777289 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Jul 10 04:56:07.777295 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Jul 10 04:56:07.777301 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Jul 10 04:56:07.777308 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Jul 10 04:56:07.777314 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Jul 10 04:56:07.777320 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 10 04:56:07.777329 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 10 04:56:07.777335 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 10 04:56:07.777342 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 10 04:56:07.777351 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 04:56:07.777357 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 04:56:07.777364 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Jul 10 04:56:07.777370 kernel: psci: probing for conduit method from ACPI. Jul 10 04:56:07.777377 kernel: psci: PSCIv1.1 detected in firmware. Jul 10 04:56:07.777383 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 04:56:07.777389 kernel: psci: Trusted OS migration not required Jul 10 04:56:07.777396 kernel: psci: SMC Calling Convention v1.1 Jul 10 04:56:07.777402 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 04:56:07.777409 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 10 04:56:07.777417 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 10 04:56:07.777423 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 04:56:07.777429 kernel: Detected PIPT I-cache on CPU0 Jul 10 04:56:07.777436 kernel: CPU features: detected: GIC system register CPU interface Jul 10 04:56:07.777442 kernel: CPU features: detected: Spectre-v4 Jul 10 04:56:07.777448 kernel: CPU features: detected: Spectre-BHB Jul 10 04:56:07.777454 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 04:56:07.777461 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 04:56:07.777467 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 04:56:07.777473 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 04:56:07.777479 kernel: alternatives: applying boot alternatives Jul 10 04:56:07.777487 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=874e2d0098f5b2b6ddee2985c0ed93d47404937b0b8fa9410bd21a088c57c730 Jul 10 04:56:07.777495 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 04:56:07.777501 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 04:56:07.777507 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 04:56:07.777514 kernel: Fallback order for Node 0: 0 Jul 10 04:56:07.777520 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 10 04:56:07.777526 kernel: Policy zone: DMA Jul 10 04:56:07.777532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 04:56:07.777539 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 10 04:56:07.777545 kernel: software IO TLB: area num 4. Jul 10 04:56:07.777551 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 10 04:56:07.777558 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Jul 10 04:56:07.777565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 04:56:07.777572 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 04:56:07.777578 kernel: rcu: RCU event tracing is enabled. Jul 10 04:56:07.777585 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
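The kernel command line logged just above (BOOT_IMAGE, mount.usr, verity.usr, root=LABEL=ROOT, verity.usrhash, ...) is what dracut and Ignition consume later in this log. As a rough illustration only (parse_cmdline is a hypothetical helper, not the kernel's or dracut's real parser), such a line splits into key=value pairs like this:

# Minimal sketch of splitting a kernel command line into parameters.
# parse_cmdline is illustrative; quoting and duplicate keys are ignored.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True   # bare flags (none in this line) become True
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])       # LABEL=ROOT
print(params["console"])    # ttyS0,115200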
Jul 10 04:56:07.777591 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 04:56:07.777597 kernel: Tracing variant of Tasks RCU enabled. Jul 10 04:56:07.777604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 04:56:07.777610 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 04:56:07.777617 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 04:56:07.777623 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 04:56:07.777629 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 04:56:07.777637 kernel: GICv3: 256 SPIs implemented Jul 10 04:56:07.777643 kernel: GICv3: 0 Extended SPIs implemented Jul 10 04:56:07.777650 kernel: Root IRQ handler: gic_handle_irq Jul 10 04:56:07.777656 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 10 04:56:07.777662 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 10 04:56:07.777668 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 04:56:07.777675 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 04:56:07.777681 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 10 04:56:07.777687 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 10 04:56:07.777694 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 10 04:56:07.777700 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 10 04:56:07.777707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 04:56:07.777714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 04:56:07.777720 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 04:56:07.777727 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 04:56:07.777733 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 04:56:07.777740 kernel: arm-pv: using stolen time PV Jul 10 04:56:07.777746 kernel: Console: colour dummy device 80x25 Jul 10 04:56:07.777753 kernel: ACPI: Core revision 20240827 Jul 10 04:56:07.777760 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 04:56:07.777766 kernel: pid_max: default: 32768 minimum: 301 Jul 10 04:56:07.777773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 04:56:07.777781 kernel: landlock: Up and running. Jul 10 04:56:07.777787 kernel: SELinux: Initializing. Jul 10 04:56:07.777793 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 04:56:07.777800 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 04:56:07.777807 kernel: rcu: Hierarchical SRCU implementation. Jul 10 04:56:07.777813 kernel: rcu: Max phase no-delay instances is 400. Jul 10 04:56:07.777820 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 04:56:07.777826 kernel: Remapping and enabling EFI services. Jul 10 04:56:07.777833 kernel: smp: Bringing up secondary CPUs ... 
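The 25.00 MHz architected timer reported above ties several of these numbers together: a 25 MHz tick is 40 ns ("resolution 40ns"), and with a 1000 Hz scheduler tick that is 25 000 timer cycles per jiffy, which is where lpj=25000 and 50.00 BogoMIPS come from. A quick arithmetic check (HZ=1000 is an assumption, the tick rate is not printed in this log):

# Arithmetic check of the timer-derived figures above.
freq_hz = 25_000_000             # arch_timer running at 25.00 MHz
resolution_ns = 1e9 / freq_hz    # 40.0, matching "resolution 40ns"
HZ = 1000                        # assumed tick rate
lpj = freq_hz // HZ              # 25000, matching "lpj=25000"
bogomips = lpj / (500_000 / HZ)  # 50.0, matching "50.00 BogoMIPS"
print(resolution_ns, lpj, bogomips)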
Jul 10 04:56:07.777845 kernel: Detected PIPT I-cache on CPU1 Jul 10 04:56:07.777852 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 04:56:07.777858 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 10 04:56:07.777866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 04:56:07.777873 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 04:56:07.777880 kernel: Detected PIPT I-cache on CPU2 Jul 10 04:56:07.777887 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 04:56:07.777894 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 10 04:56:07.777901 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 04:56:07.777917 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 04:56:07.777925 kernel: Detected PIPT I-cache on CPU3 Jul 10 04:56:07.777931 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 04:56:07.777938 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 10 04:56:07.777945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 04:56:07.777952 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 04:56:07.777958 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 04:56:07.777965 kernel: SMP: Total of 4 processors activated. Jul 10 04:56:07.777982 kernel: CPU: All CPU(s) started at EL1 Jul 10 04:56:07.777992 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 04:56:07.777999 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 04:56:07.778006 kernel: CPU features: detected: Common not Private translations Jul 10 04:56:07.778013 kernel: CPU features: detected: CRC32 instructions Jul 10 04:56:07.778020 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 10 04:56:07.778027 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 04:56:07.778034 kernel: CPU features: detected: LSE atomic instructions Jul 10 04:56:07.778041 kernel: CPU features: detected: Privileged Access Never Jul 10 04:56:07.778049 kernel: CPU features: detected: RAS Extension Support Jul 10 04:56:07.778056 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 04:56:07.778063 kernel: alternatives: applying system-wide alternatives Jul 10 04:56:07.778070 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 10 04:56:07.778077 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved) Jul 10 04:56:07.778084 kernel: devtmpfs: initialized Jul 10 04:56:07.778091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 04:56:07.778098 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 04:56:07.778105 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 10 04:56:07.778113 kernel: 0 pages in range for non-PLT usage Jul 10 04:56:07.778120 kernel: 508448 pages in range for PLT usage Jul 10 04:56:07.778126 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 04:56:07.778133 kernel: SMBIOS 3.0.0 present. 
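The totals in the Memory: line above are consistent with the zone information earlier in the log, assuming the arm64 default of 4 KiB pages (which the exact match supports):

# Consistency check of the Memory: line against "Total pages: 643072".
total_pages = 643_072           # from "Built 1 zonelists ... Total pages: 643072"
page_kib = 4                    # arm64 default page size (assumption)
print(total_pages * page_kib)   # 2572288, the "/2572288K" total in the Memory: line
print(125_920 + 16_384)         # 142304 K of "reserved" plus "cma-reserved"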
Jul 10 04:56:07.778140 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 10 04:56:07.778147 kernel: DMI: Memory slots populated: 1/1 Jul 10 04:56:07.778154 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 04:56:07.778160 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 04:56:07.778167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 04:56:07.778176 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 04:56:07.778182 kernel: audit: initializing netlink subsys (disabled) Jul 10 04:56:07.778189 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Jul 10 04:56:07.778196 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 04:56:07.778203 kernel: cpuidle: using governor menu Jul 10 04:56:07.778210 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 04:56:07.778217 kernel: ASID allocator initialised with 32768 entries Jul 10 04:56:07.778224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 04:56:07.778231 kernel: Serial: AMBA PL011 UART driver Jul 10 04:56:07.778239 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 04:56:07.778247 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 04:56:07.778254 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 04:56:07.778261 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 10 04:56:07.778268 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 04:56:07.778274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 04:56:07.778281 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 04:56:07.778288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 10 04:56:07.778295 kernel: ACPI: Added _OSI(Module Device) Jul 10 04:56:07.778303 kernel: ACPI: Added _OSI(Processor Device) Jul 10 04:56:07.778309 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 04:56:07.778316 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 04:56:07.778323 kernel: ACPI: Interpreter enabled Jul 10 04:56:07.778330 kernel: ACPI: Using GIC for interrupt routing Jul 10 04:56:07.778337 kernel: ACPI: MCFG table detected, 1 entries Jul 10 04:56:07.778344 kernel: ACPI: CPU0 has been hot-added Jul 10 04:56:07.778350 kernel: ACPI: CPU1 has been hot-added Jul 10 04:56:07.778357 kernel: ACPI: CPU2 has been hot-added Jul 10 04:56:07.778364 kernel: ACPI: CPU3 has been hot-added Jul 10 04:56:07.778372 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 04:56:07.778379 kernel: printk: legacy console [ttyAMA0] enabled Jul 10 04:56:07.778386 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 04:56:07.778505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 04:56:07.778570 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 04:56:07.778636 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 04:56:07.778694 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 04:56:07.778752 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 04:56:07.778762 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 04:56:07.778769 
kernel: PCI host bridge to bus 0000:00 Jul 10 04:56:07.778832 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 04:56:07.778886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 04:56:07.778946 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 04:56:07.779020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 04:56:07.779101 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 10 04:56:07.779173 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 04:56:07.779233 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 10 04:56:07.779292 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 10 04:56:07.779349 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 04:56:07.779407 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 10 04:56:07.779466 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 10 04:56:07.779526 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 10 04:56:07.779579 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 04:56:07.779632 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 04:56:07.779684 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 04:56:07.779693 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 04:56:07.779700 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 04:56:07.779707 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 04:56:07.779716 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 04:56:07.779723 kernel: iommu: Default domain type: Translated Jul 10 04:56:07.779730 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 04:56:07.779737 kernel: efivars: Registered efivars operations Jul 10 04:56:07.779744 kernel: vgaarb: loaded Jul 10 04:56:07.779751 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 04:56:07.779758 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 04:56:07.779764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 04:56:07.779771 kernel: pnp: PnP ACPI init Jul 10 04:56:07.779836 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 04:56:07.779846 kernel: pnp: PnP ACPI: found 1 devices Jul 10 04:56:07.779853 kernel: NET: Registered PF_INET protocol family Jul 10 04:56:07.779860 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 04:56:07.779867 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 04:56:07.779874 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 04:56:07.779881 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 04:56:07.779888 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 04:56:07.779896 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 04:56:07.779903 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 04:56:07.779920 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 04:56:07.779927 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 04:56:07.779934 kernel: PCI: CLS 0 bytes, default 64 Jul 10 04:56:07.779940 
kernel: kvm [1]: HYP mode not available Jul 10 04:56:07.779947 kernel: Initialise system trusted keyrings Jul 10 04:56:07.779954 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 04:56:07.779961 kernel: Key type asymmetric registered Jul 10 04:56:07.779969 kernel: Asymmetric key parser 'x509' registered Jul 10 04:56:07.779994 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 04:56:07.780001 kernel: io scheduler mq-deadline registered Jul 10 04:56:07.780008 kernel: io scheduler kyber registered Jul 10 04:56:07.780014 kernel: io scheduler bfq registered Jul 10 04:56:07.780021 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 04:56:07.780028 kernel: ACPI: button: Power Button [PWRB] Jul 10 04:56:07.780035 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 04:56:07.780102 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 04:56:07.780114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 04:56:07.780121 kernel: thunder_xcv, ver 1.0 Jul 10 04:56:07.780128 kernel: thunder_bgx, ver 1.0 Jul 10 04:56:07.780134 kernel: nicpf, ver 1.0 Jul 10 04:56:07.780141 kernel: nicvf, ver 1.0 Jul 10 04:56:07.780214 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 04:56:07.780271 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T04:56:07 UTC (1752123367) Jul 10 04:56:07.780280 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 04:56:07.780287 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 10 04:56:07.780296 kernel: watchdog: NMI not fully supported Jul 10 04:56:07.780303 kernel: watchdog: Hard watchdog permanently disabled Jul 10 04:56:07.780309 kernel: NET: Registered PF_INET6 protocol family Jul 10 04:56:07.780316 kernel: Segment Routing with IPv6 Jul 10 04:56:07.780323 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 04:56:07.780330 kernel: NET: Registered PF_PACKET protocol family Jul 10 04:56:07.780336 kernel: Key type dns_resolver registered Jul 10 04:56:07.780343 kernel: registered taskstats version 1 Jul 10 04:56:07.780350 kernel: Loading compiled-in X.509 certificates Jul 10 04:56:07.780358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 7bb0b7c91c286ced2ccdfe80b8988eaf5e2c538e' Jul 10 04:56:07.780365 kernel: Demotion targets for Node 0: null Jul 10 04:56:07.780372 kernel: Key type .fscrypt registered Jul 10 04:56:07.780378 kernel: Key type fscrypt-provisioning registered Jul 10 04:56:07.780385 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 04:56:07.780392 kernel: ima: Allocated hash algorithm: sha1 Jul 10 04:56:07.780399 kernel: ima: No architecture policies found Jul 10 04:56:07.780406 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 04:56:07.780414 kernel: clk: Disabling unused clocks Jul 10 04:56:07.780421 kernel: PM: genpd: Disabling unused power domains Jul 10 04:56:07.780428 kernel: Warning: unable to open an initial console. Jul 10 04:56:07.780435 kernel: Freeing unused kernel memory: 39424K Jul 10 04:56:07.780441 kernel: Run /init as init process Jul 10 04:56:07.780448 kernel: with arguments: Jul 10 04:56:07.780455 kernel: /init Jul 10 04:56:07.780461 kernel: with environment: Jul 10 04:56:07.780468 kernel: HOME=/ Jul 10 04:56:07.780475 kernel: TERM=linux Jul 10 04:56:07.780483 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 04:56:07.780490 systemd[1]: Successfully made /usr/ read-only. 
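The rtc-efi line above prints the same instant twice, as an ISO timestamp and as a Unix epoch; converting the epoch back confirms the two agree (and matches the journal timestamps in this log):

# Confirm the epoch printed by rtc-efi matches the ISO timestamp.
from datetime import datetime, timezone
print(datetime.fromtimestamp(1752123367, tz=timezone.utc).isoformat())
# 2025-07-10T04:56:07+00:00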
Jul 10 04:56:07.780500 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 04:56:07.780508 systemd[1]: Detected virtualization kvm. Jul 10 04:56:07.780515 systemd[1]: Detected architecture arm64. Jul 10 04:56:07.780522 systemd[1]: Running in initrd. Jul 10 04:56:07.780529 systemd[1]: No hostname configured, using default hostname. Jul 10 04:56:07.780538 systemd[1]: Hostname set to . Jul 10 04:56:07.780545 systemd[1]: Initializing machine ID from VM UUID. Jul 10 04:56:07.780552 systemd[1]: Queued start job for default target initrd.target. Jul 10 04:56:07.780560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 04:56:07.780567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 04:56:07.780575 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 04:56:07.780583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 04:56:07.780590 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 04:56:07.780599 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 04:56:07.780607 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 04:56:07.780615 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 04:56:07.780623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 04:56:07.780630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 04:56:07.780637 systemd[1]: Reached target paths.target - Path Units. Jul 10 04:56:07.780645 systemd[1]: Reached target slices.target - Slice Units. Jul 10 04:56:07.780653 systemd[1]: Reached target swap.target - Swaps. Jul 10 04:56:07.780660 systemd[1]: Reached target timers.target - Timer Units. Jul 10 04:56:07.780667 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 04:56:07.780675 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 04:56:07.780682 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 04:56:07.780690 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 04:56:07.780697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 04:56:07.780704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 04:56:07.780713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 04:56:07.780720 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 04:56:07.780728 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 04:56:07.780735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 04:56:07.780742 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
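The device unit names above are the systemd-escaped forms of block-device paths: /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device (leading "/" dropped, remaining "/" turned into "-", other characters such as "-" written as \xXX). A rough sketch of that mangling; the real tool is systemd-escape --path --suffix=device, and path_to_device_unit below is an illustrative helper that only covers the characters seen in this log:

# Rough sketch of systemd path-to-unit-name escaping (illustrative only).
def path_to_device_unit(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # other characters, e.g. '-', become \xXX
    return "".join(out) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device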
Jul 10 04:56:07.780750 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 04:56:07.780757 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 04:56:07.780765 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 04:56:07.780772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 04:56:07.780781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 04:56:07.780788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 04:56:07.780796 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 04:56:07.780803 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 04:56:07.780812 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 04:56:07.780833 systemd-journald[245]: Collecting audit messages is disabled. Jul 10 04:56:07.780851 systemd-journald[245]: Journal started Jul 10 04:56:07.780870 systemd-journald[245]: Runtime Journal (/run/log/journal/79db761e74d846c28e8ec9f1b02b3763) is 6M, max 48.5M, 42.4M free. Jul 10 04:56:07.774738 systemd-modules-load[246]: Inserted module 'overlay' Jul 10 04:56:07.783033 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 04:56:07.784941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 04:56:07.786016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 04:56:07.788604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 04:56:07.792443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 04:56:07.792463 kernel: Bridge firewalling registered Jul 10 04:56:07.792372 systemd-modules-load[246]: Inserted module 'br_netfilter' Jul 10 04:56:07.793048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 04:56:07.794269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 04:56:07.806588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 04:56:07.807913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 04:56:07.812271 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 04:56:07.815717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 04:56:07.817779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 04:56:07.820013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 04:56:07.823092 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 04:56:07.825081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 04:56:07.826609 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 10 04:56:07.847984 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=874e2d0098f5b2b6ddee2985c0ed93d47404937b0b8fa9410bd21a088c57c730 Jul 10 04:56:07.869465 systemd-resolved[285]: Positive Trust Anchors: Jul 10 04:56:07.869482 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 04:56:07.869511 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 04:56:07.874308 systemd-resolved[285]: Defaulting to hostname 'linux'. Jul 10 04:56:07.875686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 04:56:07.877641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 04:56:07.920013 kernel: SCSI subsystem initialized Jul 10 04:56:07.923990 kernel: Loading iSCSI transport class v2.0-870. Jul 10 04:56:07.930998 kernel: iscsi: registered transport (tcp) Jul 10 04:56:07.952542 kernel: iscsi: registered transport (qla4xxx) Jul 10 04:56:07.952562 kernel: QLogic iSCSI HBA Driver Jul 10 04:56:07.969813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 04:56:07.993048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 04:56:07.995125 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 04:56:08.042178 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 04:56:08.044354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 04:56:08.117004 kernel: raid6: neonx8 gen() 15780 MB/s Jul 10 04:56:08.133998 kernel: raid6: neonx4 gen() 15814 MB/s Jul 10 04:56:08.151001 kernel: raid6: neonx2 gen() 13189 MB/s Jul 10 04:56:08.167994 kernel: raid6: neonx1 gen() 10460 MB/s Jul 10 04:56:08.184992 kernel: raid6: int64x8 gen() 6895 MB/s Jul 10 04:56:08.201992 kernel: raid6: int64x4 gen() 7341 MB/s Jul 10 04:56:08.218992 kernel: raid6: int64x2 gen() 6086 MB/s Jul 10 04:56:08.235999 kernel: raid6: int64x1 gen() 5033 MB/s Jul 10 04:56:08.236028 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s Jul 10 04:56:08.253003 kernel: raid6: .... xor() 12309 MB/s, rmw enabled Jul 10 04:56:08.253031 kernel: raid6: using neon recovery algorithm Jul 10 04:56:08.258002 kernel: xor: measuring software checksum speed Jul 10 04:56:08.258027 kernel: 8regs : 21630 MB/sec Jul 10 04:56:08.259424 kernel: 32regs : 19496 MB/sec Jul 10 04:56:08.259438 kernel: arm64_neon : 28080 MB/sec Jul 10 04:56:08.259447 kernel: xor: using function: arm64_neon (28080 MB/sec) Jul 10 04:56:08.317029 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 04:56:08.325036 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
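The raid6 and xor lines above are boot-time benchmarks: each candidate implementation is timed and the fastest is kept, neonx4 at 15814 MB/s for raid6 generation and arm64_neon at 28080 MB/sec for xor checksums. A tiny sketch of that selection step, using the throughput figures taken from this log:

# Pick the fastest implementation from the benchmark results logged above.
raid6_gen_mb_s = {
    "neonx8": 15780, "neonx4": 15814, "neonx2": 13189, "neonx1": 10460,
    "int64x8": 6895, "int64x4": 7341, "int64x2": 6086, "int64x1": 5033,
}
best = max(raid6_gen_mb_s, key=raid6_gen_mb_s.get)
print(best, raid6_gen_mb_s[best])   # neonx4 15814, matching "using algorithm neonx4"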
Jul 10 04:56:08.327249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 04:56:08.357744 systemd-udevd[496]: Using default interface naming scheme 'v255'. Jul 10 04:56:08.362599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 04:56:08.364266 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 04:56:08.387753 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Jul 10 04:56:08.409264 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 04:56:08.411204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 04:56:08.469827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 04:56:08.472076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 04:56:08.520343 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 10 04:56:08.520487 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 04:56:08.524066 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 04:56:08.524102 kernel: GPT:9289727 != 19775487 Jul 10 04:56:08.524112 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 04:56:08.524121 kernel: GPT:9289727 != 19775487 Jul 10 04:56:08.525213 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 04:56:08.525245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 04:56:08.527824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 04:56:08.527946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 04:56:08.530984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 04:56:08.532730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 04:56:08.560558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 04:56:08.561692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 04:56:08.565016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 04:56:08.575432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 04:56:08.582540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 04:56:08.585008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 04:56:08.592045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 04:56:08.592944 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 04:56:08.594495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 04:56:08.596061 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 04:56:08.598174 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 04:56:08.599614 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 04:56:08.623648 disk-uuid[590]: Primary Header is updated. Jul 10 04:56:08.623648 disk-uuid[590]: Secondary Entries is updated. Jul 10 04:56:08.623648 disk-uuid[590]: Secondary Header is updated. 
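The GPT warnings above are expected on a first boot from a disk image: the backup GPT header was written for a smaller image (last usable LBA 9289727), while the attached virtio disk actually has 19775488 sectors, so the backup header is no longer at the end of the disk; the disk-uuid messages nearby show the headers being rewritten. The size figures are easy to confirm:

# Size check for the virtio disk reported above (19775488 sectors of 512 bytes).
sectors, sector_size = 19_775_488, 512
size_bytes = sectors * sector_size
print(round(size_bytes / 1e9, 2))    # 10.13, logged as "10.1 GB"
print(round(size_bytes / 2**30, 2))  # 9.43, logged as "9.43 GiB"
# Size the stale backup GPT header expects, explaining "GPT:9289727 != 19775487":
print(round((9_289_727 + 1) * sector_size / 1e9, 2))   # 4.76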
Jul 10 04:56:08.627581 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 04:56:08.629505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 04:56:09.640007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 04:56:09.640326 disk-uuid[593]: The operation has completed successfully. Jul 10 04:56:09.664440 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 04:56:09.664534 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 04:56:09.687830 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 04:56:09.713652 sh[609]: Success Jul 10 04:56:09.728417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 04:56:09.728458 kernel: device-mapper: uevent: version 1.0.3 Jul 10 04:56:09.729649 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 04:56:09.738997 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 10 04:56:09.763720 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 04:56:09.765753 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 04:56:09.780582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 04:56:09.786579 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 04:56:09.786618 kernel: BTRFS: device fsid bbc00cd2-d632-4117-a710-f3ce4caec9fa devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (621) Jul 10 04:56:09.787605 kernel: BTRFS info (device dm-0): first mount of filesystem bbc00cd2-d632-4117-a710-f3ce4caec9fa Jul 10 04:56:09.787634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 04:56:09.788255 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 04:56:09.791672 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 04:56:09.792660 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 04:56:09.793682 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 04:56:09.794341 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 04:56:09.796795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 04:56:09.811790 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 10 04:56:09.811824 kernel: BTRFS info (device vda6): first mount of filesystem 7074a0c9-061c-495f-8dec-b696a838eae0 Jul 10 04:56:09.812554 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 04:56:09.812581 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 04:56:09.817990 kernel: BTRFS info (device vda6): last unmount of filesystem 7074a0c9-061c-495f-8dec-b696a838eae0 Jul 10 04:56:09.818135 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 04:56:09.819804 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 04:56:09.882625 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 04:56:09.885576 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
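verity-setup in the lines above wires /dev/mapper/usr to dm-verity: /usr is read in fixed-size blocks, each block is hashed (sha256-ce per the device-mapper line), and the hashes are folded into a tree whose root must equal the verity.usrhash value from the kernel command line. The toy illustration below shows only the idea; toy_root_hash is a made-up helper, and the real on-disk format (salt, hash block layout, superblock) differs, so it will not reproduce verity.usrhash:

# Toy illustration of the dm-verity idea: hash data blocks, then hash the hashes.
import hashlib

def toy_root_hash(data: bytes, block_size: int = 4096) -> str:
    block_hashes = b"".join(
        hashlib.sha256(data[i:i + block_size].ljust(block_size, b"\0")).digest()
        for i in range(0, len(data), block_size))
    return hashlib.sha256(block_hashes).hexdigest()

print(toy_root_hash(b"example /usr contents" * 1000))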
Jul 10 04:56:09.936499 systemd-networkd[798]: lo: Link UP Jul 10 04:56:09.936513 systemd-networkd[798]: lo: Gained carrier Jul 10 04:56:09.937205 systemd-networkd[798]: Enumeration completed Jul 10 04:56:09.937289 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 04:56:09.937911 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 04:56:09.937915 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 04:56:09.938753 systemd-networkd[798]: eth0: Link UP Jul 10 04:56:09.938756 systemd-networkd[798]: eth0: Gained carrier Jul 10 04:56:09.938765 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 04:56:09.938797 systemd[1]: Reached target network.target - Network. Jul 10 04:56:09.951016 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 04:56:09.962180 ignition[697]: Ignition 2.21.0 Jul 10 04:56:09.962193 ignition[697]: Stage: fetch-offline Jul 10 04:56:09.962224 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:09.962231 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:09.962404 ignition[697]: parsed url from cmdline: "" Jul 10 04:56:09.962407 ignition[697]: no config URL provided Jul 10 04:56:09.962411 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 04:56:09.962418 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 10 04:56:09.962435 ignition[697]: op(1): [started] loading QEMU firmware config module Jul 10 04:56:09.962441 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 04:56:09.966720 ignition[697]: op(1): [finished] loading QEMU firmware config module Jul 10 04:56:10.004597 ignition[697]: parsing config with SHA512: afbb6ed75527f0ba606058550cfa3b68e9c7ba32e039fd2808e72212ae56f33b099c4d22f529d493ded6d5555e17effc9dec3672ca601b7c9ecf5f7ef86cad67 Jul 10 04:56:10.008543 unknown[697]: fetched base config from "system" Jul 10 04:56:10.008556 unknown[697]: fetched user config from "qemu" Jul 10 04:56:10.009027 ignition[697]: fetch-offline: fetch-offline passed Jul 10 04:56:10.009082 ignition[697]: Ignition finished successfully Jul 10 04:56:10.010696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 04:56:10.011785 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 04:56:10.012522 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 04:56:10.042834 ignition[812]: Ignition 2.21.0 Jul 10 04:56:10.042852 ignition[812]: Stage: kargs Jul 10 04:56:10.043342 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:10.043352 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:10.044130 ignition[812]: kargs: kargs passed Jul 10 04:56:10.044177 ignition[812]: Ignition finished successfully Jul 10 04:56:10.047511 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 04:56:10.049243 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
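The DHCPv4 line above gives the leased address with its prefix length and the gateway; a quick standard-library check (values taken from the log) confirms the gateway sits inside the leased /16:

# Sanity-check the DHCPv4 lease reported by systemd-networkd above.
import ipaddress
iface = ipaddress.ip_interface("10.0.0.20/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is on-link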
Jul 10 04:56:10.070799 ignition[820]: Ignition 2.21.0 Jul 10 04:56:10.070820 ignition[820]: Stage: disks Jul 10 04:56:10.071042 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:10.071053 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:10.072536 ignition[820]: disks: disks passed Jul 10 04:56:10.072589 ignition[820]: Ignition finished successfully Jul 10 04:56:10.074133 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 04:56:10.075468 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 04:56:10.076664 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 04:56:10.078136 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 04:56:10.079500 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 04:56:10.080711 systemd[1]: Reached target basic.target - Basic System. Jul 10 04:56:10.082650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 04:56:10.106452 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 04:56:10.110201 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 04:56:10.112298 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 04:56:10.178007 kernel: EXT4-fs (vda9): mounted filesystem a024d28f-1b04-461c-b6e8-a134f932c32a r/w with ordered data mode. Quota mode: none. Jul 10 04:56:10.178381 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 04:56:10.179389 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 04:56:10.181271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 04:56:10.182593 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 04:56:10.183349 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 04:56:10.183386 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 04:56:10.183412 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 04:56:10.200301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 04:56:10.202400 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 04:56:10.206797 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 10 04:56:10.206818 kernel: BTRFS info (device vda6): first mount of filesystem 7074a0c9-061c-495f-8dec-b696a838eae0 Jul 10 04:56:10.206833 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 04:56:10.206843 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 04:56:10.210521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 04:56:10.248482 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 04:56:10.251691 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Jul 10 04:56:10.254527 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 04:56:10.258194 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 04:56:10.330088 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 04:56:10.331796 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 10 04:56:10.334165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 04:56:10.361014 kernel: BTRFS info (device vda6): last unmount of filesystem 7074a0c9-061c-495f-8dec-b696a838eae0 Jul 10 04:56:10.379038 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 04:56:10.389576 ignition[953]: INFO : Ignition 2.21.0 Jul 10 04:56:10.389576 ignition[953]: INFO : Stage: mount Jul 10 04:56:10.391201 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:10.391201 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:10.394018 ignition[953]: INFO : mount: mount passed Jul 10 04:56:10.394018 ignition[953]: INFO : Ignition finished successfully Jul 10 04:56:10.394575 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 04:56:10.396784 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 04:56:10.786047 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 04:56:10.787446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 04:56:10.813604 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 10 04:56:10.813636 kernel: BTRFS info (device vda6): first mount of filesystem 7074a0c9-061c-495f-8dec-b696a838eae0 Jul 10 04:56:10.813647 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 04:56:10.814985 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 04:56:10.817249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 04:56:10.846191 ignition[982]: INFO : Ignition 2.21.0 Jul 10 04:56:10.846191 ignition[982]: INFO : Stage: files Jul 10 04:56:10.847418 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:10.847418 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:10.849037 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 10 04:56:10.849942 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 04:56:10.849942 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 04:56:10.852407 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 04:56:10.853511 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 04:56:10.854639 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 04:56:10.854521 unknown[982]: wrote ssh authorized keys file for user: core Jul 10 04:56:10.857295 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 10 04:56:10.858602 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 10 04:56:10.951131 systemd-networkd[798]: eth0: Gained IPv6LL Jul 10 04:56:10.984954 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 04:56:11.148854 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 10 04:56:11.148854 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 04:56:11.151695 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 04:56:11.162147 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 04:56:11.162147 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 04:56:11.162147 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 04:56:11.162147 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 10 04:56:11.720923 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 04:56:14.172964 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 04:56:14.172964 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 04:56:14.176031 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 04:56:14.177444 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 10 04:56:14.177444 ignition[982]: 
INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 04:56:14.192830 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 04:56:14.195683 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 04:56:14.198088 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 04:56:14.198088 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 10 04:56:14.198088 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 04:56:14.198088 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 04:56:14.198088 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 04:56:14.198088 ignition[982]: INFO : files: files passed Jul 10 04:56:14.198088 ignition[982]: INFO : Ignition finished successfully Jul 10 04:56:14.201035 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 04:56:14.203547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 04:56:14.205040 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 04:56:14.220267 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 04:56:14.220361 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 04:56:14.223542 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 04:56:14.224505 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 04:56:14.224505 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 04:56:14.226901 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 04:56:14.225948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 04:56:14.228273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 04:56:14.230572 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 04:56:14.259467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 04:56:14.259575 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 04:56:14.261167 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 04:56:14.262608 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 04:56:14.263932 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 04:56:14.264670 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 04:56:14.296011 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 04:56:14.298051 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 04:56:14.316387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jul 10 04:56:14.317299 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 04:56:14.318773 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 04:56:14.320061 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 04:56:14.320167 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 04:56:14.322039 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 04:56:14.323465 systemd[1]: Stopped target basic.target - Basic System. Jul 10 04:56:14.324639 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 04:56:14.325839 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 04:56:14.327330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 04:56:14.328972 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 04:56:14.330408 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 04:56:14.331768 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 04:56:14.333154 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 04:56:14.334541 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 04:56:14.335759 systemd[1]: Stopped target swap.target - Swaps. Jul 10 04:56:14.336826 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 04:56:14.336995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 04:56:14.338567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 04:56:14.339410 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 04:56:14.340855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 04:56:14.344066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 04:56:14.345013 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 04:56:14.345121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 04:56:14.347203 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 04:56:14.347319 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 04:56:14.348892 systemd[1]: Stopped target paths.target - Path Units. Jul 10 04:56:14.350001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 04:56:14.351302 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 04:56:14.352318 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 04:56:14.353602 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 04:56:14.355283 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 04:56:14.355372 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 04:56:14.356499 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 04:56:14.356572 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 04:56:14.357713 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 04:56:14.357828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 04:56:14.359023 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 04:56:14.359126 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 10 04:56:14.361226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 04:56:14.362803 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 04:56:14.363489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 04:56:14.363602 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 04:56:14.365118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 04:56:14.365214 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 04:56:14.369566 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 04:56:14.375095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 04:56:14.383751 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 04:56:14.387618 ignition[1037]: INFO : Ignition 2.21.0 Jul 10 04:56:14.387618 ignition[1037]: INFO : Stage: umount Jul 10 04:56:14.389693 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 04:56:14.389693 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 04:56:14.389693 ignition[1037]: INFO : umount: umount passed Jul 10 04:56:14.389693 ignition[1037]: INFO : Ignition finished successfully Jul 10 04:56:14.387751 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 04:56:14.387841 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 04:56:14.390938 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 04:56:14.391112 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 04:56:14.392378 systemd[1]: Stopped target network.target - Network. Jul 10 04:56:14.393557 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 04:56:14.393612 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 04:56:14.395843 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 04:56:14.395901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 04:56:14.397223 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 04:56:14.397270 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 04:56:14.399129 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 04:56:14.399174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 04:56:14.400354 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 04:56:14.400399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 04:56:14.401662 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 04:56:14.402866 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 04:56:14.407634 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 04:56:14.407723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 04:56:14.411548 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 04:56:14.411749 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 04:56:14.411837 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 04:56:14.414768 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 04:56:14.415234 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 04:56:14.416139 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 10 04:56:14.416174 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 04:56:14.418303 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 04:56:14.419608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 04:56:14.419657 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 04:56:14.421280 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 04:56:14.421323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 04:56:14.423307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 04:56:14.423346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 04:56:14.424815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 04:56:14.424863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 04:56:14.429539 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 04:56:14.433402 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 04:56:14.433461 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 04:56:14.445551 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 04:56:14.448104 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 04:56:14.449295 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 04:56:14.449328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 04:56:14.450851 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 04:56:14.450882 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 04:56:14.452261 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 04:56:14.452307 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 04:56:14.454529 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 04:56:14.454576 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 04:56:14.456581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 04:56:14.456634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 04:56:14.459534 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 04:56:14.460917 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 04:56:14.460987 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 04:56:14.463774 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 04:56:14.463818 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 04:56:14.466192 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 04:56:14.466232 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 04:56:14.468533 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 04:56:14.468571 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 04:56:14.470247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 10 04:56:14.470287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 04:56:14.473313 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 10 04:56:14.473359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 10 04:56:14.473386 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 04:56:14.473418 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 04:56:14.473692 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 04:56:14.475006 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 04:56:14.476500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 04:56:14.476574 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 04:56:14.478624 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 04:56:14.480331 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 04:56:14.495417 systemd[1]: Switching root. Jul 10 04:56:14.525889 systemd-journald[245]: Journal stopped Jul 10 04:56:15.266796 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Jul 10 04:56:15.266922 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 04:56:15.266939 kernel: SELinux: policy capability open_perms=1 Jul 10 04:56:15.266948 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 04:56:15.266965 kernel: SELinux: policy capability always_check_network=0 Jul 10 04:56:15.270076 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 04:56:15.270113 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 04:56:15.270123 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 04:56:15.270132 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 04:56:15.270141 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 04:56:15.270150 kernel: audit: type=1403 audit(1752123374.705:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 04:56:15.270165 systemd[1]: Successfully loaded SELinux policy in 62.330ms. Jul 10 04:56:15.270182 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.948ms. Jul 10 04:56:15.270196 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 04:56:15.270206 systemd[1]: Detected virtualization kvm. Jul 10 04:56:15.270219 systemd[1]: Detected architecture arm64. Jul 10 04:56:15.270229 systemd[1]: Detected first boot. Jul 10 04:56:15.270239 systemd[1]: Initializing machine ID from VM UUID. Jul 10 04:56:15.270250 zram_generator::config[1081]: No configuration found. Jul 10 04:56:15.270261 kernel: NET: Registered PF_VSOCK protocol family Jul 10 04:56:15.270274 systemd[1]: Populated /etc with preset unit settings. Jul 10 04:56:15.270285 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 04:56:15.270296 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 04:56:15.270307 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
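
Note: at this point the initramfs has handed over to the real root: the journal restarts, PID 1 loads the SELinux policy, and systemd reports the KVM guest, arm64 architecture and first boot. For orientation, the same facts can be checked from a shell on the running host with standard tooling (generic commands, not taken from this log; getenforce requires the SELinux userland):

    systemd-detect-virt               # prints "kvm" for this kind of guest
    uname -m                          # prints "aarch64"
    getenforce                        # current SELinux enforcement mode
    journalctl -b -k | grep SELinux   # replay the kernel's SELinux capability lines
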
Jul 10 04:56:15.270316 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 04:56:15.270327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 04:56:15.270337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 04:56:15.270347 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 04:56:15.270357 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 04:56:15.270367 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 04:56:15.270378 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 04:56:15.270388 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 04:56:15.270399 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 04:56:15.270408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 04:56:15.270419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 04:56:15.270429 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 04:56:15.270439 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 04:56:15.270449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 04:56:15.270459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 04:56:15.270469 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 04:56:15.270481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 04:56:15.270491 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 04:56:15.270502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 04:56:15.270515 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 04:56:15.270525 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 04:56:15.270535 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 04:56:15.270545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 04:56:15.270555 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 04:56:15.270566 systemd[1]: Reached target slices.target - Slice Units. Jul 10 04:56:15.270576 systemd[1]: Reached target swap.target - Swaps. Jul 10 04:56:15.270586 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 04:56:15.270596 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 04:56:15.270606 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 04:56:15.270617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 04:56:15.270627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 04:56:15.270637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 04:56:15.270647 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 04:56:15.270656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 10 04:56:15.270668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 04:56:15.270678 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 04:56:15.270690 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 04:56:15.270699 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 04:56:15.270709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 04:56:15.270720 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 04:56:15.270730 systemd[1]: Reached target machines.target - Containers. Jul 10 04:56:15.270740 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 04:56:15.270751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 04:56:15.270761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 04:56:15.270771 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 04:56:15.270780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 04:56:15.270790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 04:56:15.270800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 04:56:15.270810 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 04:56:15.270821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 04:56:15.270831 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 04:56:15.270843 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 04:56:15.270853 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 04:56:15.270863 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 04:56:15.270873 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 04:56:15.270895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 04:56:15.270908 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 04:56:15.270919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 04:56:15.270929 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 04:56:15.270941 kernel: fuse: init (API version 7.41) Jul 10 04:56:15.270951 kernel: loop: module loaded Jul 10 04:56:15.270960 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 04:56:15.270970 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 04:56:15.270989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 04:56:15.271068 systemd-journald[1153]: Collecting audit messages is disabled. Jul 10 04:56:15.271095 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 04:56:15.271106 systemd[1]: Stopped verity-setup.service. 
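
Note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts above all use systemd's modprobe@.service template, which loads one kernel module per instance name. A simplified paraphrase of that template follows (the upstream unit carries a few more directives; this is a sketch, not a copy from this image):

    # modprobe@.service (simplified)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/modprobe -abq %i
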
Jul 10 04:56:15.271120 systemd-journald[1153]: Journal started Jul 10 04:56:15.271141 systemd-journald[1153]: Runtime Journal (/run/log/journal/79db761e74d846c28e8ec9f1b02b3763) is 6M, max 48.5M, 42.4M free. Jul 10 04:56:15.076317 systemd[1]: Queued start job for default target multi-user.target. Jul 10 04:56:15.097942 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 04:56:15.098346 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 04:56:15.274487 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 04:56:15.274238 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 04:56:15.275357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 04:56:15.278735 kernel: ACPI: bus type drm_connector registered Jul 10 04:56:15.277507 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 04:56:15.278454 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 04:56:15.279388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 04:56:15.280339 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 04:56:15.281332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 04:56:15.282551 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 04:56:15.283859 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 04:56:15.284077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 04:56:15.285267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 04:56:15.285418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 04:56:15.286489 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 04:56:15.286654 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 04:56:15.287678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 04:56:15.287831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 04:56:15.289160 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 04:56:15.289311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 04:56:15.290579 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 04:56:15.290727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 04:56:15.291841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 04:56:15.293188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 04:56:15.294376 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 04:56:15.295574 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 04:56:15.307267 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 04:56:15.309432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 04:56:15.311295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 04:56:15.312271 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 04:56:15.312298 systemd[1]: Reached target local-fs.target - Local File Systems. 
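
Note: the "Runtime Journal ... is 6M, max 48.5M, 42.4M free" line above describes the volatile journal under /run/log/journal; the persistent journal under /var/log/journal appears with its own limits a few lines further down. Both caps can be pinned explicitly with a journald drop-in, for example (illustrative values, not this machine's settings):

    # /etc/systemd/journald.conf.d/10-size.conf
    [Journal]
    RuntimeMaxUse=48M    # cap for the volatile journal in /run/log/journal
    SystemMaxUse=200M    # cap for the persistent journal in /var/log/journal
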
Jul 10 04:56:15.313943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 04:56:15.321648 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 04:56:15.325241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 04:56:15.327642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 04:56:15.331139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 04:56:15.332152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 04:56:15.335116 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 04:56:15.336057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 04:56:15.337257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 04:56:15.339237 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 04:56:15.340467 systemd-journald[1153]: Time spent on flushing to /var/log/journal/79db761e74d846c28e8ec9f1b02b3763 is 13.566ms for 887 entries. Jul 10 04:56:15.340467 systemd-journald[1153]: System Journal (/var/log/journal/79db761e74d846c28e8ec9f1b02b3763) is 8M, max 195.6M, 187.6M free. Jul 10 04:56:15.359606 systemd-journald[1153]: Received client request to flush runtime journal. Jul 10 04:56:15.341765 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 04:56:15.348169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 04:56:15.350599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 04:56:15.351938 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 04:56:15.363108 kernel: loop0: detected capacity change from 0 to 105936 Jul 10 04:56:15.363218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 04:56:15.366632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 04:56:15.368394 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 04:56:15.370466 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 04:56:15.383571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 04:56:15.382129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 04:56:15.386774 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jul 10 04:56:15.386792 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jul 10 04:56:15.390384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 04:56:15.393150 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 04:56:15.411196 kernel: loop1: detected capacity change from 0 to 134232 Jul 10 04:56:15.415686 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 04:56:15.424629 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 04:56:15.428510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 10 04:56:15.441045 kernel: loop2: detected capacity change from 0 to 211168 Jul 10 04:56:15.450128 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 10 04:56:15.450149 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 10 04:56:15.454061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 04:56:15.476003 kernel: loop3: detected capacity change from 0 to 105936 Jul 10 04:56:15.481005 kernel: loop4: detected capacity change from 0 to 134232 Jul 10 04:56:15.488008 kernel: loop5: detected capacity change from 0 to 211168 Jul 10 04:56:15.491593 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 04:56:15.491966 (sd-merge)[1223]: Merged extensions into '/usr'. Jul 10 04:56:15.495088 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 04:56:15.495105 systemd[1]: Reloading... Jul 10 04:56:15.551176 zram_generator::config[1250]: No configuration found. Jul 10 04:56:15.659501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 04:56:15.660170 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 04:56:15.723103 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 04:56:15.723539 systemd[1]: Reloading finished in 228 ms. Jul 10 04:56:15.757757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 04:56:15.760051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 04:56:15.770172 systemd[1]: Starting ensure-sysext.service... Jul 10 04:56:15.771759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 04:56:15.784643 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 04:56:15.784677 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 04:56:15.784929 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 04:56:15.785138 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 04:56:15.785722 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 04:56:15.785926 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 10 04:56:15.785990 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 10 04:56:15.788661 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 04:56:15.788675 systemd-tmpfiles[1286]: Skipping /boot Jul 10 04:56:15.793648 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 10 04:56:15.793662 systemd[1]: Reloading... Jul 10 04:56:15.794264 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 04:56:15.794282 systemd-tmpfiles[1286]: Skipping /boot Jul 10 04:56:15.835098 zram_generator::config[1313]: No configuration found. 
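
Note: the "(sd-merge)" messages above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why ldconfig and a unit reload run immediately afterwards. On a booted host the merge state can be inspected with the stock sysext tooling:

    systemd-sysext status     # lists the currently merged extension images
    systemd-sysext refresh    # unmerge and re-merge after images are added or removed
    ls -l /etc/extensions     # shows the kubernetes.raw symlink written by Ignition earlier
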
Jul 10 04:56:15.899152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 04:56:15.959799 systemd[1]: Reloading finished in 165 ms. Jul 10 04:56:15.982397 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 04:56:15.983604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 04:56:15.999811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 04:56:16.001921 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 04:56:16.003798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 04:56:16.006740 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 04:56:16.016506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 04:56:16.021111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 04:56:16.024423 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 04:56:16.028442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 04:56:16.032224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 04:56:16.034257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 04:56:16.036551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 04:56:16.037467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 04:56:16.037563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 04:56:16.043745 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 04:56:16.048154 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 04:56:16.052176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 04:56:16.053221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 04:56:16.054843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 04:56:16.056963 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 10 04:56:16.057143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 04:56:16.058540 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 04:56:16.058772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 04:56:16.065644 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 04:56:16.067067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 04:56:16.069091 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 04:56:16.072591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
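
Note: the "Duplicate line for path ..." warnings from systemd-tmpfiles above are emitted when two tmpfiles.d fragments declare the same path; the duplicate entry is ignored. Each tmpfiles.d entry uses a fixed column layout, for example (an illustrative entry, not one of the conflicting lines on this host):

    # /etc/tmpfiles.d/example.conf: Type Path Mode User Group Age Argument
    d /var/log/example 0755 root root 14d -
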
Jul 10 04:56:16.073655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 04:56:16.073867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 04:56:16.084138 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 04:56:16.086757 augenrules[1386]: No rules Jul 10 04:56:16.088163 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 04:56:16.089660 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 04:56:16.089820 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 04:56:16.090988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 04:56:16.093232 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 04:56:16.094426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 04:56:16.094571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 04:56:16.095915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 04:56:16.096058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 04:56:16.097298 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 04:56:16.097437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 04:56:16.105937 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 04:56:16.115158 systemd[1]: Finished ensure-sysext.service. Jul 10 04:56:16.119092 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 04:56:16.119998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 04:56:16.124190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 04:56:16.144307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 04:56:16.148152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 04:56:16.150282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 04:56:16.151230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 04:56:16.151277 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 04:56:16.153262 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 04:56:16.159688 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 04:56:16.161009 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 04:56:16.162744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 04:56:16.162926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 04:56:16.166431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 04:56:16.166582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 04:56:16.168060 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 04:56:16.168231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 04:56:16.169505 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 04:56:16.169651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 04:56:16.180273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 04:56:16.180336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 04:56:16.183358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 04:56:16.187191 augenrules[1428]: /sbin/augenrules: No change Jul 10 04:56:16.188835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 04:56:16.197360 augenrules[1467]: No rules Jul 10 04:56:16.200552 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 04:56:16.206260 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 04:56:16.208084 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 04:56:16.220089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 04:56:16.302947 systemd-resolved[1352]: Positive Trust Anchors: Jul 10 04:56:16.302968 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 04:56:16.303013 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 04:56:16.306540 systemd-networkd[1443]: lo: Link UP Jul 10 04:56:16.306550 systemd-networkd[1443]: lo: Gained carrier Jul 10 04:56:16.307341 systemd-networkd[1443]: Enumeration completed Jul 10 04:56:16.307443 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 04:56:16.307759 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 04:56:16.307762 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 04:56:16.308847 systemd-networkd[1443]: eth0: Link UP Jul 10 04:56:16.308971 systemd-networkd[1443]: eth0: Gained carrier Jul 10 04:56:16.309079 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 04:56:16.309806 systemd-resolved[1352]: Defaulting to hostname 'linux'. Jul 10 04:56:16.311185 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
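
Note: the systemd-networkd lines above show eth0 being matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and brought up with DHCP (the DHCPv4 lease for 10.0.0.20/16 appears just below). A minimal .network file with the same effect would look roughly like this (a sketch under an assumed file name, not the actual contents of zz-default.network):

    # /etc/systemd/network/20-dhcp.network
    [Match]
    Name=eth*

    [Network]
    DHCP=yes
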
Jul 10 04:56:16.313017 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 04:56:16.314026 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 04:56:16.316121 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 04:56:16.317004 systemd[1]: Reached target network.target - Network. Jul 10 04:56:16.317660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 04:56:16.318578 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 04:56:16.319420 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 04:56:16.322110 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 04:56:16.323289 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 04:56:16.324217 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 04:56:16.324245 systemd[1]: Reached target paths.target - Path Units. Jul 10 04:56:16.324939 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 04:56:16.326380 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 04:56:16.328169 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 04:56:16.329355 systemd[1]: Reached target timers.target - Timer Units. Jul 10 04:56:16.332052 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 04:56:16.332198 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 04:56:16.334288 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 04:56:16.337297 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 04:56:16.338362 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jul 10 04:56:16.339093 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 04:56:16.339146 systemd-timesyncd[1445]: Initial clock synchronization to Thu 2025-07-10 04:56:16.381656 UTC. Jul 10 04:56:16.339439 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 04:56:16.340666 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 04:56:16.345464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 04:56:16.346556 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 04:56:16.348192 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 04:56:16.355969 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 04:56:16.356706 systemd[1]: Reached target basic.target - Basic System. Jul 10 04:56:16.357448 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 04:56:16.357478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 04:56:16.358426 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 04:56:16.360126 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
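
Note: systemd-timesyncd above synchronizes against 10.0.0.1:123 and performs the initial clock step once the network is up. Time servers can also be pinned explicitly in a timesyncd drop-in instead of relying on the default or network-provided list (illustrative values):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=0.pool.ntp.org 1.pool.ntp.org
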
Jul 10 04:56:16.361720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 04:56:16.372388 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 04:56:16.374199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 04:56:16.375023 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 04:56:16.376067 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 04:56:16.379086 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 04:56:16.380677 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 04:56:16.382900 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 04:56:16.384569 jq[1499]: false Jul 10 04:56:16.385759 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 04:56:16.388224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 04:56:16.389925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 04:56:16.390366 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 04:56:16.395313 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 04:56:16.397014 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 04:56:16.399927 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 04:56:16.402595 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 04:56:16.404700 extend-filesystems[1500]: Found /dev/vda6 Jul 10 04:56:16.406643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 04:56:16.406825 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 04:56:16.409391 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 04:56:16.413836 jq[1513]: true Jul 10 04:56:16.414129 extend-filesystems[1500]: Found /dev/vda9 Jul 10 04:56:16.414129 extend-filesystems[1500]: Checking size of /dev/vda9 Jul 10 04:56:16.411105 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 04:56:16.414843 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 04:56:16.415119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 04:56:16.440650 jq[1530]: true Jul 10 04:56:16.445624 tar[1522]: linux-arm64/LICENSE Jul 10 04:56:16.445624 tar[1522]: linux-arm64/helm Jul 10 04:56:16.454125 extend-filesystems[1500]: Resized partition /dev/vda9 Jul 10 04:56:16.457700 extend-filesystems[1545]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 04:56:16.457601 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 04:56:16.466106 update_engine[1512]: I20250710 04:56:16.457964 1512 main.cc:92] Flatcar Update Engine starting Jul 10 04:56:16.464162 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 04:56:16.464343 systemd-logind[1509]: New seat seat0. 
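
Note: containerd has just been launched above, and it dumps its full CRI runtime configuration a little further down; the detail most relevant on a systemd host is SystemdCgroup=true for the runc runtime. Expressed in containerd's own config file, that corresponds roughly to the fragment below (a sketch for config schema version 2, which the "Configuration migrated from version 2" message later suggests is what ships on disk; not a copy of the /usr/share/containerd/config.toml this image reads):

    # config.toml fragment, containerd config schema version 2
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
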
Jul 10 04:56:16.464861 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 04:56:16.470055 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 04:56:16.493895 dbus-daemon[1497]: [system] SELinux support is enabled Jul 10 04:56:16.494191 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 04:56:16.497780 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 04:56:16.497821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 04:56:16.498814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 04:56:16.498839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 04:56:16.503871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 04:56:16.510999 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 04:56:16.512219 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 04:56:16.516083 systemd[1]: Started update-engine.service - Update Engine. Jul 10 04:56:16.516382 update_engine[1512]: I20250710 04:56:16.516149 1512 update_check_scheduler.cc:74] Next update check in 5m6s Jul 10 04:56:16.527321 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 04:56:16.602645 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 04:56:16.627691 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 04:56:16.627691 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 04:56:16.627691 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 04:56:16.631801 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Jul 10 04:56:16.629156 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 04:56:16.630162 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
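
Note: the extend-filesystems and EXT4 lines above record the root filesystem on /dev/vda9 being grown online from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB). Done by hand, the equivalent steps would be approximately the following (a sketch; growpart comes from cloud-utils and its presence is assumed, resize2fs 1.47.2 is the version the log shows):

    growpart /dev/vda 9     # extend partition 9 to the end of the disk
    resize2fs /dev/vda9     # online-resize the mounted ext4 filesystem
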
Jul 10 04:56:16.695343 containerd[1540]: time="2025-07-10T04:56:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 04:56:16.696165 containerd[1540]: time="2025-07-10T04:56:16.696131160Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711658480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.2µs" Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711697320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711715360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711860760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711875200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711909360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712000 containerd[1540]: time="2025-07-10T04:56:16.711958840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712197 containerd[1540]: time="2025-07-10T04:56:16.711970160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712548 containerd[1540]: time="2025-07-10T04:56:16.712517080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712624 containerd[1540]: time="2025-07-10T04:56:16.712609320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712675 containerd[1540]: time="2025-07-10T04:56:16.712661400Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712721 containerd[1540]: time="2025-07-10T04:56:16.712708480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 04:56:16.712867 containerd[1540]: time="2025-07-10T04:56:16.712849280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 04:56:16.713166 containerd[1540]: time="2025-07-10T04:56:16.713142280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 04:56:16.713249 containerd[1540]: time="2025-07-10T04:56:16.713233960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 10 04:56:16.713295 containerd[1540]: time="2025-07-10T04:56:16.713282640Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 04:56:16.713375 containerd[1540]: time="2025-07-10T04:56:16.713360440Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 04:56:16.713676 containerd[1540]: time="2025-07-10T04:56:16.713656160Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 04:56:16.713796 containerd[1540]: time="2025-07-10T04:56:16.713780000Z" level=info msg="metadata content store policy set" policy=shared Jul 10 04:56:16.814518 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 04:56:16.834092 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 04:56:16.836461 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 04:56:16.851229 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 04:56:16.851440 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 04:56:16.852907 tar[1522]: linux-arm64/README.md Jul 10 04:56:16.853905 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 04:56:16.871799 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 04:56:16.874935 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 04:56:16.877664 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 04:56:16.879639 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 04:56:16.880672 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 04:56:16.933455 containerd[1540]: time="2025-07-10T04:56:16.933360880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 04:56:16.933455 containerd[1540]: time="2025-07-10T04:56:16.933434360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 04:56:16.933455 containerd[1540]: time="2025-07-10T04:56:16.933457240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933470480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933483320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933494400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933507440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933519880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933547520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933564480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933574360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 04:56:16.933695 containerd[1540]: time="2025-07-10T04:56:16.933589360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933742320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933763680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933777560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933788320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933798480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933809200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933820440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933830520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 04:56:16.933856 containerd[1540]: time="2025-07-10T04:56:16.933842320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 04:56:16.934035 containerd[1540]: time="2025-07-10T04:56:16.933860080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 04:56:16.934035 containerd[1540]: time="2025-07-10T04:56:16.933871280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 04:56:16.934256 containerd[1540]: time="2025-07-10T04:56:16.934214440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 04:56:16.934256 containerd[1540]: time="2025-07-10T04:56:16.934240800Z" level=info msg="Start snapshots syncer" Jul 10 04:56:16.934256 containerd[1540]: time="2025-07-10T04:56:16.934264960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 04:56:16.934580 containerd[1540]: time="2025-07-10T04:56:16.934519360Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 04:56:16.934862 containerd[1540]: time="2025-07-10T04:56:16.934585440Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 04:56:16.935440 containerd[1540]: time="2025-07-10T04:56:16.935413320Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 04:56:16.935570 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935552400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935578840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935590440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935602760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935619800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935632560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935643200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935668200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer 
type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935680360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 04:56:16.935772 containerd[1540]: time="2025-07-10T04:56:16.935691440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 04:56:16.936479 containerd[1540]: time="2025-07-10T04:56:16.936450920Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936486880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936497960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936509320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936517080Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936527160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 04:56:16.936537 containerd[1540]: time="2025-07-10T04:56:16.936537560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 04:56:16.936640 containerd[1540]: time="2025-07-10T04:56:16.936619120Z" level=info msg="runtime interface created" Jul 10 04:56:16.936640 containerd[1540]: time="2025-07-10T04:56:16.936624480Z" level=info msg="created NRI interface" Jul 10 04:56:16.936640 containerd[1540]: time="2025-07-10T04:56:16.936633880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 04:56:16.936690 containerd[1540]: time="2025-07-10T04:56:16.936647120Z" level=info msg="Connect containerd service" Jul 10 04:56:16.936690 containerd[1540]: time="2025-07-10T04:56:16.936678640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 04:56:16.937250 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 04:56:16.937993 containerd[1540]: time="2025-07-10T04:56:16.937691560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 04:56:16.939325 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
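The "no network config found in /etc/cni/net.d: cni plugin not initialized" error above is expected at this stage: the pod network add-on that normally installs a CNI config has not run yet, and containerd's CRI plugin simply reports that the directory is empty. For orientation only, a minimal bridge network config that the CRI plugin would accept looks roughly like the sketch below; the file name, network name, and subnet are illustrative assumptions, and the bridge, host-local, and portmap plugin binaries must already exist under the /opt/cni/bin binDir shown in the CRI config above. A real cluster gets an equivalent file from its network add-on, and the cni conf syncer picks it up without restarting containerd.

    # hypothetical minimal CNI config, not taken from this host
    cat <<'EOF' >/etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF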
Jul 10 04:56:17.027522 containerd[1540]: time="2025-07-10T04:56:17.027480353Z" level=info msg="Start subscribing containerd event" Jul 10 04:56:17.027626 containerd[1540]: time="2025-07-10T04:56:17.027540433Z" level=info msg="Start recovering state" Jul 10 04:56:17.027671 containerd[1540]: time="2025-07-10T04:56:17.027637772Z" level=info msg="Start event monitor" Jul 10 04:56:17.027671 containerd[1540]: time="2025-07-10T04:56:17.027652009Z" level=info msg="Start cni network conf syncer for default" Jul 10 04:56:17.027671 containerd[1540]: time="2025-07-10T04:56:17.027659309Z" level=info msg="Start streaming server" Jul 10 04:56:17.027671 containerd[1540]: time="2025-07-10T04:56:17.027667210Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 04:56:17.027734 containerd[1540]: time="2025-07-10T04:56:17.027674309Z" level=info msg="runtime interface starting up..." Jul 10 04:56:17.027734 containerd[1540]: time="2025-07-10T04:56:17.027681247Z" level=info msg="starting plugins..." Jul 10 04:56:17.027734 containerd[1540]: time="2025-07-10T04:56:17.027694162Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 04:56:17.027938 containerd[1540]: time="2025-07-10T04:56:17.027770003Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 04:56:17.027938 containerd[1540]: time="2025-07-10T04:56:17.027821099Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 04:56:17.027938 containerd[1540]: time="2025-07-10T04:56:17.027872435Z" level=info msg="containerd successfully booted in 0.333207s" Jul 10 04:56:17.028528 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 04:56:18.119206 systemd-networkd[1443]: eth0: Gained IPv6LL Jul 10 04:56:18.121495 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 04:56:18.123104 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 04:56:18.128226 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 04:56:18.130444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:18.133197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 04:56:18.154914 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 04:56:18.155170 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 04:56:18.156599 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 04:56:18.159776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 04:56:18.673460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:18.674787 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 04:56:18.678232 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 04:56:18.680128 systemd[1]: Startup finished in 2.022s (kernel) + 7.077s (initrd) + 4.036s (userspace) = 13.136s. 
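The warnings at the start of the containerd block above ("Ignoring unknown key in TOML ... key=subreaper" and "Configuration migrated from version 2") mean the shipped /usr/share/containerd/config.toml is still in the version-2 format and is migrated in memory on every start. The log itself suggests the fix; a hedged sketch of the invocation (flag placement and output handling assumed from containerd 2.x's CLI):

    # Print the active configuration migrated to the current (version 3) schema.
    # -c points at the file this image ships; the default would be /etc/containerd/config.toml.
    containerd -c /usr/share/containerd/config.toml config migrate > /tmp/config.v3.toml
    # review /tmp/config.v3.toml, then install it as the config the containerd service loads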
Jul 10 04:56:19.074902 kubelet[1638]: E0710 04:56:19.074789 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 04:56:19.077567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 04:56:19.077695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 04:56:19.078050 systemd[1]: kubelet.service: Consumed 800ms CPU time, 258.4M memory peak. Jul 10 04:56:20.091593 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 04:56:20.092716 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:57748.service - OpenSSH per-connection server daemon (10.0.0.1:57748). Jul 10 04:56:20.180847 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 57748 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:20.182496 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:20.188266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 04:56:20.189118 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 04:56:20.195316 systemd-logind[1509]: New session 1 of user core. Jul 10 04:56:20.221894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 04:56:20.224053 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 04:56:20.242966 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 04:56:20.244921 systemd-logind[1509]: New session c1 of user core. Jul 10 04:56:20.336454 systemd[1657]: Queued start job for default target default.target. Jul 10 04:56:20.355899 systemd[1657]: Created slice app.slice - User Application Slice. Jul 10 04:56:20.355929 systemd[1657]: Reached target paths.target - Paths. Jul 10 04:56:20.355967 systemd[1657]: Reached target timers.target - Timers. Jul 10 04:56:20.357163 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 04:56:20.367250 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 04:56:20.367448 systemd[1657]: Reached target sockets.target - Sockets. Jul 10 04:56:20.367543 systemd[1657]: Reached target basic.target - Basic System. Jul 10 04:56:20.367671 systemd[1657]: Reached target default.target - Main User Target. Jul 10 04:56:20.367707 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 04:56:20.367800 systemd[1657]: Startup finished in 117ms. Jul 10 04:56:20.368839 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 04:56:20.432587 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:57754.service - OpenSSH per-connection server daemon (10.0.0.1:57754). Jul 10 04:56:20.488414 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 57754 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:20.489775 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:20.494090 systemd-logind[1509]: New session 2 of user core. Jul 10 04:56:20.503151 systemd[1]: Started session-2.scope - Session 2 of User core. 
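The kubelet failure above is normal first-boot behaviour on a node provisioned this way: kubelet.service starts before /var/lib/kubelet/config.yaml exists, exits with the error shown, and systemd keeps rescheduling it (the same error repeats further down) until that file is written, typically by kubeadm init or kubeadm join. Purely for orientation, a hand-written minimal KubeletConfiguration would look roughly like this; every value is an illustrative assumption, not taken from this host:

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # must match the runtime (SystemdCgroup = true in the CRI config above)
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                   # assumed cluster DNS service IP
    EOF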
Jul 10 04:56:20.553535 sshd[1671]: Connection closed by 10.0.0.1 port 57754 Jul 10 04:56:20.553992 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Jul 10 04:56:20.565203 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:57754.service: Deactivated successfully. Jul 10 04:56:20.567350 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 04:56:20.570163 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Jul 10 04:56:20.571711 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:57768.service - OpenSSH per-connection server daemon (10.0.0.1:57768). Jul 10 04:56:20.572516 systemd-logind[1509]: Removed session 2. Jul 10 04:56:20.621420 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 57768 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:20.622912 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:20.626544 systemd-logind[1509]: New session 3 of user core. Jul 10 04:56:20.638112 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 04:56:20.686415 sshd[1680]: Connection closed by 10.0.0.1 port 57768 Jul 10 04:56:20.686851 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Jul 10 04:56:20.699807 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:57768.service: Deactivated successfully. Jul 10 04:56:20.701196 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 04:56:20.702756 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Jul 10 04:56:20.704809 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:57770.service - OpenSSH per-connection server daemon (10.0.0.1:57770). Jul 10 04:56:20.705480 systemd-logind[1509]: Removed session 3. Jul 10 04:56:20.760199 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 57770 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:20.761280 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:20.764721 systemd-logind[1509]: New session 4 of user core. Jul 10 04:56:20.780174 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 04:56:20.831360 sshd[1689]: Connection closed by 10.0.0.1 port 57770 Jul 10 04:56:20.831652 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Jul 10 04:56:20.841957 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:57770.service: Deactivated successfully. Jul 10 04:56:20.843491 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 04:56:20.844170 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Jul 10 04:56:20.846192 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:57784.service - OpenSSH per-connection server daemon (10.0.0.1:57784). Jul 10 04:56:20.847160 systemd-logind[1509]: Removed session 4. Jul 10 04:56:20.886123 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:20.887270 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:20.890837 systemd-logind[1509]: New session 5 of user core. Jul 10 04:56:20.900119 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 04:56:20.961625 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 04:56:20.961916 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 04:56:20.978855 sudo[1699]: pam_unix(sudo:session): session closed for user root Jul 10 04:56:20.981443 sshd[1698]: Connection closed by 10.0.0.1 port 57784 Jul 10 04:56:20.980640 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Jul 10 04:56:20.991954 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:57784.service: Deactivated successfully. Jul 10 04:56:20.993509 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 04:56:20.994185 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Jul 10 04:56:20.996176 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:57796.service - OpenSSH per-connection server daemon (10.0.0.1:57796). Jul 10 04:56:20.997429 systemd-logind[1509]: Removed session 5. Jul 10 04:56:21.048711 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 57796 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:21.049931 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:21.054036 systemd-logind[1509]: New session 6 of user core. Jul 10 04:56:21.066148 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 04:56:21.117310 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 04:56:21.117583 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 04:56:21.122622 sudo[1710]: pam_unix(sudo:session): session closed for user root Jul 10 04:56:21.127338 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 04:56:21.127580 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 04:56:21.136900 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 04:56:21.169273 augenrules[1732]: No rules Jul 10 04:56:21.170425 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 04:56:21.170626 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 04:56:21.172188 sudo[1709]: pam_unix(sudo:session): session closed for user root Jul 10 04:56:21.173825 sshd[1708]: Connection closed by 10.0.0.1 port 57796 Jul 10 04:56:21.173684 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 10 04:56:21.184790 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:57796.service: Deactivated successfully. Jul 10 04:56:21.186216 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 04:56:21.186858 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Jul 10 04:56:21.189008 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:57810.service - OpenSSH per-connection server daemon (10.0.0.1:57810). Jul 10 04:56:21.189513 systemd-logind[1509]: Removed session 6. Jul 10 04:56:21.247376 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 57810 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:56:21.248602 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:56:21.252748 systemd-logind[1509]: New session 7 of user core. Jul 10 04:56:21.262142 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 10 04:56:21.312482 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 04:56:21.312746 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 04:56:21.680594 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 04:56:21.693266 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 04:56:21.962197 dockerd[1765]: time="2025-07-10T04:56:21.962067731Z" level=info msg="Starting up" Jul 10 04:56:21.963177 dockerd[1765]: time="2025-07-10T04:56:21.963128951Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 04:56:21.973095 dockerd[1765]: time="2025-07-10T04:56:21.973046324Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 10 04:56:22.151331 dockerd[1765]: time="2025-07-10T04:56:22.151284356Z" level=info msg="Loading containers: start." Jul 10 04:56:22.163002 kernel: Initializing XFRM netlink socket Jul 10 04:56:22.351306 systemd-networkd[1443]: docker0: Link UP Jul 10 04:56:22.354604 dockerd[1765]: time="2025-07-10T04:56:22.354568633Z" level=info msg="Loading containers: done." Jul 10 04:56:22.370818 dockerd[1765]: time="2025-07-10T04:56:22.370772765Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 04:56:22.370926 dockerd[1765]: time="2025-07-10T04:56:22.370841938Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 10 04:56:22.370926 dockerd[1765]: time="2025-07-10T04:56:22.370913556Z" level=info msg="Initializing buildkit" Jul 10 04:56:22.390234 dockerd[1765]: time="2025-07-10T04:56:22.390196739Z" level=info msg="Completed buildkit initialization" Jul 10 04:56:22.396598 dockerd[1765]: time="2025-07-10T04:56:22.396550417Z" level=info msg="Daemon has completed initialization" Jul 10 04:56:22.396748 dockerd[1765]: time="2025-07-10T04:56:22.396642835Z" level=info msg="API listen on /run/docker.sock" Jul 10 04:56:22.396858 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 04:56:23.032421 containerd[1540]: time="2025-07-10T04:56:23.032383410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 04:56:23.682372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245289932.mount: Deactivated successfully. 
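The PullImage lines that follow are containerd's CRI plugin fetching the control-plane images into its k8s.io namespace; the var-lib-containerd-tmpmounts-*.mount units are the short-lived mounts it creates while unpacking layers. The same images can be pulled or inspected by hand through the CRI socket logged earlier, assuming crictl is installed:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.33.2
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    ctr --namespace k8s.io images ls     # containerd's own view of the same images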
Jul 10 04:56:24.997256 containerd[1540]: time="2025-07-10T04:56:24.997079701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:24.997997 containerd[1540]: time="2025-07-10T04:56:24.997910908Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 10 04:56:24.998725 containerd[1540]: time="2025-07-10T04:56:24.998695878Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:25.001709 containerd[1540]: time="2025-07-10T04:56:25.001682781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:25.003100 containerd[1540]: time="2025-07-10T04:56:25.003074510Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.970652392s" Jul 10 04:56:25.003203 containerd[1540]: time="2025-07-10T04:56:25.003187089Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 04:56:25.006689 containerd[1540]: time="2025-07-10T04:56:25.006658881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 04:56:26.361629 containerd[1540]: time="2025-07-10T04:56:26.361579324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:26.362570 containerd[1540]: time="2025-07-10T04:56:26.362539193Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 10 04:56:26.363190 containerd[1540]: time="2025-07-10T04:56:26.363129912Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:26.365918 containerd[1540]: time="2025-07-10T04:56:26.365870231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:26.366999 containerd[1540]: time="2025-07-10T04:56:26.366703351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.36001202s" Jul 10 04:56:26.366999 containerd[1540]: time="2025-07-10T04:56:26.366738844Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 04:56:26.367415 
containerd[1540]: time="2025-07-10T04:56:26.367379398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 04:56:27.677748 containerd[1540]: time="2025-07-10T04:56:27.677695661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:27.678760 containerd[1540]: time="2025-07-10T04:56:27.678569280Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 10 04:56:27.679371 containerd[1540]: time="2025-07-10T04:56:27.679326497Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:27.682229 containerd[1540]: time="2025-07-10T04:56:27.682184045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:27.683014 containerd[1540]: time="2025-07-10T04:56:27.682934973Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.315517319s" Jul 10 04:56:27.683014 containerd[1540]: time="2025-07-10T04:56:27.682966377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 04:56:27.683487 containerd[1540]: time="2025-07-10T04:56:27.683455900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 04:56:28.622700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574775525.mount: Deactivated successfully. 
Jul 10 04:56:28.986890 containerd[1540]: time="2025-07-10T04:56:28.986842677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:28.987607 containerd[1540]: time="2025-07-10T04:56:28.987541832Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 10 04:56:28.988220 containerd[1540]: time="2025-07-10T04:56:28.988176542Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:28.990205 containerd[1540]: time="2025-07-10T04:56:28.990160258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:28.990801 containerd[1540]: time="2025-07-10T04:56:28.990664518Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.307176373s" Jul 10 04:56:28.990801 containerd[1540]: time="2025-07-10T04:56:28.990697841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 04:56:28.991230 containerd[1540]: time="2025-07-10T04:56:28.991200940Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 04:56:29.137262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 04:56:29.138646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:29.266562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:29.270359 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 04:56:29.305651 kubelet[2063]: E0710 04:56:29.305600 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 04:56:29.308841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 04:56:29.308993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 04:56:29.309475 systemd[1]: kubelet.service: Consumed 140ms CPU time, 105.9M memory peak. Jul 10 04:56:29.710308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204215529.mount: Deactivated successfully. 
Jul 10 04:56:30.636194 containerd[1540]: time="2025-07-10T04:56:30.636141220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:30.636561 containerd[1540]: time="2025-07-10T04:56:30.636536515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 10 04:56:30.637403 containerd[1540]: time="2025-07-10T04:56:30.637358941Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:30.640497 containerd[1540]: time="2025-07-10T04:56:30.640458386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:30.641528 containerd[1540]: time="2025-07-10T04:56:30.641498462Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.650270849s" Jul 10 04:56:30.641623 containerd[1540]: time="2025-07-10T04:56:30.641608629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 04:56:30.642191 containerd[1540]: time="2025-07-10T04:56:30.642165790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 04:56:31.076357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590469384.mount: Deactivated successfully. 
Jul 10 04:56:31.079791 containerd[1540]: time="2025-07-10T04:56:31.079739632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 04:56:31.080435 containerd[1540]: time="2025-07-10T04:56:31.080409034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 04:56:31.081196 containerd[1540]: time="2025-07-10T04:56:31.081168933Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 04:56:31.082934 containerd[1540]: time="2025-07-10T04:56:31.082899039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 04:56:31.083877 containerd[1540]: time="2025-07-10T04:56:31.083835569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 441.641748ms" Jul 10 04:56:31.083908 containerd[1540]: time="2025-07-10T04:56:31.083873530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 04:56:31.084389 containerd[1540]: time="2025-07-10T04:56:31.084333626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 04:56:31.536033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146726698.mount: Deactivated successfully. 
Jul 10 04:56:34.150146 containerd[1540]: time="2025-07-10T04:56:34.150063413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:34.150763 containerd[1540]: time="2025-07-10T04:56:34.150724120Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 10 04:56:34.151366 containerd[1540]: time="2025-07-10T04:56:34.151334983Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:34.153969 containerd[1540]: time="2025-07-10T04:56:34.153932091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:34.155055 containerd[1540]: time="2025-07-10T04:56:34.155020779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.070616517s" Jul 10 04:56:34.155092 containerd[1540]: time="2025-07-10T04:56:34.155055810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 04:56:39.387704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 04:56:39.389458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:39.528091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:39.531308 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 04:56:39.565215 kubelet[2214]: E0710 04:56:39.565169 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 04:56:39.567969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 04:56:39.568210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 04:56:39.568748 systemd[1]: kubelet.service: Consumed 130ms CPU time, 109.7M memory peak. Jul 10 04:56:40.008578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:40.008877 systemd[1]: kubelet.service: Consumed 130ms CPU time, 109.7M memory peak. Jul 10 04:56:40.010796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:40.029588 systemd[1]: Reload requested from client PID 2227 ('systemctl') (unit session-7.scope)... Jul 10 04:56:40.029603 systemd[1]: Reloading... Jul 10 04:56:40.098016 zram_generator::config[2271]: No configuration found. Jul 10 04:56:40.285820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 04:56:40.368170 systemd[1]: Reloading finished in 338 ms. 
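The systemd reload above also surfaces a small unit-file issue: docker.socket still lists the legacy /var/run/docker.sock path, so systemd rewrites it to /run/docker.sock and asks for the unit to be updated. A drop-in override is the usual non-invasive fix; the override file name below is an assumption:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-docker-sock.conf
    [Socket]
    ListenStream=                      # clear the inherited value first
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload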
Jul 10 04:56:40.420362 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 04:56:40.420634 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 04:56:40.420946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:40.421093 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak. Jul 10 04:56:40.422502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:40.539593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:40.542804 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 04:56:40.575884 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 04:56:40.575884 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 04:56:40.575884 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 04:56:40.576169 kubelet[2316]: I0710 04:56:40.575886 2316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 04:56:41.329875 kubelet[2316]: I0710 04:56:41.329831 2316 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 04:56:41.329875 kubelet[2316]: I0710 04:56:41.329863 2316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 04:56:41.331116 kubelet[2316]: I0710 04:56:41.330232 2316 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 04:56:41.362404 kubelet[2316]: E0710 04:56:41.362343 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 04:56:41.363371 kubelet[2316]: I0710 04:56:41.363292 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 04:56:41.370950 kubelet[2316]: I0710 04:56:41.370930 2316 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 04:56:41.374294 kubelet[2316]: I0710 04:56:41.374264 2316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 04:56:41.375370 kubelet[2316]: I0710 04:56:41.375318 2316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 04:56:41.375510 kubelet[2316]: I0710 04:56:41.375366 2316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 04:56:41.375621 kubelet[2316]: I0710 04:56:41.375609 2316 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 04:56:41.375621 kubelet[2316]: I0710 04:56:41.375622 2316 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 04:56:41.376351 kubelet[2316]: I0710 04:56:41.376317 2316 state_mem.go:36] "Initialized new in-memory state store" Jul 10 04:56:41.378701 kubelet[2316]: I0710 04:56:41.378642 2316 kubelet.go:480] "Attempting to sync node with API server" Jul 10 04:56:41.378701 kubelet[2316]: I0710 04:56:41.378670 2316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 04:56:41.378701 kubelet[2316]: I0710 04:56:41.378700 2316 kubelet.go:386] "Adding apiserver pod source" Jul 10 04:56:41.380301 kubelet[2316]: I0710 04:56:41.380025 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 04:56:41.380985 kubelet[2316]: I0710 04:56:41.380953 2316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 04:56:41.382276 kubelet[2316]: E0710 04:56:41.382074 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 04:56:41.382581 kubelet[2316]: E0710 04:56:41.382536 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 04:56:41.382705 kubelet[2316]: I0710 04:56:41.382680 2316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 04:56:41.382824 kubelet[2316]: W0710 04:56:41.382809 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 04:56:41.385256 kubelet[2316]: I0710 04:56:41.385236 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 04:56:41.385332 kubelet[2316]: I0710 04:56:41.385278 2316 server.go:1289] "Started kubelet" Jul 10 04:56:41.385417 kubelet[2316]: I0710 04:56:41.385391 2316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 04:56:41.388657 kubelet[2316]: I0710 04:56:41.388608 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 04:56:41.388991 kubelet[2316]: I0710 04:56:41.388951 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 04:56:41.389381 kubelet[2316]: I0710 04:56:41.389360 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 04:56:41.389473 kubelet[2316]: E0710 04:56:41.389457 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:41.390026 kubelet[2316]: I0710 04:56:41.390003 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 04:56:41.390092 kubelet[2316]: I0710 04:56:41.390060 2316 reconciler.go:26] "Reconciler: start to sync state" Jul 10 04:56:41.390193 kubelet[2316]: I0710 04:56:41.390123 2316 factory.go:223] Registration of the systemd container factory successfully Jul 10 04:56:41.390235 kubelet[2316]: I0710 04:56:41.390215 2316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 04:56:41.392014 kubelet[2316]: I0710 04:56:41.390664 2316 server.go:317] "Adding debug handlers to kubelet server" Jul 10 04:56:41.392014 kubelet[2316]: I0710 04:56:41.389466 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 04:56:41.392014 kubelet[2316]: I0710 04:56:41.391251 2316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 04:56:41.392014 kubelet[2316]: E0710 04:56:41.391607 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Jul 10 04:56:41.392014 kubelet[2316]: E0710 04:56:41.391769 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 04:56:41.392759 kubelet[2316]: I0710 04:56:41.392731 
2316 factory.go:223] Registration of the containerd container factory successfully Jul 10 04:56:41.394991 kubelet[2316]: E0710 04:56:41.394068 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850caf34e37957c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 04:56:41.38525222 +0000 UTC m=+0.839490429,LastTimestamp:2025-07-10 04:56:41.38525222 +0000 UTC m=+0.839490429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 04:56:41.395093 kubelet[2316]: E0710 04:56:41.395058 2316 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 04:56:41.405186 kubelet[2316]: I0710 04:56:41.405149 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 04:56:41.405186 kubelet[2316]: I0710 04:56:41.405167 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 04:56:41.405186 kubelet[2316]: I0710 04:56:41.405188 2316 state_mem.go:36] "Initialized new in-memory state store" Jul 10 04:56:41.407881 kubelet[2316]: I0710 04:56:41.407827 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 04:56:41.408893 kubelet[2316]: I0710 04:56:41.408860 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 04:56:41.408939 kubelet[2316]: I0710 04:56:41.408897 2316 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 04:56:41.408939 kubelet[2316]: I0710 04:56:41.408917 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 04:56:41.408939 kubelet[2316]: I0710 04:56:41.408927 2316 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 04:56:41.409292 kubelet[2316]: E0710 04:56:41.409248 2316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 04:56:41.409518 kubelet[2316]: E0710 04:56:41.409482 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 04:56:41.483148 kubelet[2316]: I0710 04:56:41.483109 2316 policy_none.go:49] "None policy: Start" Jul 10 04:56:41.483148 kubelet[2316]: I0710 04:56:41.483140 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 04:56:41.483148 kubelet[2316]: I0710 04:56:41.483153 2316 state_mem.go:35] "Initializing new in-memory state store" Jul 10 04:56:41.488554 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
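Note how the two halves of the cgroup setup line up in this boot: the kubelet reports cgroupDriver received from the CRI runtime as systemd and places pods under the kubepods.slice it just created, while the containerd CRI config earlier in the log has SystemdCgroup true for the runc runtime. When pods end up in unexpected cgroups, a quick consistency check along these lines (assuming the kubelet config file is in place by then) is useful:

    grep -i cgroupDriver /var/lib/kubelet/config.yaml        # expect: cgroupDriver: systemd
    containerd config dump | grep -i SystemdCgroup           # expect: SystemdCgroup = true
    # if containerd runs with a non-default --config, pass the same path to "config dump" via -c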
Jul 10 04:56:41.490297 kubelet[2316]: E0710 04:56:41.490260 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:41.502939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 04:56:41.506243 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 04:56:41.509945 kubelet[2316]: E0710 04:56:41.509924 2316 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 04:56:41.530958 kubelet[2316]: E0710 04:56:41.530932 2316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 04:56:41.531233 kubelet[2316]: I0710 04:56:41.531216 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 04:56:41.531334 kubelet[2316]: I0710 04:56:41.531303 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 04:56:41.531560 kubelet[2316]: I0710 04:56:41.531547 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 04:56:41.532780 kubelet[2316]: E0710 04:56:41.532741 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 04:56:41.533264 kubelet[2316]: E0710 04:56:41.532786 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 04:56:41.593500 kubelet[2316]: E0710 04:56:41.592409 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Jul 10 04:56:41.633494 kubelet[2316]: I0710 04:56:41.633458 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 04:56:41.633898 kubelet[2316]: E0710 04:56:41.633873 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 10 04:56:41.721682 systemd[1]: Created slice kubepods-burstable-poda70e2cd4958778fc813145bebc1386a3.slice - libcontainer container kubepods-burstable-poda70e2cd4958778fc813145bebc1386a3.slice. Jul 10 04:56:41.742243 kubelet[2316]: E0710 04:56:41.742205 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:41.745256 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
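Every "connect: connection refused" against https://10.0.0.20:6443 in this stretch is expected: the kubelet is trying to register the node and acquire its lease before the static kube-apiserver pod, whose kubepods-burstable slices are being created here, has started serving. Two quick ways to watch for the turn-around, assuming crictl is configured for the containerd socket and the default anonymous health-check policy applies:

    crictl ps --name kube-apiserver          # the static-pod container created from /etc/kubernetes/manifests
    curl -k https://10.0.0.20:6443/healthz   # prints "ok" once the API server is serving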
Jul 10 04:56:41.747648 kubelet[2316]: E0710 04:56:41.747512 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850caf34e37957c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 04:56:41.38525222 +0000 UTC m=+0.839490429,LastTimestamp:2025-07-10 04:56:41.38525222 +0000 UTC m=+0.839490429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 04:56:41.762194 kubelet[2316]: E0710 04:56:41.762160 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:41.764914 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 10 04:56:41.766506 kubelet[2316]: E0710 04:56:41.766473 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:41.792024 kubelet[2316]: I0710 04:56:41.791994 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:41.792086 kubelet[2316]: I0710 04:56:41.792028 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:41.792086 kubelet[2316]: I0710 04:56:41.792061 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:41.792086 kubelet[2316]: I0710 04:56:41.792079 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:41.792163 kubelet[2316]: I0710 04:56:41.792096 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:41.792163 kubelet[2316]: I0710 04:56:41.792122 2316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:41.792163 kubelet[2316]: I0710 04:56:41.792139 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:41.792163 kubelet[2316]: I0710 04:56:41.792153 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:41.792241 kubelet[2316]: I0710 04:56:41.792169 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:41.835317 kubelet[2316]: I0710 04:56:41.835285 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 04:56:41.835713 kubelet[2316]: E0710 04:56:41.835676 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 10 04:56:41.993316 kubelet[2316]: E0710 04:56:41.993269 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Jul 10 04:56:42.043589 kubelet[2316]: E0710 04:56:42.043559 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.044271 containerd[1540]: time="2025-07-10T04:56:42.044240850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a70e2cd4958778fc813145bebc1386a3,Namespace:kube-system,Attempt:0,}" Jul 10 04:56:42.060554 containerd[1540]: time="2025-07-10T04:56:42.060516525Z" level=info msg="connecting to shim 5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4" address="unix:///run/containerd/s/875c9f44334edf90c204bec88efc47619b96803c73abd1a90f8e03634251aa1c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:56:42.063557 kubelet[2316]: E0710 04:56:42.063530 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.064233 containerd[1540]: time="2025-07-10T04:56:42.064147692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 
04:56:42.068254 kubelet[2316]: E0710 04:56:42.068231 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.068606 containerd[1540]: time="2025-07-10T04:56:42.068579563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 04:56:42.091160 systemd[1]: Started cri-containerd-5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4.scope - libcontainer container 5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4. Jul 10 04:56:42.097140 containerd[1540]: time="2025-07-10T04:56:42.097100975Z" level=info msg="connecting to shim 53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76" address="unix:///run/containerd/s/dd1b9853a49e8860c6f937b863b59702f182d007ebc7f10d22fdcb9ce2fa7e4c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:56:42.098873 containerd[1540]: time="2025-07-10T04:56:42.098803478Z" level=info msg="connecting to shim 9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c" address="unix:///run/containerd/s/dc9867513981b9287ea8c5b8966ac010baf1c37a436700a77c08c5f86711cac1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:56:42.116170 systemd[1]: Started cri-containerd-53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76.scope - libcontainer container 53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76. Jul 10 04:56:42.119740 systemd[1]: Started cri-containerd-9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c.scope - libcontainer container 9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c. Jul 10 04:56:42.137233 containerd[1540]: time="2025-07-10T04:56:42.137184241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a70e2cd4958778fc813145bebc1386a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4\"" Jul 10 04:56:42.138699 kubelet[2316]: E0710 04:56:42.138404 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.144193 containerd[1540]: time="2025-07-10T04:56:42.144143854Z" level=info msg="CreateContainer within sandbox \"5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 04:56:42.151567 containerd[1540]: time="2025-07-10T04:56:42.151525090Z" level=info msg="Container 95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:56:42.153186 containerd[1540]: time="2025-07-10T04:56:42.153131782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76\"" Jul 10 04:56:42.153839 kubelet[2316]: E0710 04:56:42.153813 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.157059 containerd[1540]: time="2025-07-10T04:56:42.157024287Z" level=info msg="CreateContainer within sandbox 
\"53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 04:56:42.160349 containerd[1540]: time="2025-07-10T04:56:42.160308510Z" level=info msg="CreateContainer within sandbox \"5e2c8a2eae3ea5746427a7895d9cbd752feb0bceee967452566f2f291f86cea4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df\"" Jul 10 04:56:42.161001 containerd[1540]: time="2025-07-10T04:56:42.160947129Z" level=info msg="StartContainer for \"95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df\"" Jul 10 04:56:42.162313 containerd[1540]: time="2025-07-10T04:56:42.162268149Z" level=info msg="connecting to shim 95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df" address="unix:///run/containerd/s/875c9f44334edf90c204bec88efc47619b96803c73abd1a90f8e03634251aa1c" protocol=ttrpc version=3 Jul 10 04:56:42.162538 containerd[1540]: time="2025-07-10T04:56:42.162495910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c\"" Jul 10 04:56:42.163068 kubelet[2316]: E0710 04:56:42.163049 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.165355 containerd[1540]: time="2025-07-10T04:56:42.165315166Z" level=info msg="Container a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:56:42.168007 containerd[1540]: time="2025-07-10T04:56:42.167859476Z" level=info msg="CreateContainer within sandbox \"9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 04:56:42.176910 containerd[1540]: time="2025-07-10T04:56:42.176874179Z" level=info msg="CreateContainer within sandbox \"53b761d623ea34e080b0b15049a19f1c2cbb8f40b46892b7acbd5f69a5e4fe76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00\"" Jul 10 04:56:42.177418 containerd[1540]: time="2025-07-10T04:56:42.177371803Z" level=info msg="StartContainer for \"a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00\"" Jul 10 04:56:42.178101 containerd[1540]: time="2025-07-10T04:56:42.178072254Z" level=info msg="Container 4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:56:42.179157 containerd[1540]: time="2025-07-10T04:56:42.179129975Z" level=info msg="connecting to shim a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00" address="unix:///run/containerd/s/dd1b9853a49e8860c6f937b863b59702f182d007ebc7f10d22fdcb9ce2fa7e4c" protocol=ttrpc version=3 Jul 10 04:56:42.182154 systemd[1]: Started cri-containerd-95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df.scope - libcontainer container 95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df. 
Jul 10 04:56:42.184194 containerd[1540]: time="2025-07-10T04:56:42.184161805Z" level=info msg="CreateContainer within sandbox \"9e5a5e4c5a207f6c832c22c8f7d3c8aa4a42096e2330cf50fced02cc64e1f12c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea\"" Jul 10 04:56:42.185667 containerd[1540]: time="2025-07-10T04:56:42.185627222Z" level=info msg="StartContainer for \"4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea\"" Jul 10 04:56:42.187041 containerd[1540]: time="2025-07-10T04:56:42.187008515Z" level=info msg="connecting to shim 4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea" address="unix:///run/containerd/s/dc9867513981b9287ea8c5b8966ac010baf1c37a436700a77c08c5f86711cac1" protocol=ttrpc version=3 Jul 10 04:56:42.203204 systemd[1]: Started cri-containerd-a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00.scope - libcontainer container a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00. Jul 10 04:56:42.205921 systemd[1]: Started cri-containerd-4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea.scope - libcontainer container 4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea. Jul 10 04:56:42.233468 containerd[1540]: time="2025-07-10T04:56:42.233411494Z" level=info msg="StartContainer for \"95da2efc7b55f802fac78f7d8a2087094da458414a955db5b3395da139a549df\" returns successfully" Jul 10 04:56:42.237590 kubelet[2316]: I0710 04:56:42.237566 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 04:56:42.238134 kubelet[2316]: E0710 04:56:42.238094 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 10 04:56:42.260150 containerd[1540]: time="2025-07-10T04:56:42.259262890Z" level=info msg="StartContainer for \"4d66ba57fe1593288d808b1614bd66e4a9ab0befe82952bf4e09d7b5516418ea\" returns successfully" Jul 10 04:56:42.260150 containerd[1540]: time="2025-07-10T04:56:42.259463476Z" level=info msg="StartContainer for \"a3f0f8313debaf314b0aa75921170088af87e5957c99ad22f5095e0ddd683f00\" returns successfully" Jul 10 04:56:42.272126 kubelet[2316]: E0710 04:56:42.272087 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 04:56:42.417866 kubelet[2316]: E0710 04:56:42.417811 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:42.418521 kubelet[2316]: E0710 04:56:42.418457 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.421842 kubelet[2316]: E0710 04:56:42.421820 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:42.421954 kubelet[2316]: E0710 04:56:42.421936 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:42.424731 kubelet[2316]: E0710 04:56:42.424706 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:42.424825 kubelet[2316]: E0710 04:56:42.424807 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:43.041712 kubelet[2316]: I0710 04:56:43.041630 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 04:56:43.426805 kubelet[2316]: E0710 04:56:43.426779 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:43.427344 kubelet[2316]: E0710 04:56:43.427321 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:43.427461 kubelet[2316]: E0710 04:56:43.427170 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:43.427618 kubelet[2316]: E0710 04:56:43.427602 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:43.427710 kubelet[2316]: E0710 04:56:43.426785 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 04:56:43.427850 kubelet[2316]: E0710 04:56:43.427835 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:44.679153 kubelet[2316]: E0710 04:56:44.679104 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 04:56:44.775093 kubelet[2316]: I0710 04:56:44.775057 2316 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 04:56:44.775346 kubelet[2316]: E0710 04:56:44.775333 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 04:56:44.788147 kubelet[2316]: E0710 04:56:44.788112 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:44.888455 kubelet[2316]: E0710 04:56:44.888414 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:44.989365 kubelet[2316]: E0710 04:56:44.989026 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:45.089549 kubelet[2316]: E0710 04:56:45.089513 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:45.190087 kubelet[2316]: E0710 04:56:45.190034 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:45.290040 kubelet[2316]: I0710 04:56:45.289754 2316 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:45.295360 kubelet[2316]: E0710 04:56:45.295185 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:45.295360 kubelet[2316]: I0710 04:56:45.295216 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:45.297006 kubelet[2316]: E0710 04:56:45.296864 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:45.297006 kubelet[2316]: I0710 04:56:45.296890 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:45.298653 kubelet[2316]: E0710 04:56:45.298620 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:45.383698 kubelet[2316]: I0710 04:56:45.383313 2316 apiserver.go:52] "Watching apiserver" Jul 10 04:56:45.390949 kubelet[2316]: I0710 04:56:45.390914 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 04:56:45.469810 kubelet[2316]: I0710 04:56:45.469628 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:45.471695 kubelet[2316]: E0710 04:56:45.471652 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:45.471831 kubelet[2316]: E0710 04:56:45.471814 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:46.509394 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Jul 10 04:56:46.509411 systemd[1]: Reloading... Jul 10 04:56:46.576038 zram_generator::config[2650]: No configuration found. Jul 10 04:56:46.636246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 04:56:46.729756 systemd[1]: Reloading finished in 220 ms. Jul 10 04:56:46.752401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:46.770845 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 04:56:46.771088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 04:56:46.771151 systemd[1]: kubelet.service: Consumed 1.208s CPU time, 126.9M memory peak. Jul 10 04:56:46.772664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 04:56:46.902767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 04:56:46.906397 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 04:56:46.939852 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 04:56:46.939852 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 04:56:46.939852 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 04:56:46.940260 kubelet[2686]: I0710 04:56:46.939920 2686 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 04:56:46.946244 kubelet[2686]: I0710 04:56:46.946208 2686 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 04:56:46.946244 kubelet[2686]: I0710 04:56:46.946236 2686 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 04:56:46.946501 kubelet[2686]: I0710 04:56:46.946476 2686 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 04:56:46.948875 kubelet[2686]: I0710 04:56:46.948840 2686 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 04:56:46.951311 kubelet[2686]: I0710 04:56:46.951271 2686 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 04:56:46.955621 kubelet[2686]: I0710 04:56:46.955598 2686 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 04:56:46.958228 kubelet[2686]: I0710 04:56:46.958174 2686 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 04:56:46.958434 kubelet[2686]: I0710 04:56:46.958408 2686 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 04:56:46.958576 kubelet[2686]: I0710 04:56:46.958433 2686 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 04:56:46.958645 kubelet[2686]: I0710 04:56:46.958587 2686 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 04:56:46.958645 kubelet[2686]: I0710 04:56:46.958595 2686 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 04:56:46.958645 kubelet[2686]: I0710 04:56:46.958642 2686 state_mem.go:36] "Initialized new in-memory state store" Jul 10 04:56:46.959200 kubelet[2686]: I0710 04:56:46.958777 2686 kubelet.go:480] "Attempting to sync node with API server" Jul 10 04:56:46.959200 kubelet[2686]: I0710 04:56:46.958793 2686 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 04:56:46.959200 kubelet[2686]: I0710 04:56:46.958821 2686 kubelet.go:386] "Adding apiserver pod source" Jul 10 04:56:46.959200 kubelet[2686]: I0710 04:56:46.958835 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 04:56:46.959730 kubelet[2686]: I0710 04:56:46.959701 2686 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 04:56:46.961955 kubelet[2686]: I0710 04:56:46.961914 2686 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 04:56:46.965164 kubelet[2686]: I0710 04:56:46.965120 2686 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 04:56:46.965241 kubelet[2686]: I0710 04:56:46.965173 2686 server.go:1289] "Started kubelet" Jul 10 04:56:46.965337 kubelet[2686]: I0710 04:56:46.965302 2686 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 04:56:46.966345 kubelet[2686]: I0710 
04:56:46.966323 2686 server.go:317] "Adding debug handlers to kubelet server" Jul 10 04:56:46.969153 kubelet[2686]: I0710 04:56:46.969090 2686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 04:56:46.969381 kubelet[2686]: I0710 04:56:46.969352 2686 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 04:56:46.971954 kubelet[2686]: I0710 04:56:46.971924 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 04:56:46.975641 kubelet[2686]: I0710 04:56:46.975614 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 04:56:46.976150 kubelet[2686]: E0710 04:56:46.976125 2686 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 04:56:46.976189 kubelet[2686]: I0710 04:56:46.976174 2686 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 04:56:46.976375 kubelet[2686]: I0710 04:56:46.976357 2686 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 04:56:46.976478 kubelet[2686]: I0710 04:56:46.976465 2686 reconciler.go:26] "Reconciler: start to sync state" Jul 10 04:56:46.978280 kubelet[2686]: I0710 04:56:46.978253 2686 factory.go:223] Registration of the systemd container factory successfully Jul 10 04:56:46.978470 kubelet[2686]: I0710 04:56:46.978445 2686 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 04:56:46.980922 kubelet[2686]: I0710 04:56:46.980896 2686 factory.go:223] Registration of the containerd container factory successfully Jul 10 04:56:46.981756 kubelet[2686]: E0710 04:56:46.981375 2686 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 04:56:46.989912 kubelet[2686]: I0710 04:56:46.989865 2686 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 04:56:46.991131 kubelet[2686]: I0710 04:56:46.991111 2686 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 04:56:46.991131 kubelet[2686]: I0710 04:56:46.991138 2686 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 04:56:46.991394 kubelet[2686]: I0710 04:56:46.991158 2686 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 04:56:46.991394 kubelet[2686]: I0710 04:56:46.991166 2686 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 04:56:46.991394 kubelet[2686]: E0710 04:56:46.991208 2686 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 04:56:47.013505 kubelet[2686]: I0710 04:56:47.013479 2686 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 04:56:47.013505 kubelet[2686]: I0710 04:56:47.013496 2686 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 04:56:47.013505 kubelet[2686]: I0710 04:56:47.013517 2686 state_mem.go:36] "Initialized new in-memory state store" Jul 10 04:56:47.013662 kubelet[2686]: I0710 04:56:47.013641 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 04:56:47.013662 kubelet[2686]: I0710 04:56:47.013651 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 04:56:47.013701 kubelet[2686]: I0710 04:56:47.013667 2686 policy_none.go:49] "None policy: Start" Jul 10 04:56:47.013701 kubelet[2686]: I0710 04:56:47.013675 2686 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 04:56:47.013701 kubelet[2686]: I0710 04:56:47.013683 2686 state_mem.go:35] "Initializing new in-memory state store" Jul 10 04:56:47.013782 kubelet[2686]: I0710 04:56:47.013761 2686 state_mem.go:75] "Updated machine memory state" Jul 10 04:56:47.017465 kubelet[2686]: E0710 04:56:47.017355 2686 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 04:56:47.017646 kubelet[2686]: I0710 04:56:47.017624 2686 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 04:56:47.017788 kubelet[2686]: I0710 04:56:47.017756 2686 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 04:56:47.018538 kubelet[2686]: I0710 04:56:47.018521 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 04:56:47.018972 kubelet[2686]: E0710 04:56:47.018920 2686 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 04:56:47.093613 kubelet[2686]: I0710 04:56:47.092894 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:47.093613 kubelet[2686]: I0710 04:56:47.093055 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.094074 kubelet[2686]: I0710 04:56:47.093874 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:47.121675 kubelet[2686]: I0710 04:56:47.121648 2686 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 04:56:47.127705 kubelet[2686]: I0710 04:56:47.127676 2686 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 04:56:47.127810 kubelet[2686]: I0710 04:56:47.127786 2686 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 04:56:47.178171 kubelet[2686]: I0710 04:56:47.178100 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:47.178171 kubelet[2686]: I0710 04:56:47.178138 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:47.178171 kubelet[2686]: I0710 04:56:47.178161 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.178171 kubelet[2686]: I0710 04:56:47.178177 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:47.178385 kubelet[2686]: I0710 04:56:47.178231 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70e2cd4958778fc813145bebc1386a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a70e2cd4958778fc813145bebc1386a3\") " pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:47.178385 kubelet[2686]: I0710 04:56:47.178274 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.178385 kubelet[2686]: I0710 04:56:47.178301 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.178385 kubelet[2686]: I0710 04:56:47.178320 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.178385 kubelet[2686]: I0710 04:56:47.178337 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 04:56:47.400298 kubelet[2686]: E0710 04:56:47.400260 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:47.400408 kubelet[2686]: E0710 04:56:47.400260 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:47.400408 kubelet[2686]: E0710 04:56:47.400346 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:47.960024 kubelet[2686]: I0710 04:56:47.959942 2686 apiserver.go:52] "Watching apiserver" Jul 10 04:56:47.976532 kubelet[2686]: I0710 04:56:47.976488 2686 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 04:56:48.002600 kubelet[2686]: I0710 04:56:48.002560 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:48.003434 kubelet[2686]: I0710 04:56:48.002904 2686 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:48.003539 kubelet[2686]: E0710 04:56:48.003491 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:48.009238 kubelet[2686]: E0710 04:56:48.009195 2686 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 04:56:48.009361 kubelet[2686]: E0710 04:56:48.009344 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:48.011229 kubelet[2686]: E0710 04:56:48.011191 2686 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 04:56:48.011329 kubelet[2686]: E0710 04:56:48.011308 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:48.043733 kubelet[2686]: I0710 04:56:48.043657 2686 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.043641285 podStartE2EDuration="1.043641285s" podCreationTimestamp="2025-07-10 04:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:56:48.035876128 +0000 UTC m=+1.126353057" watchObservedRunningTime="2025-07-10 04:56:48.043641285 +0000 UTC m=+1.134118214" Jul 10 04:56:48.052117 kubelet[2686]: I0710 04:56:48.052068 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.052054036 podStartE2EDuration="1.052054036s" podCreationTimestamp="2025-07-10 04:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:56:48.043823271 +0000 UTC m=+1.134300160" watchObservedRunningTime="2025-07-10 04:56:48.052054036 +0000 UTC m=+1.142530965" Jul 10 04:56:48.061246 kubelet[2686]: I0710 04:56:48.061196 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.061184966 podStartE2EDuration="1.061184966s" podCreationTimestamp="2025-07-10 04:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:56:48.052331616 +0000 UTC m=+1.142808545" watchObservedRunningTime="2025-07-10 04:56:48.061184966 +0000 UTC m=+1.151661895" Jul 10 04:56:49.004012 kubelet[2686]: E0710 04:56:49.003955 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:49.004344 kubelet[2686]: E0710 04:56:49.004019 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:52.960729 kubelet[2686]: E0710 04:56:52.960693 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:53.009943 kubelet[2686]: E0710 04:56:53.009403 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:53.476427 kubelet[2686]: I0710 04:56:53.476397 2686 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 04:56:53.476717 containerd[1540]: time="2025-07-10T04:56:53.476663427Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 04:56:53.477704 kubelet[2686]: I0710 04:56:53.477254 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 04:56:53.958900 systemd[1]: Created slice kubepods-besteffort-poddbac0803_953f_43c8_a103_3f1036b23f6b.slice - libcontainer container kubepods-besteffort-poddbac0803_953f_43c8_a103_3f1036b23f6b.slice. 
Jul 10 04:56:54.012394 kubelet[2686]: E0710 04:56:54.012354 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:54.030472 kubelet[2686]: I0710 04:56:54.030402 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbac0803-953f-43c8-a103-3f1036b23f6b-kube-proxy\") pod \"kube-proxy-xbwm7\" (UID: \"dbac0803-953f-43c8-a103-3f1036b23f6b\") " pod="kube-system/kube-proxy-xbwm7" Jul 10 04:56:54.030472 kubelet[2686]: I0710 04:56:54.030450 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbac0803-953f-43c8-a103-3f1036b23f6b-xtables-lock\") pod \"kube-proxy-xbwm7\" (UID: \"dbac0803-953f-43c8-a103-3f1036b23f6b\") " pod="kube-system/kube-proxy-xbwm7" Jul 10 04:56:54.030472 kubelet[2686]: I0710 04:56:54.030475 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbac0803-953f-43c8-a103-3f1036b23f6b-lib-modules\") pod \"kube-proxy-xbwm7\" (UID: \"dbac0803-953f-43c8-a103-3f1036b23f6b\") " pod="kube-system/kube-proxy-xbwm7" Jul 10 04:56:54.030715 kubelet[2686]: I0710 04:56:54.030492 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8z7t\" (UniqueName: \"kubernetes.io/projected/dbac0803-953f-43c8-a103-3f1036b23f6b-kube-api-access-w8z7t\") pod \"kube-proxy-xbwm7\" (UID: \"dbac0803-953f-43c8-a103-3f1036b23f6b\") " pod="kube-system/kube-proxy-xbwm7" Jul 10 04:56:54.138138 kubelet[2686]: E0710 04:56:54.138095 2686 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 04:56:54.138138 kubelet[2686]: E0710 04:56:54.138131 2686 projected.go:194] Error preparing data for projected volume kube-api-access-w8z7t for pod kube-system/kube-proxy-xbwm7: configmap "kube-root-ca.crt" not found Jul 10 04:56:54.138287 kubelet[2686]: E0710 04:56:54.138194 2686 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbac0803-953f-43c8-a103-3f1036b23f6b-kube-api-access-w8z7t podName:dbac0803-953f-43c8-a103-3f1036b23f6b nodeName:}" failed. No retries permitted until 2025-07-10 04:56:54.638174213 +0000 UTC m=+7.728651142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w8z7t" (UniqueName: "kubernetes.io/projected/dbac0803-953f-43c8-a103-3f1036b23f6b-kube-api-access-w8z7t") pod "kube-proxy-xbwm7" (UID: "dbac0803-953f-43c8-a103-3f1036b23f6b") : configmap "kube-root-ca.crt" not found Jul 10 04:56:54.622880 systemd[1]: Created slice kubepods-besteffort-pod67728df5_7b67_486f_82b0_466060f2829a.slice - libcontainer container kubepods-besteffort-pod67728df5_7b67_486f_82b0_466060f2829a.slice. 
Jul 10 04:56:54.633568 kubelet[2686]: I0710 04:56:54.633536 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67728df5-7b67-486f-82b0-466060f2829a-var-lib-calico\") pod \"tigera-operator-747864d56d-2rdsr\" (UID: \"67728df5-7b67-486f-82b0-466060f2829a\") " pod="tigera-operator/tigera-operator-747864d56d-2rdsr" Jul 10 04:56:54.633746 kubelet[2686]: I0710 04:56:54.633720 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75tgz\" (UniqueName: \"kubernetes.io/projected/67728df5-7b67-486f-82b0-466060f2829a-kube-api-access-75tgz\") pod \"tigera-operator-747864d56d-2rdsr\" (UID: \"67728df5-7b67-486f-82b0-466060f2829a\") " pod="tigera-operator/tigera-operator-747864d56d-2rdsr" Jul 10 04:56:54.876997 kubelet[2686]: E0710 04:56:54.876889 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:54.877519 containerd[1540]: time="2025-07-10T04:56:54.877379282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xbwm7,Uid:dbac0803-953f-43c8-a103-3f1036b23f6b,Namespace:kube-system,Attempt:0,}" Jul 10 04:56:54.897926 containerd[1540]: time="2025-07-10T04:56:54.897866333Z" level=info msg="connecting to shim f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7" address="unix:///run/containerd/s/4a9c9758e4fe0c109563d64b601f8541125a19483b7f640ef2af64bf45330715" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:56:54.922142 systemd[1]: Started cri-containerd-f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7.scope - libcontainer container f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7. 
Jul 10 04:56:54.925540 containerd[1540]: time="2025-07-10T04:56:54.925471006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2rdsr,Uid:67728df5-7b67-486f-82b0-466060f2829a,Namespace:tigera-operator,Attempt:0,}" Jul 10 04:56:54.943072 containerd[1540]: time="2025-07-10T04:56:54.942902190Z" level=info msg="connecting to shim cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b" address="unix:///run/containerd/s/f01c4ac3e35f82d4c72157ed3aa695d25788b917d9aad2cba8f78e4e05f8f3ae" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:56:54.947268 containerd[1540]: time="2025-07-10T04:56:54.947225328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xbwm7,Uid:dbac0803-953f-43c8-a103-3f1036b23f6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7\"" Jul 10 04:56:54.948913 kubelet[2686]: E0710 04:56:54.948827 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:54.954017 containerd[1540]: time="2025-07-10T04:56:54.953954094Z" level=info msg="CreateContainer within sandbox \"f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 04:56:54.964514 containerd[1540]: time="2025-07-10T04:56:54.964462345Z" level=info msg="Container 5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:56:54.969131 systemd[1]: Started cri-containerd-cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b.scope - libcontainer container cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b. Jul 10 04:56:54.971056 containerd[1540]: time="2025-07-10T04:56:54.971020429Z" level=info msg="CreateContainer within sandbox \"f922dfb4893e3c5a2d957f6dba723a912c7d93b7779318dfb0a8346571dfd9c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66\"" Jul 10 04:56:54.972008 containerd[1540]: time="2025-07-10T04:56:54.971955938Z" level=info msg="StartContainer for \"5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66\"" Jul 10 04:56:54.974238 containerd[1540]: time="2025-07-10T04:56:54.974208569Z" level=info msg="connecting to shim 5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66" address="unix:///run/containerd/s/4a9c9758e4fe0c109563d64b601f8541125a19483b7f640ef2af64bf45330715" protocol=ttrpc version=3 Jul 10 04:56:54.993182 systemd[1]: Started cri-containerd-5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66.scope - libcontainer container 5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66. 
Jul 10 04:56:55.005657 containerd[1540]: time="2025-07-10T04:56:55.005615781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2rdsr,Uid:67728df5-7b67-486f-82b0-466060f2829a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b\"" Jul 10 04:56:55.007400 containerd[1540]: time="2025-07-10T04:56:55.007372064Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 04:56:55.036505 containerd[1540]: time="2025-07-10T04:56:55.036285695Z" level=info msg="StartContainer for \"5d457165525b6c6d4f0a0114a22c686fce683dbfb69765eaf6626e82cb9c4b66\" returns successfully" Jul 10 04:56:56.023011 kubelet[2686]: E0710 04:56:56.022812 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:56.051001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303143062.mount: Deactivated successfully. Jul 10 04:56:56.201043 kubelet[2686]: E0710 04:56:56.201009 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:56.217486 kubelet[2686]: I0710 04:56:56.217369 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xbwm7" podStartSLOduration=3.217353581 podStartE2EDuration="3.217353581s" podCreationTimestamp="2025-07-10 04:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:56:56.037718398 +0000 UTC m=+9.128195367" watchObservedRunningTime="2025-07-10 04:56:56.217353581 +0000 UTC m=+9.307830510" Jul 10 04:56:56.464755 containerd[1540]: time="2025-07-10T04:56:56.464714966Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:56.465187 containerd[1540]: time="2025-07-10T04:56:56.465152220Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 10 04:56:56.465847 containerd[1540]: time="2025-07-10T04:56:56.465808562Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:56.467903 containerd[1540]: time="2025-07-10T04:56:56.467864044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:56:56.468552 containerd[1540]: time="2025-07-10T04:56:56.468528306Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.461123035s" Jul 10 04:56:56.468711 containerd[1540]: time="2025-07-10T04:56:56.468555912Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 04:56:56.473754 containerd[1540]: time="2025-07-10T04:56:56.473712581Z" level=info msg="CreateContainer 
within sandbox \"cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 04:56:56.490663 containerd[1540]: time="2025-07-10T04:56:56.490572166Z" level=info msg="Container e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:56:56.502061 containerd[1540]: time="2025-07-10T04:56:56.502022708Z" level=info msg="CreateContainer within sandbox \"cfe8242c3a396e785f8c73a98abc474f454136958cdef12748130af4229e664b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc\"" Jul 10 04:56:56.502712 containerd[1540]: time="2025-07-10T04:56:56.502670367Z" level=info msg="StartContainer for \"e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc\"" Jul 10 04:56:56.504118 containerd[1540]: time="2025-07-10T04:56:56.504080390Z" level=info msg="connecting to shim e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc" address="unix:///run/containerd/s/f01c4ac3e35f82d4c72157ed3aa695d25788b917d9aad2cba8f78e4e05f8f3ae" protocol=ttrpc version=3 Jul 10 04:56:56.563185 systemd[1]: Started cri-containerd-e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc.scope - libcontainer container e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc. Jul 10 04:56:56.589966 containerd[1540]: time="2025-07-10T04:56:56.589922927Z" level=info msg="StartContainer for \"e927070308d29a46487b37c0a4f70f0024cf8d82b31c574ec59eb09939fe42bc\" returns successfully" Jul 10 04:56:57.025411 kubelet[2686]: E0710 04:56:57.025351 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:57.025749 kubelet[2686]: E0710 04:56:57.025522 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:57.035206 kubelet[2686]: I0710 04:56:57.035155 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2rdsr" podStartSLOduration=1.570879797 podStartE2EDuration="3.035140475s" podCreationTimestamp="2025-07-10 04:56:54 +0000 UTC" firstStartedPulling="2025-07-10 04:56:55.006827219 +0000 UTC m=+8.097304148" lastFinishedPulling="2025-07-10 04:56:56.471087897 +0000 UTC m=+9.561564826" observedRunningTime="2025-07-10 04:56:57.033905426 +0000 UTC m=+10.124382355" watchObservedRunningTime="2025-07-10 04:56:57.035140475 +0000 UTC m=+10.125617404" Jul 10 04:56:57.208573 kubelet[2686]: E0710 04:56:57.208515 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:56:58.027037 kubelet[2686]: E0710 04:56:58.026511 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:01.544273 update_engine[1512]: I20250710 04:57:01.544212 1512 update_attempter.cc:509] Updating boot flags... 
Jul 10 04:57:01.877930 sudo[1745]: pam_unix(sudo:session): session closed for user root Jul 10 04:57:01.886011 sshd[1744]: Connection closed by 10.0.0.1 port 57810 Jul 10 04:57:01.886687 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:01.890816 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:57810.service: Deactivated successfully. Jul 10 04:57:01.893465 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 04:57:01.895106 systemd[1]: session-7.scope: Consumed 8.040s CPU time, 222.1M memory peak. Jul 10 04:57:01.896182 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jul 10 04:57:01.900035 systemd-logind[1509]: Removed session 7. Jul 10 04:57:06.912960 systemd[1]: Created slice kubepods-besteffort-pod2a578042_3b00_4da9_bc16_501728feada7.slice - libcontainer container kubepods-besteffort-pod2a578042_3b00_4da9_bc16_501728feada7.slice. Jul 10 04:57:07.017830 kubelet[2686]: I0710 04:57:07.017786 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a578042-3b00-4da9-bc16-501728feada7-tigera-ca-bundle\") pod \"calico-typha-747b968cc4-llsrf\" (UID: \"2a578042-3b00-4da9-bc16-501728feada7\") " pod="calico-system/calico-typha-747b968cc4-llsrf" Jul 10 04:57:07.017830 kubelet[2686]: I0710 04:57:07.017830 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7nv\" (UniqueName: \"kubernetes.io/projected/2a578042-3b00-4da9-bc16-501728feada7-kube-api-access-wd7nv\") pod \"calico-typha-747b968cc4-llsrf\" (UID: \"2a578042-3b00-4da9-bc16-501728feada7\") " pod="calico-system/calico-typha-747b968cc4-llsrf" Jul 10 04:57:07.018214 kubelet[2686]: I0710 04:57:07.017882 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a578042-3b00-4da9-bc16-501728feada7-typha-certs\") pod \"calico-typha-747b968cc4-llsrf\" (UID: \"2a578042-3b00-4da9-bc16-501728feada7\") " pod="calico-system/calico-typha-747b968cc4-llsrf" Jul 10 04:57:07.217909 kubelet[2686]: E0710 04:57:07.217803 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:07.218953 containerd[1540]: time="2025-07-10T04:57:07.218906239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747b968cc4-llsrf,Uid:2a578042-3b00-4da9-bc16-501728feada7,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:07.246515 systemd[1]: Created slice kubepods-besteffort-podb7f6ffbb_6c58_4db6_b61a_b45af87aa1d9.slice - libcontainer container kubepods-besteffort-podb7f6ffbb_6c58_4db6_b61a_b45af87aa1d9.slice. Jul 10 04:57:07.272487 containerd[1540]: time="2025-07-10T04:57:07.272444339Z" level=info msg="connecting to shim 46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43" address="unix:///run/containerd/s/ee1304a5c253f3d1eb33463073b6378f599b69512f6a8feb90dbc660f8e43f8a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:07.302452 systemd[1]: Started cri-containerd-46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43.scope - libcontainer container 46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43. 
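[Editor's note] The VerifyControllerAttachedVolume entries above show the three volume sources kubelet attaches for calico-typha-747b968cc4-llsrf: a ConfigMap (tigera-ca-bundle), a Secret (typha-certs), and a projected service-account token (kube-api-access-wd7nv). A minimal sketch of how those sources are expressed with the k8s.io/api core/v1 types follows; this is approximated from the log (the real spec is generated by the tigera operator, and the kube-api-access volume is injected automatically by kubelet), so field values here are assumptions.

```go
// Sketch of the three volume sources being attached for calico-typha,
// expressed with k8s.io/api types. Approximated from the log entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "tigera-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
				},
			},
		},
		{
			Name: "typha-certs",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
			},
		},
		{
			// kube-api-access-* is a projected service-account token volume;
			// shown here only for shape, kubelet normally adds it itself.
			Name: "kube-api-access-wd7nv",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```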
Jul 10 04:57:07.320792 kubelet[2686]: I0710 04:57:07.320724 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-cni-net-dir\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321791 kubelet[2686]: I0710 04:57:07.320921 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-flexvol-driver-host\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321791 kubelet[2686]: I0710 04:57:07.320950 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-var-run-calico\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321791 kubelet[2686]: I0710 04:57:07.320969 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-cni-bin-dir\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321791 kubelet[2686]: I0710 04:57:07.321003 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-xtables-lock\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321791 kubelet[2686]: I0710 04:57:07.321018 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-lib-modules\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321997 kubelet[2686]: I0710 04:57:07.321032 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwng\" (UniqueName: \"kubernetes.io/projected/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-kube-api-access-vmwng\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321997 kubelet[2686]: I0710 04:57:07.321052 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-node-certs\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321997 kubelet[2686]: I0710 04:57:07.321069 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-cni-log-dir\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321997 kubelet[2686]: I0710 04:57:07.321416 2686 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-policysync\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.321997 kubelet[2686]: I0710 04:57:07.321502 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-tigera-ca-bundle\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.322108 kubelet[2686]: I0710 04:57:07.321524 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9-var-lib-calico\") pod \"calico-node-9thrm\" (UID: \"b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9\") " pod="calico-system/calico-node-9thrm" Jul 10 04:57:07.347710 containerd[1540]: time="2025-07-10T04:57:07.347644410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747b968cc4-llsrf,Uid:2a578042-3b00-4da9-bc16-501728feada7,Namespace:calico-system,Attempt:0,} returns sandbox id \"46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43\"" Jul 10 04:57:07.348966 kubelet[2686]: E0710 04:57:07.348940 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:07.349823 containerd[1540]: time="2025-07-10T04:57:07.349789277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 04:57:07.424427 kubelet[2686]: E0710 04:57:07.424357 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.424427 kubelet[2686]: W0710 04:57:07.424396 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.424427 kubelet[2686]: E0710 04:57:07.424428 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.428994 kubelet[2686]: E0710 04:57:07.428942 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.428994 kubelet[2686]: W0710 04:57:07.428958 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.428994 kubelet[2686]: E0710 04:57:07.428971 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.432952 kubelet[2686]: E0710 04:57:07.432923 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.432952 kubelet[2686]: W0710 04:57:07.432941 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.432952 kubelet[2686]: E0710 04:57:07.432954 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.534532 kubelet[2686]: E0710 04:57:07.534405 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2vr6" podUID="13104a3f-c535-4efe-b2aa-5579666df893" Jul 10 04:57:07.552345 containerd[1540]: time="2025-07-10T04:57:07.551663141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9thrm,Uid:b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:07.586874 containerd[1540]: time="2025-07-10T04:57:07.586485582Z" level=info msg="connecting to shim 4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7" address="unix:///run/containerd/s/32ea88571b9bc98e38ffb626377660c2f2b48e27b6f235722b221a87ef5cb3a1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:07.597121 kubelet[2686]: E0710 04:57:07.597053 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.597121 kubelet[2686]: W0710 04:57:07.597111 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.597260 kubelet[2686]: E0710 04:57:07.597134 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.597444 kubelet[2686]: E0710 04:57:07.597427 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.601756 kubelet[2686]: W0710 04:57:07.597440 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.601841 kubelet[2686]: E0710 04:57:07.601762 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.602795 kubelet[2686]: E0710 04:57:07.602772 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.602829 kubelet[2686]: W0710 04:57:07.602795 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.602829 kubelet[2686]: E0710 04:57:07.602810 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.603095 kubelet[2686]: E0710 04:57:07.603076 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.603095 kubelet[2686]: W0710 04:57:07.603089 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.603168 kubelet[2686]: E0710 04:57:07.603099 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.603270 kubelet[2686]: E0710 04:57:07.603254 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.603270 kubelet[2686]: W0710 04:57:07.603265 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.603337 kubelet[2686]: E0710 04:57:07.603274 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.603517 kubelet[2686]: E0710 04:57:07.603501 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.603517 kubelet[2686]: W0710 04:57:07.603514 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.603587 kubelet[2686]: E0710 04:57:07.603525 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.603654 kubelet[2686]: E0710 04:57:07.603640 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.603654 kubelet[2686]: W0710 04:57:07.603649 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.603717 kubelet[2686]: E0710 04:57:07.603657 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.603968 kubelet[2686]: E0710 04:57:07.603952 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.604003 kubelet[2686]: W0710 04:57:07.603966 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.604031 kubelet[2686]: E0710 04:57:07.603986 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.604217 kubelet[2686]: E0710 04:57:07.604202 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.604217 kubelet[2686]: W0710 04:57:07.604216 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.604305 kubelet[2686]: E0710 04:57:07.604229 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.605083 kubelet[2686]: E0710 04:57:07.604922 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.605083 kubelet[2686]: W0710 04:57:07.604939 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.605083 kubelet[2686]: E0710 04:57:07.604951 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.605780 kubelet[2686]: E0710 04:57:07.605117 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.605780 kubelet[2686]: W0710 04:57:07.605125 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.605780 kubelet[2686]: E0710 04:57:07.605133 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.605780 kubelet[2686]: E0710 04:57:07.605289 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.605780 kubelet[2686]: W0710 04:57:07.605298 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.605780 kubelet[2686]: E0710 04:57:07.605306 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.606416 kubelet[2686]: E0710 04:57:07.606393 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.606416 kubelet[2686]: W0710 04:57:07.606410 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.606501 kubelet[2686]: E0710 04:57:07.606422 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.606799 kubelet[2686]: E0710 04:57:07.606742 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.606799 kubelet[2686]: W0710 04:57:07.606758 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.606799 kubelet[2686]: E0710 04:57:07.606772 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.607069 kubelet[2686]: E0710 04:57:07.606949 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.607069 kubelet[2686]: W0710 04:57:07.606960 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.607069 kubelet[2686]: E0710 04:57:07.606968 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.607265 kubelet[2686]: E0710 04:57:07.607236 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.607265 kubelet[2686]: W0710 04:57:07.607252 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.607265 kubelet[2686]: E0710 04:57:07.607263 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.607708 kubelet[2686]: E0710 04:57:07.607688 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.607708 kubelet[2686]: W0710 04:57:07.607704 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.607785 kubelet[2686]: E0710 04:57:07.607717 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.607968 kubelet[2686]: E0710 04:57:07.607950 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.607968 kubelet[2686]: W0710 04:57:07.607963 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.608148 kubelet[2686]: E0710 04:57:07.607984 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.608313 kubelet[2686]: E0710 04:57:07.608270 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.608313 kubelet[2686]: W0710 04:57:07.608285 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.608313 kubelet[2686]: E0710 04:57:07.608296 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.608525 kubelet[2686]: E0710 04:57:07.608504 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.608525 kubelet[2686]: W0710 04:57:07.608516 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.608525 kubelet[2686]: E0710 04:57:07.608525 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.625180 kubelet[2686]: E0710 04:57:07.625143 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.625180 kubelet[2686]: W0710 04:57:07.625164 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.625336 kubelet[2686]: E0710 04:57:07.625192 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.625336 kubelet[2686]: I0710 04:57:07.625223 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw7c\" (UniqueName: \"kubernetes.io/projected/13104a3f-c535-4efe-b2aa-5579666df893-kube-api-access-vjw7c\") pod \"csi-node-driver-s2vr6\" (UID: \"13104a3f-c535-4efe-b2aa-5579666df893\") " pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:07.625439 kubelet[2686]: E0710 04:57:07.625405 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.625439 kubelet[2686]: W0710 04:57:07.625426 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.625439 kubelet[2686]: E0710 04:57:07.625436 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.625515 kubelet[2686]: I0710 04:57:07.625456 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13104a3f-c535-4efe-b2aa-5579666df893-registration-dir\") pod \"csi-node-driver-s2vr6\" (UID: \"13104a3f-c535-4efe-b2aa-5579666df893\") " pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:07.625724 kubelet[2686]: E0710 04:57:07.625704 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.625759 kubelet[2686]: W0710 04:57:07.625723 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.625759 kubelet[2686]: E0710 04:57:07.625737 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.625898 kubelet[2686]: E0710 04:57:07.625885 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.625898 kubelet[2686]: W0710 04:57:07.625896 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.626027 kubelet[2686]: E0710 04:57:07.625905 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626105 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.626668 kubelet[2686]: W0710 04:57:07.626117 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626126 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626271 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.626668 kubelet[2686]: W0710 04:57:07.626279 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626287 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626440 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.626668 kubelet[2686]: W0710 04:57:07.626447 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.626668 kubelet[2686]: E0710 04:57:07.626458 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.626934 kubelet[2686]: I0710 04:57:07.626481 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13104a3f-c535-4efe-b2aa-5579666df893-socket-dir\") pod \"csi-node-driver-s2vr6\" (UID: \"13104a3f-c535-4efe-b2aa-5579666df893\") " pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:07.626934 kubelet[2686]: E0710 04:57:07.626706 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.626934 kubelet[2686]: W0710 04:57:07.626721 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.626934 kubelet[2686]: E0710 04:57:07.626734 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.627556 kubelet[2686]: E0710 04:57:07.627518 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.627556 kubelet[2686]: W0710 04:57:07.627539 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.627556 kubelet[2686]: E0710 04:57:07.627554 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.627805 kubelet[2686]: E0710 04:57:07.627788 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.627805 kubelet[2686]: W0710 04:57:07.627804 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.627866 kubelet[2686]: E0710 04:57:07.627821 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.627866 kubelet[2686]: I0710 04:57:07.627845 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13104a3f-c535-4efe-b2aa-5579666df893-kubelet-dir\") pod \"csi-node-driver-s2vr6\" (UID: \"13104a3f-c535-4efe-b2aa-5579666df893\") " pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:07.628030 kubelet[2686]: E0710 04:57:07.628012 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.628030 kubelet[2686]: W0710 04:57:07.628029 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.628085 kubelet[2686]: E0710 04:57:07.628039 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.628085 kubelet[2686]: I0710 04:57:07.628073 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13104a3f-c535-4efe-b2aa-5579666df893-varrun\") pod \"csi-node-driver-s2vr6\" (UID: \"13104a3f-c535-4efe-b2aa-5579666df893\") " pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:07.628157 systemd[1]: Started cri-containerd-4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7.scope - libcontainer container 4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7. Jul 10 04:57:07.628470 kubelet[2686]: E0710 04:57:07.628296 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.628470 kubelet[2686]: W0710 04:57:07.628314 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.628470 kubelet[2686]: E0710 04:57:07.628323 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.628932 kubelet[2686]: E0710 04:57:07.628857 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.628932 kubelet[2686]: W0710 04:57:07.628880 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.628932 kubelet[2686]: E0710 04:57:07.628893 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.629305 kubelet[2686]: E0710 04:57:07.629075 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.629305 kubelet[2686]: W0710 04:57:07.629099 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.629305 kubelet[2686]: E0710 04:57:07.629110 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.629701 kubelet[2686]: E0710 04:57:07.629359 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.629701 kubelet[2686]: W0710 04:57:07.629382 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.629701 kubelet[2686]: E0710 04:57:07.629393 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.715834 containerd[1540]: time="2025-07-10T04:57:07.715791614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9thrm,Uid:b7f6ffbb-6c58-4db6-b61a-b45af87aa1d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\"" Jul 10 04:57:07.728768 kubelet[2686]: E0710 04:57:07.728732 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.728768 kubelet[2686]: W0710 04:57:07.728756 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.728893 kubelet[2686]: E0710 04:57:07.728774 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.729027 kubelet[2686]: E0710 04:57:07.729001 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.729027 kubelet[2686]: W0710 04:57:07.729014 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.729027 kubelet[2686]: E0710 04:57:07.729024 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.729280 kubelet[2686]: E0710 04:57:07.729259 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.729280 kubelet[2686]: W0710 04:57:07.729278 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.729339 kubelet[2686]: E0710 04:57:07.729292 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.729586 kubelet[2686]: E0710 04:57:07.729551 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.729586 kubelet[2686]: W0710 04:57:07.729565 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.729586 kubelet[2686]: E0710 04:57:07.729584 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.729771 kubelet[2686]: E0710 04:57:07.729754 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.729771 kubelet[2686]: W0710 04:57:07.729765 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.729821 kubelet[2686]: E0710 04:57:07.729775 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.730011 kubelet[2686]: E0710 04:57:07.729993 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.730011 kubelet[2686]: W0710 04:57:07.730010 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.730074 kubelet[2686]: E0710 04:57:07.730023 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.730216 kubelet[2686]: E0710 04:57:07.730200 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.730216 kubelet[2686]: W0710 04:57:07.730214 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.730268 kubelet[2686]: E0710 04:57:07.730225 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.730409 kubelet[2686]: E0710 04:57:07.730396 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.730409 kubelet[2686]: W0710 04:57:07.730407 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.730465 kubelet[2686]: E0710 04:57:07.730416 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.730608 kubelet[2686]: E0710 04:57:07.730579 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.730608 kubelet[2686]: W0710 04:57:07.730591 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.730608 kubelet[2686]: E0710 04:57:07.730599 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.730797 kubelet[2686]: E0710 04:57:07.730782 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.730797 kubelet[2686]: W0710 04:57:07.730793 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.730852 kubelet[2686]: E0710 04:57:07.730802 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.731001 kubelet[2686]: E0710 04:57:07.730988 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.731001 kubelet[2686]: W0710 04:57:07.730999 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.731060 kubelet[2686]: E0710 04:57:07.731007 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.731184 kubelet[2686]: E0710 04:57:07.731171 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.731184 kubelet[2686]: W0710 04:57:07.731182 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.731240 kubelet[2686]: E0710 04:57:07.731190 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.731376 kubelet[2686]: E0710 04:57:07.731364 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.731376 kubelet[2686]: W0710 04:57:07.731376 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.731429 kubelet[2686]: E0710 04:57:07.731385 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.731544 kubelet[2686]: E0710 04:57:07.731532 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.731544 kubelet[2686]: W0710 04:57:07.731543 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.731637 kubelet[2686]: E0710 04:57:07.731551 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.731749 kubelet[2686]: E0710 04:57:07.731734 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.731749 kubelet[2686]: W0710 04:57:07.731747 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.731796 kubelet[2686]: E0710 04:57:07.731756 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.732015 kubelet[2686]: E0710 04:57:07.731997 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.732015 kubelet[2686]: W0710 04:57:07.732011 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.732095 kubelet[2686]: E0710 04:57:07.732022 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.732192 kubelet[2686]: E0710 04:57:07.732177 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.732192 kubelet[2686]: W0710 04:57:07.732189 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.732192 kubelet[2686]: E0710 04:57:07.732197 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.732415 kubelet[2686]: E0710 04:57:07.732401 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.732415 kubelet[2686]: W0710 04:57:07.732413 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.732472 kubelet[2686]: E0710 04:57:07.732423 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.732668 kubelet[2686]: E0710 04:57:07.732654 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.732668 kubelet[2686]: W0710 04:57:07.732666 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.732725 kubelet[2686]: E0710 04:57:07.732676 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.732847 kubelet[2686]: E0710 04:57:07.732833 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.732847 kubelet[2686]: W0710 04:57:07.732846 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.732897 kubelet[2686]: E0710 04:57:07.732854 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.733918 kubelet[2686]: E0710 04:57:07.733891 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.733918 kubelet[2686]: W0710 04:57:07.733907 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.733996 kubelet[2686]: E0710 04:57:07.733943 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:07.734288 kubelet[2686]: E0710 04:57:07.734263 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.734288 kubelet[2686]: W0710 04:57:07.734278 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.734349 kubelet[2686]: E0710 04:57:07.734291 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.734698 kubelet[2686]: E0710 04:57:07.734644 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.734735 kubelet[2686]: W0710 04:57:07.734698 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.734735 kubelet[2686]: E0710 04:57:07.734712 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.734919 kubelet[2686]: E0710 04:57:07.734904 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.734919 kubelet[2686]: W0710 04:57:07.734917 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.734971 kubelet[2686]: E0710 04:57:07.734926 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.735122 kubelet[2686]: E0710 04:57:07.735109 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.735122 kubelet[2686]: W0710 04:57:07.735120 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.735173 kubelet[2686]: E0710 04:57:07.735128 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:07.742165 kubelet[2686]: E0710 04:57:07.742135 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:07.742165 kubelet[2686]: W0710 04:57:07.742154 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:07.742165 kubelet[2686]: E0710 04:57:07.742167 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:08.265604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276951976.mount: Deactivated successfully. Jul 10 04:57:08.920431 containerd[1540]: time="2025-07-10T04:57:08.920381338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:08.921390 containerd[1540]: time="2025-07-10T04:57:08.921352274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 10 04:57:08.924066 containerd[1540]: time="2025-07-10T04:57:08.924000697Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:08.926746 containerd[1540]: time="2025-07-10T04:57:08.926706445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:08.927255 containerd[1540]: time="2025-07-10T04:57:08.927230177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.577390895s" Jul 10 04:57:08.927298 containerd[1540]: time="2025-07-10T04:57:08.927261060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 04:57:08.928495 containerd[1540]: time="2025-07-10T04:57:08.928172310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 04:57:08.938805 containerd[1540]: time="2025-07-10T04:57:08.938740198Z" level=info msg="CreateContainer within sandbox \"46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 04:57:08.944265 containerd[1540]: time="2025-07-10T04:57:08.944233222Z" level=info msg="Container dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:08.950833 containerd[1540]: time="2025-07-10T04:57:08.950778231Z" level=info msg="CreateContainer within sandbox \"46bf69c1c82c1b2ac9dbe9a0824cfb1eb0e89234bb3ebf4d9b2e60f0f5de3a43\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525\"" Jul 10 04:57:08.951349 containerd[1540]: time="2025-07-10T04:57:08.951326845Z" level=info msg="StartContainer for \"dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525\"" Jul 10 04:57:08.957785 containerd[1540]: time="2025-07-10T04:57:08.957719999Z" level=info msg="connecting to shim dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525" address="unix:///run/containerd/s/ee1304a5c253f3d1eb33463073b6378f599b69512f6a8feb90dbc660f8e43f8a" protocol=ttrpc version=3 Jul 10 04:57:08.979113 systemd[1]: Started cri-containerd-dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525.scope - libcontainer container dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525. 
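[Editor's note] The repeated FlexVolume failures throughout this section come from kubelet's plugin probing: it execs each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the "init" argument and parses stdout as JSON. Because the nodeagent~uds/uds binary is not present on this host, the call produces empty output, which unmarshals as "unexpected end of JSON input"; the probe is retried and re-logged, which accounts for the volume of these entries. As a minimal sketch (not the Calico driver itself), a FlexVolume driver's "init" handler is expected to print a JSON status object roughly like the one below:

```go
// Minimal sketch of a FlexVolume driver responding to "init".
// kubelet execs the driver binary and parses stdout as JSON; empty stdout
// (e.g. a missing executable) is what yields "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type initResponse struct {
	Status       string `json:"status"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		resp := initResponse{Status: "Success"}
		resp.Capabilities.Attach = false
		out, _ := json.Marshal(resp)
		fmt.Println(string(out))
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) are omitted in this sketch.
	fmt.Println(`{"status":"Not supported"}`)
}
```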
Jul 10 04:57:08.992620 kubelet[2686]: E0710 04:57:08.992460 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2vr6" podUID="13104a3f-c535-4efe-b2aa-5579666df893" Jul 10 04:57:09.020665 containerd[1540]: time="2025-07-10T04:57:09.020616072Z" level=info msg="StartContainer for \"dc4883aebbb0352371f73e1f82384f77c06c91f1a9a10c60453510145e54d525\" returns successfully" Jul 10 04:57:09.064440 kubelet[2686]: E0710 04:57:09.064406 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:09.072908 kubelet[2686]: I0710 04:57:09.072839 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-747b968cc4-llsrf" podStartSLOduration=1.4944085249999999 podStartE2EDuration="3.072813482s" podCreationTimestamp="2025-07-10 04:57:06 +0000 UTC" firstStartedPulling="2025-07-10 04:57:07.349507247 +0000 UTC m=+20.439984136" lastFinishedPulling="2025-07-10 04:57:08.927912164 +0000 UTC m=+22.018389093" observedRunningTime="2025-07-10 04:57:09.072726674 +0000 UTC m=+22.163203643" watchObservedRunningTime="2025-07-10 04:57:09.072813482 +0000 UTC m=+22.163290411" Jul 10 04:57:09.116591 kubelet[2686]: E0710 04:57:09.116559 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.116591 kubelet[2686]: W0710 04:57:09.116582 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.116742 kubelet[2686]: E0710 04:57:09.116611 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.116998 kubelet[2686]: E0710 04:57:09.116835 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.116998 kubelet[2686]: W0710 04:57:09.116849 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.116998 kubelet[2686]: E0710 04:57:09.116860 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.117393 kubelet[2686]: E0710 04:57:09.117036 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.117393 kubelet[2686]: W0710 04:57:09.117046 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.117393 kubelet[2686]: E0710 04:57:09.117055 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.117393 kubelet[2686]: E0710 04:57:09.117229 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.117393 kubelet[2686]: W0710 04:57:09.117239 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.117393 kubelet[2686]: E0710 04:57:09.117247 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.117594 kubelet[2686]: E0710 04:57:09.117444 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.117594 kubelet[2686]: W0710 04:57:09.117452 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.117594 kubelet[2686]: E0710 04:57:09.117465 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.117728 kubelet[2686]: E0710 04:57:09.117700 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.117728 kubelet[2686]: W0710 04:57:09.117712 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.117728 kubelet[2686]: E0710 04:57:09.117721 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.117919 kubelet[2686]: E0710 04:57:09.117866 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.117919 kubelet[2686]: W0710 04:57:09.117874 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.117919 kubelet[2686]: E0710 04:57:09.117882 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.118251 kubelet[2686]: E0710 04:57:09.118234 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.118251 kubelet[2686]: W0710 04:57:09.118249 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.118312 kubelet[2686]: E0710 04:57:09.118263 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.118463 kubelet[2686]: E0710 04:57:09.118449 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.118463 kubelet[2686]: W0710 04:57:09.118462 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.118531 kubelet[2686]: E0710 04:57:09.118472 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.118644 kubelet[2686]: E0710 04:57:09.118632 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.118670 kubelet[2686]: W0710 04:57:09.118645 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.118670 kubelet[2686]: E0710 04:57:09.118655 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.118811 kubelet[2686]: E0710 04:57:09.118794 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.118811 kubelet[2686]: W0710 04:57:09.118806 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.118925 kubelet[2686]: E0710 04:57:09.118815 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.119046 kubelet[2686]: E0710 04:57:09.118963 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.119046 kubelet[2686]: W0710 04:57:09.118997 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.119046 kubelet[2686]: E0710 04:57:09.119012 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.119189 kubelet[2686]: E0710 04:57:09.119169 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.119189 kubelet[2686]: W0710 04:57:09.119179 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.119189 kubelet[2686]: E0710 04:57:09.119188 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.119363 kubelet[2686]: E0710 04:57:09.119348 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.119389 kubelet[2686]: W0710 04:57:09.119363 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.119389 kubelet[2686]: E0710 04:57:09.119373 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.119637 kubelet[2686]: E0710 04:57:09.119620 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.119688 kubelet[2686]: W0710 04:57:09.119636 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.119688 kubelet[2686]: E0710 04:57:09.119649 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.143394 kubelet[2686]: E0710 04:57:09.143353 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.143394 kubelet[2686]: W0710 04:57:09.143377 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.143394 kubelet[2686]: E0710 04:57:09.143412 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.144418 kubelet[2686]: E0710 04:57:09.144393 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.144418 kubelet[2686]: W0710 04:57:09.144413 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.144615 kubelet[2686]: E0710 04:57:09.144433 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.145136 kubelet[2686]: E0710 04:57:09.145115 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.145136 kubelet[2686]: W0710 04:57:09.145132 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.145711 kubelet[2686]: E0710 04:57:09.145147 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.145711 kubelet[2686]: E0710 04:57:09.145566 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.145711 kubelet[2686]: W0710 04:57:09.145577 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.145711 kubelet[2686]: E0710 04:57:09.145588 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.146091 kubelet[2686]: E0710 04:57:09.145762 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.146091 kubelet[2686]: W0710 04:57:09.145770 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.146091 kubelet[2686]: E0710 04:57:09.145778 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.146446 kubelet[2686]: E0710 04:57:09.146397 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.146446 kubelet[2686]: W0710 04:57:09.146441 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.146536 kubelet[2686]: E0710 04:57:09.146453 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.146760 kubelet[2686]: E0710 04:57:09.146741 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.146760 kubelet[2686]: W0710 04:57:09.146753 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.147054 kubelet[2686]: E0710 04:57:09.146763 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.147054 kubelet[2686]: E0710 04:57:09.146965 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.147054 kubelet[2686]: W0710 04:57:09.146990 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.147054 kubelet[2686]: E0710 04:57:09.147002 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.147527 kubelet[2686]: E0710 04:57:09.147128 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.147527 kubelet[2686]: W0710 04:57:09.147136 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.147527 kubelet[2686]: E0710 04:57:09.147187 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.147527 kubelet[2686]: E0710 04:57:09.147312 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.147527 kubelet[2686]: W0710 04:57:09.147320 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.147527 kubelet[2686]: E0710 04:57:09.147328 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.148442 kubelet[2686]: E0710 04:57:09.147815 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.148442 kubelet[2686]: W0710 04:57:09.147835 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.148442 kubelet[2686]: E0710 04:57:09.147848 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.149328 kubelet[2686]: E0710 04:57:09.148838 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.149328 kubelet[2686]: W0710 04:57:09.148858 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.149328 kubelet[2686]: E0710 04:57:09.148875 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.149728 kubelet[2686]: E0710 04:57:09.149712 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.150366 kubelet[2686]: W0710 04:57:09.149796 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.150366 kubelet[2686]: E0710 04:57:09.149814 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:09.150700 kubelet[2686]: E0710 04:57:09.150684 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.150763 kubelet[2686]: W0710 04:57:09.150751 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.150823 kubelet[2686]: E0710 04:57:09.150811 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.151694 kubelet[2686]: E0710 04:57:09.151676 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.152187 kubelet[2686]: W0710 04:57:09.151785 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.152187 kubelet[2686]: E0710 04:57:09.151817 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.152933 kubelet[2686]: E0710 04:57:09.152647 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.153089 kubelet[2686]: W0710 04:57:09.153027 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.153089 kubelet[2686]: E0710 04:57:09.153058 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.153990 kubelet[2686]: E0710 04:57:09.153907 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.153990 kubelet[2686]: W0710 04:57:09.153922 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.153990 kubelet[2686]: E0710 04:57:09.153936 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:09.154883 kubelet[2686]: E0710 04:57:09.154812 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:09.154883 kubelet[2686]: W0710 04:57:09.154838 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:09.154883 kubelet[2686]: E0710 04:57:09.154854 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.057744 containerd[1540]: time="2025-07-10T04:57:10.057685593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:10.058439 containerd[1540]: time="2025-07-10T04:57:10.058398935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 04:57:10.059284 containerd[1540]: time="2025-07-10T04:57:10.059248729Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:10.061618 containerd[1540]: time="2025-07-10T04:57:10.061585613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:10.062491 containerd[1540]: time="2025-07-10T04:57:10.062460449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.134256857s" Jul 10 04:57:10.062575 containerd[1540]: time="2025-07-10T04:57:10.062549417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 04:57:10.066089 kubelet[2686]: I0710 04:57:10.066057 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:10.066544 kubelet[2686]: E0710 04:57:10.066386 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:10.066583 containerd[1540]: time="2025-07-10T04:57:10.066325626Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 04:57:10.074017 containerd[1540]: time="2025-07-10T04:57:10.073540935Z" level=info msg="Container 2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:10.080992 containerd[1540]: time="2025-07-10T04:57:10.080898455Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\"" Jul 10 04:57:10.081608 containerd[1540]: time="2025-07-10T04:57:10.081503628Z" level=info msg="StartContainer for \"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\"" Jul 10 04:57:10.083196 containerd[1540]: time="2025-07-10T04:57:10.083159933Z" level=info msg="connecting to shim 2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec" address="unix:///run/containerd/s/32ea88571b9bc98e38ffb626377660c2f2b48e27b6f235722b221a87ef5cb3a1" protocol=ttrpc version=3 Jul 10 04:57:10.105179 systemd[1]: Started 
cri-containerd-2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec.scope - libcontainer container 2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec. Jul 10 04:57:10.124745 kubelet[2686]: E0710 04:57:10.124709 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.125235 kubelet[2686]: W0710 04:57:10.124878 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.125235 kubelet[2686]: E0710 04:57:10.124906 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.125235 kubelet[2686]: E0710 04:57:10.125104 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.125235 kubelet[2686]: W0710 04:57:10.125114 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.125235 kubelet[2686]: E0710 04:57:10.125155 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.125481 kubelet[2686]: E0710 04:57:10.125438 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.125481 kubelet[2686]: W0710 04:57:10.125451 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.125640 kubelet[2686]: E0710 04:57:10.125460 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.125795 kubelet[2686]: E0710 04:57:10.125777 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.125860 kubelet[2686]: W0710 04:57:10.125848 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.126035 kubelet[2686]: E0710 04:57:10.125901 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.126150 kubelet[2686]: E0710 04:57:10.126138 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.126230 kubelet[2686]: W0710 04:57:10.126218 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.126382 kubelet[2686]: E0710 04:57:10.126270 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.126582 kubelet[2686]: E0710 04:57:10.126504 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.126582 kubelet[2686]: W0710 04:57:10.126522 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.126582 kubelet[2686]: E0710 04:57:10.126533 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.126871 kubelet[2686]: E0710 04:57:10.126856 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.127055 kubelet[2686]: W0710 04:57:10.126909 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.127055 kubelet[2686]: E0710 04:57:10.126931 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.127299 kubelet[2686]: E0710 04:57:10.127277 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.127477 kubelet[2686]: W0710 04:57:10.127364 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.127477 kubelet[2686]: E0710 04:57:10.127380 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.127648 kubelet[2686]: E0710 04:57:10.127601 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.127648 kubelet[2686]: W0710 04:57:10.127614 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.127648 kubelet[2686]: E0710 04:57:10.127624 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.127906 kubelet[2686]: E0710 04:57:10.127894 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.128047 kubelet[2686]: W0710 04:57:10.127972 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.128047 kubelet[2686]: E0710 04:57:10.128023 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.128282 kubelet[2686]: E0710 04:57:10.128259 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.128438 kubelet[2686]: W0710 04:57:10.128341 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.128438 kubelet[2686]: E0710 04:57:10.128370 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.128669 kubelet[2686]: E0710 04:57:10.128613 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.128669 kubelet[2686]: W0710 04:57:10.128625 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.128669 kubelet[2686]: E0710 04:57:10.128634 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.129076 kubelet[2686]: E0710 04:57:10.129040 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.129076 kubelet[2686]: W0710 04:57:10.129052 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.129254 kubelet[2686]: E0710 04:57:10.129174 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.129453 kubelet[2686]: E0710 04:57:10.129440 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.129570 kubelet[2686]: W0710 04:57:10.129520 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.129570 kubelet[2686]: E0710 04:57:10.129535 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.129831 kubelet[2686]: E0710 04:57:10.129819 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.130243 kubelet[2686]: W0710 04:57:10.130144 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.130243 kubelet[2686]: E0710 04:57:10.130162 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.146000 containerd[1540]: time="2025-07-10T04:57:10.145872076Z" level=info msg="StartContainer for \"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\" returns successfully" Jul 10 04:57:10.154366 kubelet[2686]: E0710 04:57:10.154338 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.154366 kubelet[2686]: W0710 04:57:10.154357 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.154545 kubelet[2686]: E0710 04:57:10.154375 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.154619 kubelet[2686]: E0710 04:57:10.154604 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.154619 kubelet[2686]: W0710 04:57:10.154617 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.154704 kubelet[2686]: E0710 04:57:10.154628 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.154888 kubelet[2686]: E0710 04:57:10.154870 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.154948 kubelet[2686]: W0710 04:57:10.154892 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.154948 kubelet[2686]: E0710 04:57:10.154907 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.155155 kubelet[2686]: E0710 04:57:10.155144 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.155155 kubelet[2686]: W0710 04:57:10.155155 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.155244 kubelet[2686]: E0710 04:57:10.155165 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.155341 kubelet[2686]: E0710 04:57:10.155329 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.155341 kubelet[2686]: W0710 04:57:10.155341 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.155446 kubelet[2686]: E0710 04:57:10.155350 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.155547 kubelet[2686]: E0710 04:57:10.155536 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.155595 kubelet[2686]: W0710 04:57:10.155547 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.155595 kubelet[2686]: E0710 04:57:10.155556 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.155887 kubelet[2686]: E0710 04:57:10.155869 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.156105 kubelet[2686]: W0710 04:57:10.155969 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.156105 kubelet[2686]: E0710 04:57:10.156009 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.156381 kubelet[2686]: E0710 04:57:10.156368 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.156445 kubelet[2686]: W0710 04:57:10.156432 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.156496 kubelet[2686]: E0710 04:57:10.156485 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.156794 kubelet[2686]: E0710 04:57:10.156735 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.156965 kubelet[2686]: W0710 04:57:10.156846 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.156965 kubelet[2686]: E0710 04:57:10.156862 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.157247 kubelet[2686]: E0710 04:57:10.157208 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.157247 kubelet[2686]: W0710 04:57:10.157223 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.157421 kubelet[2686]: E0710 04:57:10.157372 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.157738 kubelet[2686]: E0710 04:57:10.157684 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.157738 kubelet[2686]: W0710 04:57:10.157699 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.157738 kubelet[2686]: E0710 04:57:10.157710 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.158262 kubelet[2686]: E0710 04:57:10.158119 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.158262 kubelet[2686]: W0710 04:57:10.158130 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.158262 kubelet[2686]: E0710 04:57:10.158141 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.158408 kubelet[2686]: E0710 04:57:10.158398 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.158460 kubelet[2686]: W0710 04:57:10.158449 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.158514 kubelet[2686]: E0710 04:57:10.158503 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.158760 kubelet[2686]: E0710 04:57:10.158747 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.158833 kubelet[2686]: W0710 04:57:10.158822 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.158889 kubelet[2686]: E0710 04:57:10.158878 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 04:57:10.159344 kubelet[2686]: E0710 04:57:10.159195 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.159344 kubelet[2686]: W0710 04:57:10.159209 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.159344 kubelet[2686]: E0710 04:57:10.159219 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.159618 kubelet[2686]: E0710 04:57:10.159604 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.159811 kubelet[2686]: W0710 04:57:10.159668 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.159811 kubelet[2686]: E0710 04:57:10.159684 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.160043 kubelet[2686]: E0710 04:57:10.160026 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.160043 kubelet[2686]: W0710 04:57:10.160042 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.160152 kubelet[2686]: E0710 04:57:10.160053 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.160234 kubelet[2686]: E0710 04:57:10.160201 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 04:57:10.160234 kubelet[2686]: W0710 04:57:10.160209 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 04:57:10.160234 kubelet[2686]: E0710 04:57:10.160217 2686 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 04:57:10.170354 systemd[1]: cri-containerd-2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec.scope: Deactivated successfully. 
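The pod_startup_latency_tracker entry logged at 04:57:09.072 above is internally consistent: podStartE2EDuration is roughly observedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). The quick arithmetic check below re-derives the numbers from the log; treating the SLO figure as "E2E minus pull time" is an assumption about how the tracker computes it.

package main

import "fmt"

// Re-derives the calico-typha startup figures from the log above.
// Offsets are seconds past 04:57:00 UTC, copied from the tracker entry.
func main() {
	created := 6.0           // podCreationTimestamp 04:57:06
	firstPull := 7.349507247 // firstStartedPulling
	lastPull := 8.927912164  // lastFinishedPulling
	running := 9.072726674   // observedRunningTime

	e2e := running - created                    // ≈3.0727s; the log reports 3.072813482s
	slo := 3.072813482 - (lastPull - firstPull) // ≈1.4944s, matches podStartSLOduration
	fmt.Printf("E2E≈%.4fs SLO≈%.4fs\n", e2e, slo)
}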
Jul 10 04:57:10.198860 containerd[1540]: time="2025-07-10T04:57:10.198714159Z" level=info msg="received exit event container_id:\"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\" id:\"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\" pid:3399 exited_at:{seconds:1752123430 nanos:195488278}" Jul 10 04:57:10.199488 containerd[1540]: time="2025-07-10T04:57:10.199436382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\" id:\"2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec\" pid:3399 exited_at:{seconds:1752123430 nanos:195488278}" Jul 10 04:57:10.237491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c914f5699fd7a2739c437b38d73cbb0968de9d937958f75eee45aea96ee73ec-rootfs.mount: Deactivated successfully. Jul 10 04:57:10.992054 kubelet[2686]: E0710 04:57:10.991930 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2vr6" podUID="13104a3f-c535-4efe-b2aa-5579666df893" Jul 10 04:57:11.067621 kubelet[2686]: E0710 04:57:11.067563 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:11.068694 containerd[1540]: time="2025-07-10T04:57:11.068435801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 04:57:12.069780 kubelet[2686]: E0710 04:57:12.069719 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:12.992550 kubelet[2686]: E0710 04:57:12.992235 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2vr6" podUID="13104a3f-c535-4efe-b2aa-5579666df893" Jul 10 04:57:13.989653 containerd[1540]: time="2025-07-10T04:57:13.989542861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:13.990111 containerd[1540]: time="2025-07-10T04:57:13.990080779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 10 04:57:13.990789 containerd[1540]: time="2025-07-10T04:57:13.990765988Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:13.992412 containerd[1540]: time="2025-07-10T04:57:13.992382584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:13.993294 containerd[1540]: time="2025-07-10T04:57:13.993262968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.924756721s" Jul 10 04:57:13.993338 containerd[1540]: time="2025-07-10T04:57:13.993295690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 04:57:13.996525 containerd[1540]: time="2025-07-10T04:57:13.996481599Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 04:57:14.007119 containerd[1540]: time="2025-07-10T04:57:14.007074572Z" level=info msg="Container f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:14.015253 containerd[1540]: time="2025-07-10T04:57:14.015196398Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\"" Jul 10 04:57:14.016116 containerd[1540]: time="2025-07-10T04:57:14.016075097Z" level=info msg="StartContainer for \"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\"" Jul 10 04:57:14.018024 containerd[1540]: time="2025-07-10T04:57:14.017968265Z" level=info msg="connecting to shim f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40" address="unix:///run/containerd/s/32ea88571b9bc98e38ffb626377660c2f2b48e27b6f235722b221a87ef5cb3a1" protocol=ttrpc version=3 Jul 10 04:57:14.044134 systemd[1]: Started cri-containerd-f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40.scope - libcontainer container f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40. Jul 10 04:57:14.082256 containerd[1540]: time="2025-07-10T04:57:14.082202268Z" level=info msg="StartContainer for \"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\" returns successfully" Jul 10 04:57:14.723299 systemd[1]: cri-containerd-f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40.scope: Deactivated successfully. Jul 10 04:57:14.724393 systemd[1]: cri-containerd-f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40.scope: Consumed 487ms CPU time, 175.3M memory peak, 3.8M read from disk, 165.8M written to disk. Jul 10 04:57:14.726713 containerd[1540]: time="2025-07-10T04:57:14.726651878Z" level=info msg="received exit event container_id:\"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\" id:\"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\" pid:3493 exited_at:{seconds:1752123434 nanos:726456425}" Jul 10 04:57:14.727223 containerd[1540]: time="2025-07-10T04:57:14.727190555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\" id:\"f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40\" pid:3493 exited_at:{seconds:1752123434 nanos:726456425}" Jul 10 04:57:14.745999 kubelet[2686]: I0710 04:57:14.745955 2686 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 04:57:14.746568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f413387fb2557596c72bafc39d7d6b0cc422776d8c4b18edf71d435d785c6a40-rootfs.mount: Deactivated successfully. 
Jul 10 04:57:14.801519 systemd[1]: Created slice kubepods-burstable-pode92b6327_56f3_4b3e_98a8_b60c5582a40f.slice - libcontainer container kubepods-burstable-pode92b6327_56f3_4b3e_98a8_b60c5582a40f.slice. Jul 10 04:57:14.814402 systemd[1]: Created slice kubepods-besteffort-pod4dd64e13_1516_462b_8e90_59008a6a95ee.slice - libcontainer container kubepods-besteffort-pod4dd64e13_1516_462b_8e90_59008a6a95ee.slice. Jul 10 04:57:14.821384 systemd[1]: Created slice kubepods-burstable-podc0885256_e6c0_4f6d_ad17_234f0a42947d.slice - libcontainer container kubepods-burstable-podc0885256_e6c0_4f6d_ad17_234f0a42947d.slice. Jul 10 04:57:14.830320 systemd[1]: Created slice kubepods-besteffort-pode5d6aa03_7d0e_4cf3_a9e5_c418263a4555.slice - libcontainer container kubepods-besteffort-pode5d6aa03_7d0e_4cf3_a9e5_c418263a4555.slice. Jul 10 04:57:14.837167 systemd[1]: Created slice kubepods-besteffort-pod70941dce_7e88_44af_942f_3cad8a49ca87.slice - libcontainer container kubepods-besteffort-pod70941dce_7e88_44af_942f_3cad8a49ca87.slice. Jul 10 04:57:14.847508 systemd[1]: Created slice kubepods-besteffort-podc01ff8e9_73d0_4553_9316_985ce4242995.slice - libcontainer container kubepods-besteffort-podc01ff8e9_73d0_4553_9316_985ce4242995.slice. Jul 10 04:57:14.853320 systemd[1]: Created slice kubepods-besteffort-pod560ec9b9_9f2a_4afb_a6c7_78920c635be3.slice - libcontainer container kubepods-besteffort-pod560ec9b9_9f2a_4afb_a6c7_78920c635be3.slice. Jul 10 04:57:14.887299 kubelet[2686]: I0710 04:57:14.887212 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92b6327-56f3-4b3e-98a8-b60c5582a40f-config-volume\") pod \"coredns-674b8bbfcf-9r28g\" (UID: \"e92b6327-56f3-4b3e-98a8-b60c5582a40f\") " pod="kube-system/coredns-674b8bbfcf-9r28g" Jul 10 04:57:14.887299 kubelet[2686]: I0710 04:57:14.887260 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/560ec9b9-9f2a-4afb-a6c7-78920c635be3-tigera-ca-bundle\") pod \"calico-kube-controllers-769b95776c-5sv6r\" (UID: \"560ec9b9-9f2a-4afb-a6c7-78920c635be3\") " pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" Jul 10 04:57:14.887299 kubelet[2686]: I0710 04:57:14.887292 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-ca-bundle\") pod \"whisker-596884fd4b-mx25q\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " pod="calico-system/whisker-596884fd4b-mx25q" Jul 10 04:57:14.887509 kubelet[2686]: I0710 04:57:14.887312 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfjm\" (UniqueName: \"kubernetes.io/projected/560ec9b9-9f2a-4afb-a6c7-78920c635be3-kube-api-access-pwfjm\") pod \"calico-kube-controllers-769b95776c-5sv6r\" (UID: \"560ec9b9-9f2a-4afb-a6c7-78920c635be3\") " pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" Jul 10 04:57:14.887509 kubelet[2686]: I0710 04:57:14.887394 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70941dce-7e88-44af-942f-3cad8a49ca87-calico-apiserver-certs\") pod \"calico-apiserver-676c4b66fd-55hj2\" (UID: \"70941dce-7e88-44af-942f-3cad8a49ca87\") " 
pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" Jul 10 04:57:14.887509 kubelet[2686]: I0710 04:57:14.887456 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0885256-e6c0-4f6d-ad17-234f0a42947d-config-volume\") pod \"coredns-674b8bbfcf-gnzxc\" (UID: \"c0885256-e6c0-4f6d-ad17-234f0a42947d\") " pod="kube-system/coredns-674b8bbfcf-gnzxc" Jul 10 04:57:14.887509 kubelet[2686]: I0710 04:57:14.887477 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-backend-key-pair\") pod \"whisker-596884fd4b-mx25q\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " pod="calico-system/whisker-596884fd4b-mx25q" Jul 10 04:57:14.887509 kubelet[2686]: I0710 04:57:14.887493 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48dfh\" (UniqueName: \"kubernetes.io/projected/c01ff8e9-73d0-4553-9316-985ce4242995-kube-api-access-48dfh\") pod \"whisker-596884fd4b-mx25q\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " pod="calico-system/whisker-596884fd4b-mx25q" Jul 10 04:57:14.887613 kubelet[2686]: I0710 04:57:14.887523 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phgj\" (UniqueName: \"kubernetes.io/projected/e5d6aa03-7d0e-4cf3-a9e5-c418263a4555-kube-api-access-7phgj\") pod \"goldmane-768f4c5c69-9tzsm\" (UID: \"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555\") " pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:14.887613 kubelet[2686]: I0710 04:57:14.887542 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxm6\" (UniqueName: \"kubernetes.io/projected/70941dce-7e88-44af-942f-3cad8a49ca87-kube-api-access-gqxm6\") pod \"calico-apiserver-676c4b66fd-55hj2\" (UID: \"70941dce-7e88-44af-942f-3cad8a49ca87\") " pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" Jul 10 04:57:14.887613 kubelet[2686]: I0710 04:57:14.887557 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4dd64e13-1516-462b-8e90-59008a6a95ee-calico-apiserver-certs\") pod \"calico-apiserver-676c4b66fd-cqdbk\" (UID: \"4dd64e13-1516-462b-8e90-59008a6a95ee\") " pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" Jul 10 04:57:14.887673 kubelet[2686]: I0710 04:57:14.887615 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d6aa03-7d0e-4cf3-a9e5-c418263a4555-config\") pod \"goldmane-768f4c5c69-9tzsm\" (UID: \"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555\") " pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:14.887673 kubelet[2686]: I0710 04:57:14.887665 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5d6aa03-7d0e-4cf3-a9e5-c418263a4555-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9tzsm\" (UID: \"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555\") " pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:14.887718 kubelet[2686]: I0710 04:57:14.887687 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" 
(UniqueName: \"kubernetes.io/secret/e5d6aa03-7d0e-4cf3-a9e5-c418263a4555-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9tzsm\" (UID: \"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555\") " pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:14.887740 kubelet[2686]: I0710 04:57:14.887716 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmds2\" (UniqueName: \"kubernetes.io/projected/4dd64e13-1516-462b-8e90-59008a6a95ee-kube-api-access-qmds2\") pod \"calico-apiserver-676c4b66fd-cqdbk\" (UID: \"4dd64e13-1516-462b-8e90-59008a6a95ee\") " pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" Jul 10 04:57:14.887740 kubelet[2686]: I0710 04:57:14.887732 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8xh\" (UniqueName: \"kubernetes.io/projected/e92b6327-56f3-4b3e-98a8-b60c5582a40f-kube-api-access-tt8xh\") pod \"coredns-674b8bbfcf-9r28g\" (UID: \"e92b6327-56f3-4b3e-98a8-b60c5582a40f\") " pod="kube-system/coredns-674b8bbfcf-9r28g" Jul 10 04:57:14.887803 kubelet[2686]: I0710 04:57:14.887781 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nnb\" (UniqueName: \"kubernetes.io/projected/c0885256-e6c0-4f6d-ad17-234f0a42947d-kube-api-access-78nnb\") pod \"coredns-674b8bbfcf-gnzxc\" (UID: \"c0885256-e6c0-4f6d-ad17-234f0a42947d\") " pod="kube-system/coredns-674b8bbfcf-gnzxc" Jul 10 04:57:15.031146 systemd[1]: Created slice kubepods-besteffort-pod13104a3f_c535_4efe_b2aa_5579666df893.slice - libcontainer container kubepods-besteffort-pod13104a3f_c535_4efe_b2aa_5579666df893.slice. Jul 10 04:57:15.034089 containerd[1540]: time="2025-07-10T04:57:15.034051987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2vr6,Uid:13104a3f-c535-4efe-b2aa-5579666df893,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:15.098612 containerd[1540]: time="2025-07-10T04:57:15.096413721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 04:57:15.106120 kubelet[2686]: E0710 04:57:15.106083 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:15.106874 containerd[1540]: time="2025-07-10T04:57:15.106840179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9r28g,Uid:e92b6327-56f3-4b3e-98a8-b60c5582a40f,Namespace:kube-system,Attempt:0,}" Jul 10 04:57:15.119487 containerd[1540]: time="2025-07-10T04:57:15.119186318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-cqdbk,Uid:4dd64e13-1516-462b-8e90-59008a6a95ee,Namespace:calico-apiserver,Attempt:0,}" Jul 10 04:57:15.133806 containerd[1540]: time="2025-07-10T04:57:15.133759238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9tzsm,Uid:e5d6aa03-7d0e-4cf3-a9e5-c418263a4555,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:15.144159 kubelet[2686]: E0710 04:57:15.144122 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:15.146222 containerd[1540]: time="2025-07-10T04:57:15.146176501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnzxc,Uid:c0885256-e6c0-4f6d-ad17-234f0a42947d,Namespace:kube-system,Attempt:0,}" Jul 10 04:57:15.146676 
containerd[1540]: time="2025-07-10T04:57:15.146647611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-55hj2,Uid:70941dce-7e88-44af-942f-3cad8a49ca87,Namespace:calico-apiserver,Attempt:0,}" Jul 10 04:57:15.150892 containerd[1540]: time="2025-07-10T04:57:15.150855196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596884fd4b-mx25q,Uid:c01ff8e9-73d0-4553-9316-985ce4242995,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:15.156469 containerd[1540]: time="2025-07-10T04:57:15.156443509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b95776c-5sv6r,Uid:560ec9b9-9f2a-4afb-a6c7-78920c635be3,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:15.470585 containerd[1540]: time="2025-07-10T04:57:15.470469402Z" level=error msg="Failed to destroy network for sandbox \"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.471319 containerd[1540]: time="2025-07-10T04:57:15.471184447Z" level=error msg="Failed to destroy network for sandbox \"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.475521 containerd[1540]: time="2025-07-10T04:57:15.475463477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9r28g,Uid:e92b6327-56f3-4b3e-98a8-b60c5582a40f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.475924 kubelet[2686]: E0710 04:57:15.475876 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.476059 kubelet[2686]: E0710 04:57:15.475949 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9r28g" Jul 10 04:57:15.477014 containerd[1540]: time="2025-07-10T04:57:15.476933169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b95776c-5sv6r,Uid:560ec9b9-9f2a-4afb-a6c7-78920c635be3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.477791 kubelet[2686]: E0710 04:57:15.477139 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.477791 kubelet[2686]: E0710 04:57:15.477182 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" Jul 10 04:57:15.478044 kubelet[2686]: E0710 04:57:15.478012 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9r28g" Jul 10 04:57:15.481505 kubelet[2686]: E0710 04:57:15.481464 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" Jul 10 04:57:15.481665 kubelet[2686]: E0710 04:57:15.481628 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-769b95776c-5sv6r_calico-system(560ec9b9-9f2a-4afb-a6c7-78920c635be3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-769b95776c-5sv6r_calico-system(560ec9b9-9f2a-4afb-a6c7-78920c635be3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09316426efa30953fd15a81afd9b667bb39a3713e2e36a0d109a070954da000d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" podUID="560ec9b9-9f2a-4afb-a6c7-78920c635be3" Jul 10 04:57:15.482002 kubelet[2686]: E0710 04:57:15.481946 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9r28g_kube-system(e92b6327-56f3-4b3e-98a8-b60c5582a40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9r28g_kube-system(e92b6327-56f3-4b3e-98a8-b60c5582a40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6b8ce2da12ddd20eba614501419c6cdbc37ef6349cc341d2b30cb06029ae833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9r28g" podUID="e92b6327-56f3-4b3e-98a8-b60c5582a40f" Jul 10 04:57:15.482086 containerd[1540]: time="2025-07-10T04:57:15.482018850Z" level=error msg="Failed to destroy network for sandbox \"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.486225 containerd[1540]: time="2025-07-10T04:57:15.486184513Z" level=error msg="Failed to destroy network for sandbox \"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.486602 containerd[1540]: time="2025-07-10T04:57:15.486556216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-55hj2,Uid:70941dce-7e88-44af-942f-3cad8a49ca87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.487270 kubelet[2686]: E0710 04:57:15.487165 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.487723 kubelet[2686]: E0710 04:57:15.487589 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" Jul 10 04:57:15.487801 containerd[1540]: time="2025-07-10T04:57:15.487703089Z" level=error msg="Failed to destroy network for sandbox \"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.487891 kubelet[2686]: E0710 04:57:15.487870 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" Jul 10 04:57:15.488209 kubelet[2686]: E0710 04:57:15.488090 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-676c4b66fd-55hj2_calico-apiserver(70941dce-7e88-44af-942f-3cad8a49ca87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676c4b66fd-55hj2_calico-apiserver(70941dce-7e88-44af-942f-3cad8a49ca87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c5e96dec9fe6e9154a5d2d575d1459c7529179c9ff655c5da72d96ac917afa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" podUID="70941dce-7e88-44af-942f-3cad8a49ca87" Jul 10 04:57:15.489475 containerd[1540]: time="2025-07-10T04:57:15.489434718Z" level=error msg="Failed to destroy network for sandbox \"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.490296 containerd[1540]: time="2025-07-10T04:57:15.490255730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9tzsm,Uid:e5d6aa03-7d0e-4cf3-a9e5-c418263a4555,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.490578 kubelet[2686]: E0710 04:57:15.490539 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.490633 kubelet[2686]: E0710 04:57:15.490594 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:15.490633 kubelet[2686]: E0710 04:57:15.490612 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9tzsm" Jul 10 04:57:15.490724 kubelet[2686]: E0710 04:57:15.490653 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9tzsm_calico-system(e5d6aa03-7d0e-4cf3-a9e5-c418263a4555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-9tzsm_calico-system(e5d6aa03-7d0e-4cf3-a9e5-c418263a4555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2e43b61166396d5e4c94cf82d8af3a2070dfbc651887c1157d21e054b6fc71a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9tzsm" podUID="e5d6aa03-7d0e-4cf3-a9e5-c418263a4555" Jul 10 04:57:15.492046 containerd[1540]: time="2025-07-10T04:57:15.492008841Z" level=error msg="Failed to destroy network for sandbox \"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.492880 containerd[1540]: time="2025-07-10T04:57:15.492843613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-cqdbk,Uid:4dd64e13-1516-462b-8e90-59008a6a95ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.493214 kubelet[2686]: E0710 04:57:15.493027 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.493214 kubelet[2686]: E0710 04:57:15.493065 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" Jul 10 04:57:15.493214 kubelet[2686]: E0710 04:57:15.493081 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" Jul 10 04:57:15.493313 kubelet[2686]: E0710 04:57:15.493122 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676c4b66fd-cqdbk_calico-apiserver(4dd64e13-1516-462b-8e90-59008a6a95ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676c4b66fd-cqdbk_calico-apiserver(4dd64e13-1516-462b-8e90-59008a6a95ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f717996f57b8b24c666b1ccb1f20c4a47a1d6457614901e3c8d65e4846636f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" podUID="4dd64e13-1516-462b-8e90-59008a6a95ee" Jul 10 04:57:15.493362 containerd[1540]: time="2025-07-10T04:57:15.493315323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2vr6,Uid:13104a3f-c535-4efe-b2aa-5579666df893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.493525 kubelet[2686]: E0710 04:57:15.493457 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.493525 kubelet[2686]: E0710 04:57:15.493519 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:15.493580 kubelet[2686]: E0710 04:57:15.493535 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2vr6" Jul 10 04:57:15.493604 kubelet[2686]: E0710 04:57:15.493573 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2vr6_calico-system(13104a3f-c535-4efe-b2aa-5579666df893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2vr6_calico-system(13104a3f-c535-4efe-b2aa-5579666df893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc5d132d799230b04b8431e1ef48d274e7a24fc5d4bc7931dfe04b06beac33ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2vr6" podUID="13104a3f-c535-4efe-b2aa-5579666df893" Jul 10 04:57:15.494426 containerd[1540]: time="2025-07-10T04:57:15.494292985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596884fd4b-mx25q,Uid:c01ff8e9-73d0-4553-9316-985ce4242995,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.494786 kubelet[2686]: E0710 04:57:15.494506 2686 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.494786 kubelet[2686]: E0710 04:57:15.494560 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596884fd4b-mx25q" Jul 10 04:57:15.494786 kubelet[2686]: E0710 04:57:15.494578 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596884fd4b-mx25q" Jul 10 04:57:15.494924 kubelet[2686]: E0710 04:57:15.494612 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-596884fd4b-mx25q_calico-system(c01ff8e9-73d0-4553-9316-985ce4242995)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-596884fd4b-mx25q_calico-system(c01ff8e9-73d0-4553-9316-985ce4242995)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60b1bf3b1d116aca5b289119594d6084c8d14b2b430afb40a668c980b131ca1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596884fd4b-mx25q" podUID="c01ff8e9-73d0-4553-9316-985ce4242995" Jul 10 04:57:15.495891 containerd[1540]: time="2025-07-10T04:57:15.495860083Z" level=error msg="Failed to destroy network for sandbox \"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.496711 containerd[1540]: time="2025-07-10T04:57:15.496679335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnzxc,Uid:c0885256-e6c0-4f6d-ad17-234f0a42947d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 04:57:15.496846 kubelet[2686]: E0710 04:57:15.496818 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 10 04:57:15.496897 kubelet[2686]: E0710 04:57:15.496857 2686 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gnzxc" Jul 10 04:57:15.496897 kubelet[2686]: E0710 04:57:15.496875 2686 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gnzxc" Jul 10 04:57:15.496948 kubelet[2686]: E0710 04:57:15.496913 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gnzxc_kube-system(c0885256-e6c0-4f6d-ad17-234f0a42947d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gnzxc_kube-system(c0885256-e6c0-4f6d-ad17-234f0a42947d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b69afcf3d4adfc74a557e84e8194115547e26c65c2f52d24bc651aa9658b15b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gnzxc" podUID="c0885256-e6c0-4f6d-ad17-234f0a42947d" Jul 10 04:57:16.006108 systemd[1]: run-netns-cni\x2db71cc7ea\x2dbbd5\x2d44a2\x2dd258\x2df0573b65100b.mount: Deactivated successfully. Jul 10 04:57:16.006199 systemd[1]: run-netns-cni\x2db25d7b14\x2d0370\x2d3cbc\x2d5160\x2d1840172b0977.mount: Deactivated successfully. Jul 10 04:57:19.337882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291917689.mount: Deactivated successfully. 
Jul 10 04:57:19.568320 containerd[1540]: time="2025-07-10T04:57:19.551293003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 04:57:19.568320 containerd[1540]: time="2025-07-10T04:57:19.554485639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.458033635s" Jul 10 04:57:19.569409 containerd[1540]: time="2025-07-10T04:57:19.568352434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 04:57:19.569409 containerd[1540]: time="2025-07-10T04:57:19.558681483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:19.569409 containerd[1540]: time="2025-07-10T04:57:19.569029067Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:19.569481 containerd[1540]: time="2025-07-10T04:57:19.569440447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:19.585423 containerd[1540]: time="2025-07-10T04:57:19.585372824Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 04:57:19.598274 containerd[1540]: time="2025-07-10T04:57:19.598155727Z" level=info msg="Container a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:19.612458 containerd[1540]: time="2025-07-10T04:57:19.612400661Z" level=info msg="CreateContainer within sandbox \"4463db763eb689bc0ee25859e89f1bf0314247d4dc6dc019b35182c6fbe52da7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\"" Jul 10 04:57:19.612998 containerd[1540]: time="2025-07-10T04:57:19.612955168Z" level=info msg="StartContainer for \"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\"" Jul 10 04:57:19.614394 containerd[1540]: time="2025-07-10T04:57:19.614366837Z" level=info msg="connecting to shim a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7" address="unix:///run/containerd/s/32ea88571b9bc98e38ffb626377660c2f2b48e27b6f235722b221a87ef5cb3a1" protocol=ttrpc version=3 Jul 10 04:57:19.638152 systemd[1]: Started cri-containerd-a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7.scope - libcontainer container a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7. Jul 10 04:57:19.705318 containerd[1540]: time="2025-07-10T04:57:19.705281428Z" level=info msg="StartContainer for \"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\" returns successfully" Jul 10 04:57:19.899998 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 04:57:19.900251 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 10 04:57:20.020302 kubelet[2686]: I0710 04:57:20.020181 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48dfh\" (UniqueName: \"kubernetes.io/projected/c01ff8e9-73d0-4553-9316-985ce4242995-kube-api-access-48dfh\") pod \"c01ff8e9-73d0-4553-9316-985ce4242995\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " Jul 10 04:57:20.020911 kubelet[2686]: I0710 04:57:20.020442 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-ca-bundle\") pod \"c01ff8e9-73d0-4553-9316-985ce4242995\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " Jul 10 04:57:20.020911 kubelet[2686]: I0710 04:57:20.020492 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-backend-key-pair\") pod \"c01ff8e9-73d0-4553-9316-985ce4242995\" (UID: \"c01ff8e9-73d0-4553-9316-985ce4242995\") " Jul 10 04:57:20.037824 kubelet[2686]: I0710 04:57:20.037567 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01ff8e9-73d0-4553-9316-985ce4242995-kube-api-access-48dfh" (OuterVolumeSpecName: "kube-api-access-48dfh") pod "c01ff8e9-73d0-4553-9316-985ce4242995" (UID: "c01ff8e9-73d0-4553-9316-985ce4242995"). InnerVolumeSpecName "kube-api-access-48dfh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 04:57:20.039757 kubelet[2686]: I0710 04:57:20.039112 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c01ff8e9-73d0-4553-9316-985ce4242995" (UID: "c01ff8e9-73d0-4553-9316-985ce4242995"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 04:57:20.041557 kubelet[2686]: I0710 04:57:20.041506 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c01ff8e9-73d0-4553-9316-985ce4242995" (UID: "c01ff8e9-73d0-4553-9316-985ce4242995"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 04:57:20.120026 systemd[1]: Removed slice kubepods-besteffort-podc01ff8e9_73d0_4553_9316_985ce4242995.slice - libcontainer container kubepods-besteffort-podc01ff8e9_73d0_4553_9316_985ce4242995.slice. 
Jul 10 04:57:20.121243 kubelet[2686]: I0710 04:57:20.120796 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 04:57:20.121243 kubelet[2686]: I0710 04:57:20.121011 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-48dfh\" (UniqueName: \"kubernetes.io/projected/c01ff8e9-73d0-4553-9316-985ce4242995-kube-api-access-48dfh\") on node \"localhost\" DevicePath \"\"" Jul 10 04:57:20.121243 kubelet[2686]: I0710 04:57:20.121042 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c01ff8e9-73d0-4553-9316-985ce4242995-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 04:57:20.137772 kubelet[2686]: I0710 04:57:20.137705 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9thrm" podStartSLOduration=1.285439823 podStartE2EDuration="13.137686566s" podCreationTimestamp="2025-07-10 04:57:07 +0000 UTC" firstStartedPulling="2025-07-10 04:57:07.71688741 +0000 UTC m=+20.807364339" lastFinishedPulling="2025-07-10 04:57:19.569134153 +0000 UTC m=+32.659611082" observedRunningTime="2025-07-10 04:57:20.136428029 +0000 UTC m=+33.226904958" watchObservedRunningTime="2025-07-10 04:57:20.137686566 +0000 UTC m=+33.228163535" Jul 10 04:57:20.186559 systemd[1]: Created slice kubepods-besteffort-pod6334d89c_a293_4df5_951a_0b376ac4da58.slice - libcontainer container kubepods-besteffort-pod6334d89c_a293_4df5_951a_0b376ac4da58.slice. Jul 10 04:57:20.221674 kubelet[2686]: I0710 04:57:20.221626 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpjm\" (UniqueName: \"kubernetes.io/projected/6334d89c-a293-4df5-951a-0b376ac4da58-kube-api-access-qkpjm\") pod \"whisker-fbc597646-qg5kx\" (UID: \"6334d89c-a293-4df5-951a-0b376ac4da58\") " pod="calico-system/whisker-fbc597646-qg5kx" Jul 10 04:57:20.221674 kubelet[2686]: I0710 04:57:20.221680 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6334d89c-a293-4df5-951a-0b376ac4da58-whisker-backend-key-pair\") pod \"whisker-fbc597646-qg5kx\" (UID: \"6334d89c-a293-4df5-951a-0b376ac4da58\") " pod="calico-system/whisker-fbc597646-qg5kx" Jul 10 04:57:20.221827 kubelet[2686]: I0710 04:57:20.221726 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6334d89c-a293-4df5-951a-0b376ac4da58-whisker-ca-bundle\") pod \"whisker-fbc597646-qg5kx\" (UID: \"6334d89c-a293-4df5-951a-0b376ac4da58\") " pod="calico-system/whisker-fbc597646-qg5kx" Jul 10 04:57:20.340312 systemd[1]: var-lib-kubelet-pods-c01ff8e9\x2d73d0\x2d4553\x2d9316\x2d985ce4242995-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48dfh.mount: Deactivated successfully. Jul 10 04:57:20.340404 systemd[1]: var-lib-kubelet-pods-c01ff8e9\x2d73d0\x2d4553\x2d9316\x2d985ce4242995-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 10 04:57:20.491316 containerd[1540]: time="2025-07-10T04:57:20.491194332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbc597646-qg5kx,Uid:6334d89c-a293-4df5-951a-0b376ac4da58,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:20.688732 systemd-networkd[1443]: calic3b4989fd4e: Link UP Jul 10 04:57:20.688941 systemd-networkd[1443]: calic3b4989fd4e: Gained carrier Jul 10 04:57:20.701786 containerd[1540]: 2025-07-10 04:57:20.515 [INFO][3865] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 04:57:20.701786 containerd[1540]: 2025-07-10 04:57:20.573 [INFO][3865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--fbc597646--qg5kx-eth0 whisker-fbc597646- calico-system 6334d89c-a293-4df5-951a-0b376ac4da58 894 0 2025-07-10 04:57:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fbc597646 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-fbc597646-qg5kx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic3b4989fd4e [] [] }} ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-" Jul 10 04:57:20.701786 containerd[1540]: 2025-07-10 04:57:20.573 [INFO][3865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.701786 containerd[1540]: 2025-07-10 04:57:20.646 [INFO][3880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" HandleID="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Workload="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.646 [INFO][3880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" HandleID="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Workload="localhost-k8s-whisker--fbc597646--qg5kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a10b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-fbc597646-qg5kx", "timestamp":"2025-07-10 04:57:20.646599314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.646 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.646 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.646 [INFO][3880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.657 [INFO][3880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" host="localhost" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.662 [INFO][3880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.666 [INFO][3880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.668 [INFO][3880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.670 [INFO][3880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:20.702819 containerd[1540]: 2025-07-10 04:57:20.670 [INFO][3880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" host="localhost" Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.671 [INFO][3880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910 Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.674 [INFO][3880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" host="localhost" Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.678 [INFO][3880] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" host="localhost" Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.678 [INFO][3880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" host="localhost" Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.678 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:20.703133 containerd[1540]: 2025-07-10 04:57:20.678 [INFO][3880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" HandleID="k8s-pod-network.e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Workload="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.703263 containerd[1540]: 2025-07-10 04:57:20.681 [INFO][3865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fbc597646--qg5kx-eth0", GenerateName:"whisker-fbc597646-", Namespace:"calico-system", SelfLink:"", UID:"6334d89c-a293-4df5-951a-0b376ac4da58", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fbc597646", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-fbc597646-qg5kx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3b4989fd4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:20.703263 containerd[1540]: 2025-07-10 04:57:20.681 [INFO][3865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.703334 containerd[1540]: 2025-07-10 04:57:20.681 [INFO][3865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3b4989fd4e ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.703334 containerd[1540]: 2025-07-10 04:57:20.689 [INFO][3865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.703380 containerd[1540]: 2025-07-10 04:57:20.690 [INFO][3865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fbc597646--qg5kx-eth0", GenerateName:"whisker-fbc597646-", Namespace:"calico-system", SelfLink:"", UID:"6334d89c-a293-4df5-951a-0b376ac4da58", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fbc597646", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910", Pod:"whisker-fbc597646-qg5kx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3b4989fd4e", MAC:"c6:37:63:bb:21:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:20.703429 containerd[1540]: 2025-07-10 04:57:20.699 [INFO][3865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" Namespace="calico-system" Pod="whisker-fbc597646-qg5kx" WorkloadEndpoint="localhost-k8s-whisker--fbc597646--qg5kx-eth0" Jul 10 04:57:20.746584 containerd[1540]: time="2025-07-10T04:57:20.746326751Z" level=info msg="connecting to shim e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910" address="unix:///run/containerd/s/c65c077795eac6a5e25416b8372e6ae00d8eb5f6664993f832a3c9d8f5056b4a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:20.772123 systemd[1]: Started cri-containerd-e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910.scope - libcontainer container e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910. 
Jul 10 04:57:20.783330 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:20.803684 containerd[1540]: time="2025-07-10T04:57:20.803644217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbc597646-qg5kx,Uid:6334d89c-a293-4df5-951a-0b376ac4da58,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910\"" Jul 10 04:57:20.805048 containerd[1540]: time="2025-07-10T04:57:20.805018405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 04:57:20.996542 kubelet[2686]: I0710 04:57:20.996485 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c01ff8e9-73d0-4553-9316-985ce4242995" path="/var/lib/kubelet/pods/c01ff8e9-73d0-4553-9316-985ce4242995/volumes" Jul 10 04:57:21.234506 containerd[1540]: time="2025-07-10T04:57:21.234466138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\" id:\"93c35c0b37e49a79c6d2ebc1bba0dd4284aa4640947148e3b2871e8902b78a5a\" pid:3950 exit_status:1 exited_at:{seconds:1752123441 nanos:228400821}" Jul 10 04:57:21.651691 systemd-networkd[1443]: vxlan.calico: Link UP Jul 10 04:57:21.651809 systemd-networkd[1443]: vxlan.calico: Gained carrier Jul 10 04:57:21.863788 systemd-networkd[1443]: calic3b4989fd4e: Gained IPv6LL Jul 10 04:57:21.875366 containerd[1540]: time="2025-07-10T04:57:21.875327574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:21.875859 containerd[1540]: time="2025-07-10T04:57:21.875829470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 04:57:21.876512 containerd[1540]: time="2025-07-10T04:57:21.876488691Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:21.878333 containerd[1540]: time="2025-07-10T04:57:21.878308511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:21.879893 containerd[1540]: time="2025-07-10T04:57:21.879794679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.074727752s" Jul 10 04:57:21.879893 containerd[1540]: time="2025-07-10T04:57:21.879833640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 04:57:21.884848 containerd[1540]: time="2025-07-10T04:57:21.884818642Z" level=info msg="CreateContainer within sandbox \"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 04:57:21.890170 containerd[1540]: time="2025-07-10T04:57:21.890127615Z" level=info msg="Container c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7: CDI devices from CRI 
Config.CDIDevices: []" Jul 10 04:57:21.896359 containerd[1540]: time="2025-07-10T04:57:21.896317656Z" level=info msg="CreateContainer within sandbox \"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7\"" Jul 10 04:57:21.896832 containerd[1540]: time="2025-07-10T04:57:21.896770671Z" level=info msg="StartContainer for \"c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7\"" Jul 10 04:57:21.897839 containerd[1540]: time="2025-07-10T04:57:21.897812545Z" level=info msg="connecting to shim c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7" address="unix:///run/containerd/s/c65c077795eac6a5e25416b8372e6ae00d8eb5f6664993f832a3c9d8f5056b4a" protocol=ttrpc version=3 Jul 10 04:57:21.921126 systemd[1]: Started cri-containerd-c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7.scope - libcontainer container c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7. Jul 10 04:57:21.952442 containerd[1540]: time="2025-07-10T04:57:21.952406280Z" level=info msg="StartContainer for \"c670f29a830dd0de6a47617e0f8b83567fc74a9e9bcd3dfffda1fb6b993968d7\" returns successfully" Jul 10 04:57:21.953956 containerd[1540]: time="2025-07-10T04:57:21.953744203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 04:57:22.193308 containerd[1540]: time="2025-07-10T04:57:22.193204055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\" id:\"9809e60c5231244a7f110e66f5af0e254f11881297d5e29a95c253fcb0f56677\" pid:4215 exit_status:1 exited_at:{seconds:1752123442 nanos:192924606}" Jul 10 04:57:22.823125 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL Jul 10 04:57:23.766748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322006177.mount: Deactivated successfully. 
Jul 10 04:57:23.789706 containerd[1540]: time="2025-07-10T04:57:23.789645587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:23.790515 containerd[1540]: time="2025-07-10T04:57:23.790481013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 04:57:23.791073 containerd[1540]: time="2025-07-10T04:57:23.791048190Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:23.793333 containerd[1540]: time="2025-07-10T04:57:23.793278419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:23.794006 containerd[1540]: time="2025-07-10T04:57:23.793865117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.840071793s" Jul 10 04:57:23.794006 containerd[1540]: time="2025-07-10T04:57:23.793896478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 04:57:23.798710 containerd[1540]: time="2025-07-10T04:57:23.798472259Z" level=info msg="CreateContainer within sandbox \"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 04:57:23.805767 containerd[1540]: time="2025-07-10T04:57:23.805196585Z" level=info msg="Container 5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:23.814171 containerd[1540]: time="2025-07-10T04:57:23.814129740Z" level=info msg="CreateContainer within sandbox \"e3a1c273d32202b79bcfdb8613b60768f4db7c6b345dff9ad4cff66f11aad910\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015\"" Jul 10 04:57:23.814671 containerd[1540]: time="2025-07-10T04:57:23.814641556Z" level=info msg="StartContainer for \"5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015\"" Jul 10 04:57:23.816003 containerd[1540]: time="2025-07-10T04:57:23.815951316Z" level=info msg="connecting to shim 5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015" address="unix:///run/containerd/s/c65c077795eac6a5e25416b8372e6ae00d8eb5f6664993f832a3c9d8f5056b4a" protocol=ttrpc version=3 Jul 10 04:57:23.836144 systemd[1]: Started cri-containerd-5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015.scope - libcontainer container 5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015. 
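Each PullImage result above pairs the compressed bytes read with the elapsed pull time, so a rough effective transfer rate falls straight out of the log. The sketch below (plain Go, values copied from the whisker and whisker-backend pulls above; no containerd APIs involved) does that division. Note that "bytes read" counts compressed layer data, so this is only an approximate rate, not the unpacked image size per second.

package main

import "fmt"

func main() {
	// "bytes read" and pull duration as reported by the two PullImage results above.
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"ghcr.io/flatcar/calico/whisker:v3.30.2", 4605614, 1.074727752},
		{"ghcr.io/flatcar/calico/whisker-backend:v3.30.2", 30814581, 1.840071793},
	}
	for _, p := range pulls {
		fmt.Printf("%-48s ~%.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
}

Run as-is this prints roughly 4.3 MB/s for whisker and 16.7 MB/s for whisker-backend.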
Jul 10 04:57:23.870944 containerd[1540]: time="2025-07-10T04:57:23.870901925Z" level=info msg="StartContainer for \"5e0f25088adf739ff6c734664f713bb6bfefd006b063b17a49b09d774cf46015\" returns successfully" Jul 10 04:57:24.140309 kubelet[2686]: I0710 04:57:24.140065 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fbc597646-qg5kx" podStartSLOduration=1.150214171 podStartE2EDuration="4.140049678s" podCreationTimestamp="2025-07-10 04:57:20 +0000 UTC" firstStartedPulling="2025-07-10 04:57:20.804798474 +0000 UTC m=+33.895275403" lastFinishedPulling="2025-07-10 04:57:23.794633981 +0000 UTC m=+36.885110910" observedRunningTime="2025-07-10 04:57:24.139201093 +0000 UTC m=+37.229678062" watchObservedRunningTime="2025-07-10 04:57:24.140049678 +0000 UTC m=+37.230526607" Jul 10 04:57:26.962857 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:60588.service - OpenSSH per-connection server daemon (10.0.0.1:60588). Jul 10 04:57:26.993297 containerd[1540]: time="2025-07-10T04:57:26.993251526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-cqdbk,Uid:4dd64e13-1516-462b-8e90-59008a6a95ee,Namespace:calico-apiserver,Attempt:0,}" Jul 10 04:57:26.993297 containerd[1540]: time="2025-07-10T04:57:26.993292287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9tzsm,Uid:e5d6aa03-7d0e-4cf3-a9e5-c418263a4555,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:27.028429 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 60588 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:27.031332 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:27.037002 systemd-logind[1509]: New session 8 of user core. Jul 10 04:57:27.043902 systemd[1]: Started session-8.scope - Session 8 of User core. 
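The pod_startup_latency_tracker entry above reports three figures for whisker-fbc597646-qg5kx: podStartE2EDuration (pod creation to observed running), the image-pull window (firstStartedPulling to lastFinishedPulling), and podStartSLOduration, which lines up with the E2E figure minus the pull window. A small Go sketch reproducing that arithmetic from the quoted timestamps (the " m=+..." suffixes are Go monotonic-clock readings and are stripped before parsing; this is an illustration, not kubelet code):

package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// parse drops the monotonic " m=+..." suffix, then parses the wall-clock part.
func parse(s string) time.Time {
	t, err := time.Parse(layout, strings.Split(s, " m=")[0])
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-07-10 04:57:20 +0000 UTC")                             // podCreationTimestamp
	running := parse("2025-07-10 04:57:24.139201093 +0000 UTC m=+37.229678062")   // observedRunningTime
	pullStart := parse("2025-07-10 04:57:20.804798474 +0000 UTC m=+33.895275403") // firstStartedPulling
	pullEnd := parse("2025-07-10 04:57:23.794633981 +0000 UTC m=+36.885110910")   // lastFinishedPulling

	pulling := pullEnd.Sub(pullStart)           // 2.989835507s spent pulling images
	reportedE2E := 4140049678 * time.Nanosecond // podStartE2EDuration as logged

	fmt.Println("running - created:", running.Sub(created)) // ~4.139s, within ~1ms of the reported E2E
	fmt.Println("pull window      :", pulling)
	fmt.Println("E2E - pull window:", reportedE2E-pulling) // 1.150214171s, i.e. podStartSLOduration
}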
Jul 10 04:57:27.138951 systemd-networkd[1443]: calic2f005e4832: Link UP Jul 10 04:57:27.139143 systemd-networkd[1443]: calic2f005e4832: Gained carrier Jul 10 04:57:27.160596 containerd[1540]: 2025-07-10 04:57:27.049 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0 calico-apiserver-676c4b66fd- calico-apiserver 4dd64e13-1516-462b-8e90-59008a6a95ee 835 0 2025-07-10 04:57:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676c4b66fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676c4b66fd-cqdbk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2f005e4832 [] [] }} ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-" Jul 10 04:57:27.160596 containerd[1540]: 2025-07-10 04:57:27.049 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.160596 containerd[1540]: 2025-07-10 04:57:27.088 [INFO][4324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" HandleID="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Workload="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.088 [INFO][4324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" HandleID="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Workload="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676c4b66fd-cqdbk", "timestamp":"2025-07-10 04:57:27.088609274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.088 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.088 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.088 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.100 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" host="localhost" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.106 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.114 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.116 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.118 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:27.160805 containerd[1540]: 2025-07-10 04:57:27.118 [INFO][4324] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" host="localhost" Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.120 [INFO][4324] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8 Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.123 [INFO][4324] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" host="localhost" Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.129 [INFO][4324] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" host="localhost" Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.130 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" host="localhost" Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.130 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:27.161055 containerd[1540]: 2025-07-10 04:57:27.130 [INFO][4324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" HandleID="k8s-pod-network.7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Workload="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.161177 containerd[1540]: 2025-07-10 04:57:27.136 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0", GenerateName:"calico-apiserver-676c4b66fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4dd64e13-1516-462b-8e90-59008a6a95ee", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676c4b66fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676c4b66fd-cqdbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2f005e4832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:27.161243 containerd[1540]: 2025-07-10 04:57:27.136 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.161243 containerd[1540]: 2025-07-10 04:57:27.136 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2f005e4832 ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.161243 containerd[1540]: 2025-07-10 04:57:27.139 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.161303 containerd[1540]: 2025-07-10 04:57:27.141 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0", GenerateName:"calico-apiserver-676c4b66fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4dd64e13-1516-462b-8e90-59008a6a95ee", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676c4b66fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8", Pod:"calico-apiserver-676c4b66fd-cqdbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2f005e4832", MAC:"5e:95:61:9b:44:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:27.161355 containerd[1540]: 2025-07-10 04:57:27.155 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-cqdbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--cqdbk-eth0" Jul 10 04:57:27.231102 containerd[1540]: time="2025-07-10T04:57:27.230691021Z" level=info msg="connecting to shim 7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8" address="unix:///run/containerd/s/53469bf29060ffa015244095421743f402c60aa99827f06a5f761b2553347e6f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:27.247485 systemd-networkd[1443]: cali3738f2d562f: Link UP Jul 10 04:57:27.247690 systemd-networkd[1443]: cali3738f2d562f: Gained carrier Jul 10 04:57:27.267162 systemd[1]: Started cri-containerd-7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8.scope - libcontainer container 7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8. 
Jul 10 04:57:27.271522 containerd[1540]: 2025-07-10 04:57:27.065 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0 goldmane-768f4c5c69- calico-system e5d6aa03-7d0e-4cf3-a9e5-c418263a4555 836 0 2025-07-10 04:57:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-9tzsm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3738f2d562f [] [] }} ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-" Jul 10 04:57:27.271522 containerd[1540]: 2025-07-10 04:57:27.065 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.271522 containerd[1540]: 2025-07-10 04:57:27.092 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" HandleID="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Workload="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.092 [INFO][4330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" HandleID="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Workload="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-9tzsm", "timestamp":"2025-07-10 04:57:27.092715707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.092 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.130 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.130 [INFO][4330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.200 [INFO][4330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" host="localhost" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.205 [INFO][4330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.213 [INFO][4330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.214 [INFO][4330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.217 [INFO][4330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:27.272131 containerd[1540]: 2025-07-10 04:57:27.217 [INFO][4330] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" host="localhost" Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.218 [INFO][4330] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197 Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.223 [INFO][4330] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" host="localhost" Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.231 [INFO][4330] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" host="localhost" Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.231 [INFO][4330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" host="localhost" Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.231 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:27.272858 containerd[1540]: 2025-07-10 04:57:27.231 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" HandleID="k8s-pod-network.a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Workload="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.273225 containerd[1540]: 2025-07-10 04:57:27.235 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-9tzsm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3738f2d562f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:27.273225 containerd[1540]: 2025-07-10 04:57:27.235 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.273308 containerd[1540]: 2025-07-10 04:57:27.235 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3738f2d562f ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.273308 containerd[1540]: 2025-07-10 04:57:27.248 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.273343 containerd[1540]: 2025-07-10 04:57:27.248 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e5d6aa03-7d0e-4cf3-a9e5-c418263a4555", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197", Pod:"goldmane-768f4c5c69-9tzsm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3738f2d562f", MAC:"1a:28:6e:ee:ed:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:27.273393 containerd[1540]: 2025-07-10 04:57:27.264 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" Namespace="calico-system" Pod="goldmane-768f4c5c69-9tzsm" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9tzsm-eth0" Jul 10 04:57:27.288951 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:27.302092 containerd[1540]: time="2025-07-10T04:57:27.302034583Z" level=info msg="connecting to shim a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197" address="unix:///run/containerd/s/9673090b4f3b469b685e9e64328b884ebee9eae52680735e88af5f6d67523f02" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:27.325688 containerd[1540]: time="2025-07-10T04:57:27.325642713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-cqdbk,Uid:4dd64e13-1516-462b-8e90-59008a6a95ee,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8\"" Jul 10 04:57:27.327490 containerd[1540]: time="2025-07-10T04:57:27.327465723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 04:57:27.351154 systemd[1]: Started cri-containerd-a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197.scope - libcontainer container a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197. 
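Each IPAM round above follows the same sequence: acquire the host-wide IPAM lock, confirm this node's affinity for the 192.168.88.128/26 block, claim the next free address, write the block back, then release the lock. Every pod /32 handed out so far is therefore carved from that single 64-address block. A quick standard-library check of that invariant (addresses copied from the assignments logged above; illustration only, not Calico's IPAM code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this node holds an affinity for, per the ipam/ipam.go entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	// Per-pod addresses assigned so far in this log.
	pods := map[string]string{
		"whisker-fbc597646-qg5kx":           "192.168.88.129",
		"calico-apiserver-676c4b66fd-cqdbk": "192.168.88.130",
		"goldmane-768f4c5c69-9tzsm":         "192.168.88.131",
	}
	for pod, ip := range pods {
		addr := netip.MustParseAddr(ip)
		fmt.Printf("%-35s %-16s in %s: %v\n", pod, addr, block, block.Contains(addr))
	}
}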
Jul 10 04:57:27.365014 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:27.402145 sshd[4319]: Connection closed by 10.0.0.1 port 60588 Jul 10 04:57:27.402549 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:27.404727 containerd[1540]: time="2025-07-10T04:57:27.404685966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9tzsm,Uid:e5d6aa03-7d0e-4cf3-a9e5-c418263a4555,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197\"" Jul 10 04:57:27.407058 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Jul 10 04:57:27.407944 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:60588.service: Deactivated successfully. Jul 10 04:57:27.410545 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 04:57:27.413230 systemd-logind[1509]: Removed session 8. Jul 10 04:57:27.992411 containerd[1540]: time="2025-07-10T04:57:27.992357167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-55hj2,Uid:70941dce-7e88-44af-942f-3cad8a49ca87,Namespace:calico-apiserver,Attempt:0,}" Jul 10 04:57:27.992674 containerd[1540]: time="2025-07-10T04:57:27.992371287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2vr6,Uid:13104a3f-c535-4efe-b2aa-5579666df893,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:28.103320 systemd-networkd[1443]: cali9bbaec9d68f: Link UP Jul 10 04:57:28.103508 systemd-networkd[1443]: cali9bbaec9d68f: Gained carrier Jul 10 04:57:28.115004 containerd[1540]: 2025-07-10 04:57:28.030 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s2vr6-eth0 csi-node-driver- calico-system 13104a3f-c535-4efe-b2aa-5579666df893 729 0 2025-07-10 04:57:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s2vr6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9bbaec9d68f [] [] }} ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-" Jul 10 04:57:28.115004 containerd[1540]: 2025-07-10 04:57:28.030 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.115004 containerd[1540]: 2025-07-10 04:57:28.064 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" HandleID="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Workload="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.064 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" 
HandleID="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Workload="localhost-k8s-csi--node--driver--s2vr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s2vr6", "timestamp":"2025-07-10 04:57:28.064389181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.064 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.064 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.064 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.075 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" host="localhost" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.081 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.085 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.087 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.089 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:28.115540 containerd[1540]: 2025-07-10 04:57:28.089 [INFO][4500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" host="localhost" Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.090 [INFO][4500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516 Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.093 [INFO][4500] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" host="localhost" Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" host="localhost" Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" host="localhost" Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:28.115817 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" HandleID="k8s-pod-network.477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Workload="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.115958 containerd[1540]: 2025-07-10 04:57:28.100 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2vr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13104a3f-c535-4efe-b2aa-5579666df893", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s2vr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9bbaec9d68f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:28.116019 containerd[1540]: 2025-07-10 04:57:28.100 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.116019 containerd[1540]: 2025-07-10 04:57:28.100 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbaec9d68f ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.116019 containerd[1540]: 2025-07-10 04:57:28.103 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.116082 containerd[1540]: 2025-07-10 04:57:28.104 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2vr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13104a3f-c535-4efe-b2aa-5579666df893", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516", Pod:"csi-node-driver-s2vr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9bbaec9d68f", MAC:"ae:a1:f5:75:3e:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:28.116128 containerd[1540]: 2025-07-10 04:57:28.112 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" Namespace="calico-system" Pod="csi-node-driver-s2vr6" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2vr6-eth0" Jul 10 04:57:28.132126 containerd[1540]: time="2025-07-10T04:57:28.132056351Z" level=info msg="connecting to shim 477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516" address="unix:///run/containerd/s/f3cb8e960e7655a9cea96d1121359cd226f3d736d710ddb5bb133f8020517ccb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:28.162230 systemd[1]: Started cri-containerd-477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516.scope - libcontainer container 477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516. 
Jul 10 04:57:28.173235 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:28.191879 containerd[1540]: time="2025-07-10T04:57:28.191649105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2vr6,Uid:13104a3f-c535-4efe-b2aa-5579666df893,Namespace:calico-system,Attempt:0,} returns sandbox id \"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516\"" Jul 10 04:57:28.214375 systemd-networkd[1443]: calic1706ec8b5a: Link UP Jul 10 04:57:28.215227 systemd-networkd[1443]: calic1706ec8b5a: Gained carrier Jul 10 04:57:28.229434 containerd[1540]: 2025-07-10 04:57:28.037 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0 calico-apiserver-676c4b66fd- calico-apiserver 70941dce-7e88-44af-942f-3cad8a49ca87 839 0 2025-07-10 04:57:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676c4b66fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676c4b66fd-55hj2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1706ec8b5a [] [] }} ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-" Jul 10 04:57:28.229434 containerd[1540]: 2025-07-10 04:57:28.038 [INFO][4470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.229434 containerd[1540]: 2025-07-10 04:57:28.067 [INFO][4506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" HandleID="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Workload="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.067 [INFO][4506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" HandleID="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Workload="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676c4b66fd-55hj2", "timestamp":"2025-07-10 04:57:28.067238417 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.067 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.098 [INFO][4506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.177 [INFO][4506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" host="localhost" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.183 [INFO][4506] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.190 [INFO][4506] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.192 [INFO][4506] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.195 [INFO][4506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:28.229611 containerd[1540]: 2025-07-10 04:57:28.195 [INFO][4506] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" host="localhost" Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.196 [INFO][4506] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1 Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.202 [INFO][4506] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" host="localhost" Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.207 [INFO][4506] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" host="localhost" Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.207 [INFO][4506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" host="localhost" Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.207 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:28.229814 containerd[1540]: 2025-07-10 04:57:28.207 [INFO][4506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" HandleID="k8s-pod-network.ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Workload="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.229916 containerd[1540]: 2025-07-10 04:57:28.211 [INFO][4470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0", GenerateName:"calico-apiserver-676c4b66fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"70941dce-7e88-44af-942f-3cad8a49ca87", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676c4b66fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676c4b66fd-55hj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1706ec8b5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:28.230314 containerd[1540]: 2025-07-10 04:57:28.211 [INFO][4470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.230314 containerd[1540]: 2025-07-10 04:57:28.211 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1706ec8b5a ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.230314 containerd[1540]: 2025-07-10 04:57:28.215 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.230516 containerd[1540]: 2025-07-10 04:57:28.215 [INFO][4470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0", GenerateName:"calico-apiserver-676c4b66fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"70941dce-7e88-44af-942f-3cad8a49ca87", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676c4b66fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1", Pod:"calico-apiserver-676c4b66fd-55hj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1706ec8b5a", MAC:"76:b0:3f:7f:6d:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:28.230603 containerd[1540]: 2025-07-10 04:57:28.226 [INFO][4470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" Namespace="calico-apiserver" Pod="calico-apiserver-676c4b66fd-55hj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--676c4b66fd--55hj2-eth0" Jul 10 04:57:28.251706 containerd[1540]: time="2025-07-10T04:57:28.250855729Z" level=info msg="connecting to shim ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1" address="unix:///run/containerd/s/6660f37f94b09fbf06cbf7fe484526b2a6564e297ce98faf674fe87455af15cd" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:28.263434 systemd-networkd[1443]: calic2f005e4832: Gained IPv6LL Jul 10 04:57:28.292168 systemd[1]: Started cri-containerd-ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1.scope - libcontainer container ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1. 
Jul 10 04:57:28.309220 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:28.331378 containerd[1540]: time="2025-07-10T04:57:28.331209559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676c4b66fd-55hj2,Uid:70941dce-7e88-44af-942f-3cad8a49ca87,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1\"" Jul 10 04:57:28.840176 systemd-networkd[1443]: cali3738f2d562f: Gained IPv6LL Jul 10 04:57:28.979243 containerd[1540]: time="2025-07-10T04:57:28.979199415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:28.979627 containerd[1540]: time="2025-07-10T04:57:28.979591345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 04:57:28.980397 containerd[1540]: time="2025-07-10T04:57:28.980357326Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:28.982372 containerd[1540]: time="2025-07-10T04:57:28.982333939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:28.983033 containerd[1540]: time="2025-07-10T04:57:28.983001037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.65537039s" Jul 10 04:57:28.983199 containerd[1540]: time="2025-07-10T04:57:28.983112120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 04:57:28.984481 containerd[1540]: time="2025-07-10T04:57:28.984417474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 04:57:28.987649 containerd[1540]: time="2025-07-10T04:57:28.987612880Z" level=info msg="CreateContainer within sandbox \"7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 04:57:28.993425 containerd[1540]: time="2025-07-10T04:57:28.993395195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b95776c-5sv6r,Uid:560ec9b9-9f2a-4afb-a6c7-78920c635be3,Namespace:calico-system,Attempt:0,}" Jul 10 04:57:29.003471 containerd[1540]: time="2025-07-10T04:57:29.002840046Z" level=info msg="Container 12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:29.015047 containerd[1540]: time="2025-07-10T04:57:29.014999683Z" level=info msg="CreateContainer within sandbox \"7c3cb7284defc2109c270350bfa5a49e0ae32ba612bd4c7fa1bc52b3fd7cedd8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4\"" Jul 10 04:57:29.017748 containerd[1540]: time="2025-07-10T04:57:29.017620551Z" level=info 
msg="StartContainer for \"12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4\"" Jul 10 04:57:29.019774 containerd[1540]: time="2025-07-10T04:57:29.019744526Z" level=info msg="connecting to shim 12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4" address="unix:///run/containerd/s/53469bf29060ffa015244095421743f402c60aa99827f06a5f761b2553347e6f" protocol=ttrpc version=3 Jul 10 04:57:29.046128 systemd[1]: Started cri-containerd-12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4.scope - libcontainer container 12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4. Jul 10 04:57:29.104407 containerd[1540]: time="2025-07-10T04:57:29.104261006Z" level=info msg="StartContainer for \"12490fdf6f3bd906232f100b937f15b91c5e9a49a04e07a3b04ac6e2055789e4\" returns successfully" Jul 10 04:57:29.121185 systemd-networkd[1443]: cali71aad9dbe4a: Link UP Jul 10 04:57:29.121358 systemd-networkd[1443]: cali71aad9dbe4a: Gained carrier Jul 10 04:57:29.134390 containerd[1540]: 2025-07-10 04:57:29.039 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0 calico-kube-controllers-769b95776c- calico-system 560ec9b9-9f2a-4afb-a6c7-78920c635be3 837 0 2025-07-10 04:57:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:769b95776c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-769b95776c-5sv6r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali71aad9dbe4a [] [] }} ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-" Jul 10 04:57:29.134390 containerd[1540]: 2025-07-10 04:57:29.040 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.134390 containerd[1540]: 2025-07-10 04:57:29.068 [INFO][4662] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" HandleID="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Workload="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.068 [INFO][4662] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" HandleID="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Workload="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-769b95776c-5sv6r", "timestamp":"2025-07-10 04:57:29.068399553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.068 [INFO][4662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.068 [INFO][4662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.068 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.082 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" host="localhost" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.089 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.096 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.098 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.100 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:29.134921 containerd[1540]: 2025-07-10 04:57:29.100 [INFO][4662] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" host="localhost" Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.102 [INFO][4662] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77 Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.107 [INFO][4662] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" host="localhost" Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.112 [INFO][4662] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" host="localhost" Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.112 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" host="localhost" Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.112 [INFO][4662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:29.135457 containerd[1540]: 2025-07-10 04:57:29.112 [INFO][4662] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" HandleID="k8s-pod-network.959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Workload="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.135711 containerd[1540]: 2025-07-10 04:57:29.116 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0", GenerateName:"calico-kube-controllers-769b95776c-", Namespace:"calico-system", SelfLink:"", UID:"560ec9b9-9f2a-4afb-a6c7-78920c635be3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769b95776c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-769b95776c-5sv6r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71aad9dbe4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:29.135847 containerd[1540]: 2025-07-10 04:57:29.116 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.135847 containerd[1540]: 2025-07-10 04:57:29.116 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71aad9dbe4a ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.135847 containerd[1540]: 2025-07-10 04:57:29.119 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.136115 containerd[1540]: 2025-07-10 04:57:29.119 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0", GenerateName:"calico-kube-controllers-769b95776c-", Namespace:"calico-system", SelfLink:"", UID:"560ec9b9-9f2a-4afb-a6c7-78920c635be3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769b95776c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77", Pod:"calico-kube-controllers-769b95776c-5sv6r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71aad9dbe4a", MAC:"fa:c5:61:aa:15:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:29.136207 containerd[1540]: 2025-07-10 04:57:29.130 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" Namespace="calico-system" Pod="calico-kube-controllers-769b95776c-5sv6r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b95776c--5sv6r-eth0" Jul 10 04:57:29.170359 containerd[1540]: time="2025-07-10T04:57:29.170309885Z" level=info msg="connecting to shim 959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77" address="unix:///run/containerd/s/13030b843aca78a4e02cebd8a650ddf0aad156922825a54548ac45459c7d2cc7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:29.232151 systemd[1]: Started cri-containerd-959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77.scope - libcontainer container 959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77. 
Jul 10 04:57:29.243264 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:29.264498 containerd[1540]: time="2025-07-10T04:57:29.264436815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b95776c-5sv6r,Uid:560ec9b9-9f2a-4afb-a6c7-78920c635be3,Namespace:calico-system,Attempt:0,} returns sandbox id \"959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77\"" Jul 10 04:57:29.480542 systemd-networkd[1443]: calic1706ec8b5a: Gained IPv6LL Jul 10 04:57:29.543524 systemd-networkd[1443]: cali9bbaec9d68f: Gained IPv6LL Jul 10 04:57:29.991892 kubelet[2686]: E0710 04:57:29.991838 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:29.992710 containerd[1540]: time="2025-07-10T04:57:29.992563169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9r28g,Uid:e92b6327-56f3-4b3e-98a8-b60c5582a40f,Namespace:kube-system,Attempt:0,}" Jul 10 04:57:30.137574 systemd-networkd[1443]: cali57959148d67: Link UP Jul 10 04:57:30.138620 systemd-networkd[1443]: cali57959148d67: Gained carrier Jul 10 04:57:30.156742 kubelet[2686]: I0710 04:57:30.156633 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-676c4b66fd-cqdbk" podStartSLOduration=25.500010748 podStartE2EDuration="27.15661365s" podCreationTimestamp="2025-07-10 04:57:03 +0000 UTC" firstStartedPulling="2025-07-10 04:57:27.327211956 +0000 UTC m=+40.417688885" lastFinishedPulling="2025-07-10 04:57:28.983814818 +0000 UTC m=+42.074291787" observedRunningTime="2025-07-10 04:57:29.163045456 +0000 UTC m=+42.253522425" watchObservedRunningTime="2025-07-10 04:57:30.15661365 +0000 UTC m=+43.247090539" Jul 10 04:57:30.159288 containerd[1540]: 2025-07-10 04:57:30.043 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9r28g-eth0 coredns-674b8bbfcf- kube-system e92b6327-56f3-4b3e-98a8-b60c5582a40f 829 0 2025-07-10 04:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9r28g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57959148d67 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-" Jul 10 04:57:30.159288 containerd[1540]: 2025-07-10 04:57:30.043 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.159288 containerd[1540]: 2025-07-10 04:57:30.070 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" HandleID="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Workload="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.159627 containerd[1540]: 
2025-07-10 04:57:30.070 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" HandleID="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Workload="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400057e400), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9r28g", "timestamp":"2025-07-10 04:57:30.070467948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.070 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.070 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.070 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.082 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" host="localhost" Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.088 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.096 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.098 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.102 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:30.159627 containerd[1540]: 2025-07-10 04:57:30.102 [INFO][4768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" host="localhost" Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.104 [INFO][4768] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5 Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.109 [INFO][4768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" host="localhost" Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.130 [INFO][4768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" host="localhost" Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.130 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" host="localhost" Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.130 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:30.159843 containerd[1540]: 2025-07-10 04:57:30.130 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" HandleID="k8s-pod-network.5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Workload="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.161104 containerd[1540]: 2025-07-10 04:57:30.133 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9r28g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e92b6327-56f3-4b3e-98a8-b60c5582a40f", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9r28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57959148d67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:30.161198 containerd[1540]: 2025-07-10 04:57:30.133 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.161198 containerd[1540]: 2025-07-10 04:57:30.133 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57959148d67 ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.161198 containerd[1540]: 2025-07-10 04:57:30.139 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.161259 
containerd[1540]: 2025-07-10 04:57:30.139 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9r28g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e92b6327-56f3-4b3e-98a8-b60c5582a40f", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5", Pod:"coredns-674b8bbfcf-9r28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57959148d67", MAC:"ba:e1:fc:42:2e:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:30.161259 containerd[1540]: 2025-07-10 04:57:30.155 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-9r28g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9r28g-eth0" Jul 10 04:57:30.163333 kubelet[2686]: I0710 04:57:30.163299 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:30.191188 containerd[1540]: time="2025-07-10T04:57:30.191107844Z" level=info msg="connecting to shim 5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5" address="unix:///run/containerd/s/a44ffa9a616f7d0643c014dc3e3e51cd34f1cfd52ec2f048087952ddbd544a15" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:30.218145 systemd[1]: Started cri-containerd-5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5.scope - libcontainer container 5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5. 
Jul 10 04:57:30.232213 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:30.256788 containerd[1540]: time="2025-07-10T04:57:30.256690025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9r28g,Uid:e92b6327-56f3-4b3e-98a8-b60c5582a40f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5\"" Jul 10 04:57:30.257961 kubelet[2686]: E0710 04:57:30.257920 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:30.262186 containerd[1540]: time="2025-07-10T04:57:30.262116522Z" level=info msg="CreateContainer within sandbox \"5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 04:57:30.274309 containerd[1540]: time="2025-07-10T04:57:30.273641294Z" level=info msg="Container 25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:30.282226 containerd[1540]: time="2025-07-10T04:57:30.282176870Z" level=info msg="CreateContainer within sandbox \"5416b9d81e9da7252e886f301ce61e508df1b7b73caf1deb6966bff3b21214d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5\"" Jul 10 04:57:30.284022 containerd[1540]: time="2025-07-10T04:57:30.282947690Z" level=info msg="StartContainer for \"25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5\"" Jul 10 04:57:30.284594 containerd[1540]: time="2025-07-10T04:57:30.284569651Z" level=info msg="connecting to shim 25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5" address="unix:///run/containerd/s/a44ffa9a616f7d0643c014dc3e3e51cd34f1cfd52ec2f048087952ddbd544a15" protocol=ttrpc version=3 Jul 10 04:57:30.314214 systemd[1]: Started cri-containerd-25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5.scope - libcontainer container 25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5. 
Jul 10 04:57:30.384451 containerd[1540]: time="2025-07-10T04:57:30.384393780Z" level=info msg="StartContainer for \"25a540dc8c87d7b6ef3ef9a338ba4444ee967d10a2b38c71858397e75ee689a5\" returns successfully" Jul 10 04:57:30.952606 systemd-networkd[1443]: cali71aad9dbe4a: Gained IPv6LL Jul 10 04:57:30.992017 kubelet[2686]: E0710 04:57:30.991951 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:30.992371 containerd[1540]: time="2025-07-10T04:57:30.992323778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnzxc,Uid:c0885256-e6c0-4f6d-ad17-234f0a42947d,Namespace:kube-system,Attempt:0,}" Jul 10 04:57:31.169299 kubelet[2686]: E0710 04:57:31.169187 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:31.201936 systemd-networkd[1443]: cali1b548c6826b: Link UP Jul 10 04:57:31.203656 kubelet[2686]: I0710 04:57:31.203449 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9r28g" podStartSLOduration=37.203433308 podStartE2EDuration="37.203433308s" podCreationTimestamp="2025-07-10 04:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:57:31.186311926 +0000 UTC m=+44.276788855" watchObservedRunningTime="2025-07-10 04:57:31.203433308 +0000 UTC m=+44.293910237" Jul 10 04:57:31.204260 systemd-networkd[1443]: cali1b548c6826b: Gained carrier Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.104 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0 coredns-674b8bbfcf- kube-system c0885256-e6c0-4f6d-ad17-234f0a42947d 833 0 2025-07-10 04:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gnzxc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b548c6826b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.104 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.145 [INFO][4889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" HandleID="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Workload="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.145 [INFO][4889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" 
HandleID="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Workload="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gnzxc", "timestamp":"2025-07-10 04:57:31.145776967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.146 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.146 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.146 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.157 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.165 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.170 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.172 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.175 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.175 [INFO][4889] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.178 [INFO][4889] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.183 [INFO][4889] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.192 [INFO][4889] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.192 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" host="localhost" Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.192 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 04:57:31.227765 containerd[1540]: 2025-07-10 04:57:31.192 [INFO][4889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" HandleID="k8s-pod-network.c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Workload="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.228655 containerd[1540]: 2025-07-10 04:57:31.197 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0885256-e6c0-4f6d-ad17-234f0a42947d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gnzxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b548c6826b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:31.228655 containerd[1540]: 2025-07-10 04:57:31.197 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.228655 containerd[1540]: 2025-07-10 04:57:31.197 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b548c6826b ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.228655 containerd[1540]: 2025-07-10 04:57:31.206 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.228655 
containerd[1540]: 2025-07-10 04:57:31.206 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0885256-e6c0-4f6d-ad17-234f0a42947d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 4, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e", Pod:"coredns-674b8bbfcf-gnzxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b548c6826b", MAC:"6a:50:55:d5:bd:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 04:57:31.228655 containerd[1540]: 2025-07-10 04:57:31.222 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gnzxc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gnzxc-eth0" Jul 10 04:57:31.297553 containerd[1540]: time="2025-07-10T04:57:31.297506747Z" level=info msg="connecting to shim c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e" address="unix:///run/containerd/s/95532867d34b46c8ca1b03122cb16040e85722072c50c0f05557723beb98fc51" namespace=k8s.io protocol=ttrpc version=3 Jul 10 04:57:31.337324 systemd[1]: Started cri-containerd-c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e.scope - libcontainer container c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e. 
Jul 10 04:57:31.357195 containerd[1540]: time="2025-07-10T04:57:31.357146737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:31.360113 containerd[1540]: time="2025-07-10T04:57:31.360057849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 04:57:31.363100 containerd[1540]: time="2025-07-10T04:57:31.363070443Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:31.366306 containerd[1540]: time="2025-07-10T04:57:31.366247802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:31.366316 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 04:57:31.366923 containerd[1540]: time="2025-07-10T04:57:31.366884217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.382435622s" Jul 10 04:57:31.366923 containerd[1540]: time="2025-07-10T04:57:31.366921298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 04:57:31.368257 containerd[1540]: time="2025-07-10T04:57:31.368227331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 04:57:31.371829 containerd[1540]: time="2025-07-10T04:57:31.371715017Z" level=info msg="CreateContainer within sandbox \"a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 04:57:31.382173 containerd[1540]: time="2025-07-10T04:57:31.382122113Z" level=info msg="Container 521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:31.389057 containerd[1540]: time="2025-07-10T04:57:31.389017763Z" level=info msg="CreateContainer within sandbox \"a8e886335e8295b37494895b1629a6dab16b4216cc670fb1625f44083e2b6197\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\"" Jul 10 04:57:31.390511 containerd[1540]: time="2025-07-10T04:57:31.390481239Z" level=info msg="StartContainer for \"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\"" Jul 10 04:57:31.391964 containerd[1540]: time="2025-07-10T04:57:31.391868313Z" level=info msg="connecting to shim 521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff" address="unix:///run/containerd/s/9673090b4f3b469b685e9e64328b884ebee9eae52680735e88af5f6d67523f02" protocol=ttrpc version=3 Jul 10 04:57:31.392138 containerd[1540]: time="2025-07-10T04:57:31.392104999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnzxc,Uid:c0885256-e6c0-4f6d-ad17-234f0a42947d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e\"" Jul 10 04:57:31.392940 kubelet[2686]: E0710 04:57:31.392728 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:31.397616 containerd[1540]: time="2025-07-10T04:57:31.397563534Z" level=info msg="CreateContainer within sandbox \"c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 04:57:31.399461 systemd-networkd[1443]: cali57959148d67: Gained IPv6LL Jul 10 04:57:31.409444 containerd[1540]: time="2025-07-10T04:57:31.409394065Z" level=info msg="Container 3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:31.419259 systemd[1]: Started cri-containerd-521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff.scope - libcontainer container 521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff. Jul 10 04:57:31.431249 containerd[1540]: time="2025-07-10T04:57:31.431197483Z" level=info msg="CreateContainer within sandbox \"c3f1d757bd92c0eb9b48d533edd02f331b87be83f2c2dd2f9ab0835df27d3b0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84\"" Jul 10 04:57:31.432588 containerd[1540]: time="2025-07-10T04:57:31.432139706Z" level=info msg="StartContainer for \"3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84\"" Jul 10 04:57:31.433164 containerd[1540]: time="2025-07-10T04:57:31.433132811Z" level=info msg="connecting to shim 3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84" address="unix:///run/containerd/s/95532867d34b46c8ca1b03122cb16040e85722072c50c0f05557723beb98fc51" protocol=ttrpc version=3 Jul 10 04:57:31.465174 systemd[1]: Started cri-containerd-3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84.scope - libcontainer container 3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84. 
Jul 10 04:57:31.474740 containerd[1540]: time="2025-07-10T04:57:31.474702795Z" level=info msg="StartContainer for \"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\" returns successfully" Jul 10 04:57:31.518400 containerd[1540]: time="2025-07-10T04:57:31.518315590Z" level=info msg="StartContainer for \"3788679fd2e74f892adffae7fa1b71f03a8674f3a3303cf05436076b6a65ec84\" returns successfully" Jul 10 04:57:32.173497 kubelet[2686]: E0710 04:57:32.173354 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:32.175663 kubelet[2686]: E0710 04:57:32.175584 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:32.182561 kubelet[2686]: I0710 04:57:32.182117 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9tzsm" podStartSLOduration=22.219989228 podStartE2EDuration="26.182103834s" podCreationTimestamp="2025-07-10 04:57:06 +0000 UTC" firstStartedPulling="2025-07-10 04:57:27.405992842 +0000 UTC m=+40.496469731" lastFinishedPulling="2025-07-10 04:57:31.368107408 +0000 UTC m=+44.458584337" observedRunningTime="2025-07-10 04:57:32.180441435 +0000 UTC m=+45.270918324" watchObservedRunningTime="2025-07-10 04:57:32.182103834 +0000 UTC m=+45.272580763" Jul 10 04:57:32.191547 kubelet[2686]: I0710 04:57:32.191491 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gnzxc" podStartSLOduration=38.191475339 podStartE2EDuration="38.191475339s" podCreationTimestamp="2025-07-10 04:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 04:57:32.191003408 +0000 UTC m=+45.281480337" watchObservedRunningTime="2025-07-10 04:57:32.191475339 +0000 UTC m=+45.281952268" Jul 10 04:57:32.295240 systemd-networkd[1443]: cali1b548c6826b: Gained IPv6LL Jul 10 04:57:32.346899 containerd[1540]: time="2025-07-10T04:57:32.346847867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:32.349000 containerd[1540]: time="2025-07-10T04:57:32.348626750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 04:57:32.349769 containerd[1540]: time="2025-07-10T04:57:32.349711216Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:32.352685 containerd[1540]: time="2025-07-10T04:57:32.352642086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 984.379675ms" Jul 10 04:57:32.352685 containerd[1540]: time="2025-07-10T04:57:32.352679727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 04:57:32.353349 containerd[1540]: 
time="2025-07-10T04:57:32.353316022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:32.354002 containerd[1540]: time="2025-07-10T04:57:32.353762393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 04:57:32.357249 containerd[1540]: time="2025-07-10T04:57:32.357215596Z" level=info msg="CreateContainer within sandbox \"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 04:57:32.370840 containerd[1540]: time="2025-07-10T04:57:32.370574077Z" level=info msg="Container d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:32.379133 containerd[1540]: time="2025-07-10T04:57:32.379049680Z" level=info msg="CreateContainer within sandbox \"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c\"" Jul 10 04:57:32.379604 containerd[1540]: time="2025-07-10T04:57:32.379569172Z" level=info msg="StartContainer for \"d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c\"" Jul 10 04:57:32.381219 containerd[1540]: time="2025-07-10T04:57:32.381157530Z" level=info msg="connecting to shim d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c" address="unix:///run/containerd/s/f3cb8e960e7655a9cea96d1121359cd226f3d736d710ddb5bb133f8020517ccb" protocol=ttrpc version=3 Jul 10 04:57:32.402206 systemd[1]: Started cri-containerd-d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c.scope - libcontainer container d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c. Jul 10 04:57:32.409389 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:60592.service - OpenSSH per-connection server daemon (10.0.0.1:60592). Jul 10 04:57:32.460211 containerd[1540]: time="2025-07-10T04:57:32.460148026Z" level=info msg="StartContainer for \"d4bb4240ee1e4fd7302988bcea9170a2437fa23f1fd80ff87b92d26639345e0c\" returns successfully" Jul 10 04:57:32.488064 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 60592 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:32.489547 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:32.494192 systemd-logind[1509]: New session 9 of user core. Jul 10 04:57:32.510159 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 10 04:57:32.602926 containerd[1540]: time="2025-07-10T04:57:32.602868170Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:32.603493 containerd[1540]: time="2025-07-10T04:57:32.603377502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 04:57:32.605105 containerd[1540]: time="2025-07-10T04:57:32.605074023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 251.280949ms" Jul 10 04:57:32.605222 containerd[1540]: time="2025-07-10T04:57:32.605184186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 04:57:32.606396 containerd[1540]: time="2025-07-10T04:57:32.606195130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 04:57:32.609307 containerd[1540]: time="2025-07-10T04:57:32.609283404Z" level=info msg="CreateContainer within sandbox \"ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 04:57:32.615058 containerd[1540]: time="2025-07-10T04:57:32.614383766Z" level=info msg="Container 6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:32.622814 containerd[1540]: time="2025-07-10T04:57:32.622761207Z" level=info msg="CreateContainer within sandbox \"ce3d128957c91bca8756a52968b84c547bacc4893c2154ed691e97cb75e88fa1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa\"" Jul 10 04:57:32.624482 containerd[1540]: time="2025-07-10T04:57:32.623927395Z" level=info msg="StartContainer for \"6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa\"" Jul 10 04:57:32.625293 containerd[1540]: time="2025-07-10T04:57:32.625268548Z" level=info msg="connecting to shim 6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa" address="unix:///run/containerd/s/6660f37f94b09fbf06cbf7fe484526b2a6564e297ce98faf674fe87455af15cd" protocol=ttrpc version=3 Jul 10 04:57:32.650166 systemd[1]: Started cri-containerd-6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa.scope - libcontainer container 6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa. Jul 10 04:57:32.712568 containerd[1540]: time="2025-07-10T04:57:32.712427759Z" level=info msg="StartContainer for \"6d294356bb454fe1a9ea1291f3f90e737a29df95f697e4f49c99e41bfa095ffa\" returns successfully" Jul 10 04:57:32.741200 sshd[5075]: Connection closed by 10.0.0.1 port 60592 Jul 10 04:57:32.741714 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:32.746150 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:60592.service: Deactivated successfully. Jul 10 04:57:32.748271 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 04:57:32.751141 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Jul 10 04:57:32.752646 systemd-logind[1509]: Removed session 9. 
Jul 10 04:57:33.183003 kubelet[2686]: I0710 04:57:33.182571 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:33.183626 kubelet[2686]: E0710 04:57:33.183605 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:33.190458 kubelet[2686]: I0710 04:57:33.190387 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-676c4b66fd-55hj2" podStartSLOduration=25.916920336 podStartE2EDuration="30.190372306s" podCreationTimestamp="2025-07-10 04:57:03 +0000 UTC" firstStartedPulling="2025-07-10 04:57:28.332619237 +0000 UTC m=+41.423096166" lastFinishedPulling="2025-07-10 04:57:32.606071207 +0000 UTC m=+45.696548136" observedRunningTime="2025-07-10 04:57:33.188729947 +0000 UTC m=+46.279206876" watchObservedRunningTime="2025-07-10 04:57:33.190372306 +0000 UTC m=+46.280849235" Jul 10 04:57:33.737670 kubelet[2686]: I0710 04:57:33.736621 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:34.187300 kubelet[2686]: E0710 04:57:34.187265 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 04:57:34.621260 containerd[1540]: time="2025-07-10T04:57:34.621136221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:34.622655 containerd[1540]: time="2025-07-10T04:57:34.622515373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 04:57:34.623513 containerd[1540]: time="2025-07-10T04:57:34.623476234Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:34.627419 containerd[1540]: time="2025-07-10T04:57:34.627255760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:34.629033 containerd[1540]: time="2025-07-10T04:57:34.628957639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.022716348s" Jul 10 04:57:34.629033 containerd[1540]: time="2025-07-10T04:57:34.629016520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 04:57:34.631012 containerd[1540]: time="2025-07-10T04:57:34.630281709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 04:57:34.642332 containerd[1540]: time="2025-07-10T04:57:34.642291542Z" level=info msg="CreateContainer within sandbox \"959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 04:57:34.659815 
containerd[1540]: time="2025-07-10T04:57:34.659763740Z" level=info msg="Container 34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:34.763943 containerd[1540]: time="2025-07-10T04:57:34.763655782Z" level=info msg="CreateContainer within sandbox \"959d33b3dc5ca83a36afc831498832780d99d753f5f647fa2114267352e0ab77\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c\"" Jul 10 04:57:34.764366 containerd[1540]: time="2025-07-10T04:57:34.764328197Z" level=info msg="StartContainer for \"34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c\"" Jul 10 04:57:34.767855 containerd[1540]: time="2025-07-10T04:57:34.767108821Z" level=info msg="connecting to shim 34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c" address="unix:///run/containerd/s/13030b843aca78a4e02cebd8a650ddf0aad156922825a54548ac45459c7d2cc7" protocol=ttrpc version=3 Jul 10 04:57:34.803552 systemd[1]: Started cri-containerd-34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c.scope - libcontainer container 34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c. Jul 10 04:57:34.941795 containerd[1540]: time="2025-07-10T04:57:34.941751192Z" level=info msg="StartContainer for \"34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c\" returns successfully" Jul 10 04:57:34.983673 kubelet[2686]: I0710 04:57:34.983592 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:35.110670 containerd[1540]: time="2025-07-10T04:57:35.110625086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\" id:\"c73f63fda18152024307c4a3fa5a4a83d35372e066f88b50d8fce397b230fc68\" pid:5190 exit_status:1 exited_at:{seconds:1752123455 nanos:110240398}" Jul 10 04:57:35.177622 containerd[1540]: time="2025-07-10T04:57:35.177568289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\" id:\"dc4e8bbccde984df97a013375f1c16237efdfc3b3a980674f63aabad78854535\" pid:5216 exit_status:1 exited_at:{seconds:1752123455 nanos:177091838}" Jul 10 04:57:35.200845 kubelet[2686]: I0710 04:57:35.200724 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-769b95776c-5sv6r" podStartSLOduration=22.836449707 podStartE2EDuration="28.200709441s" podCreationTimestamp="2025-07-10 04:57:07 +0000 UTC" firstStartedPulling="2025-07-10 04:57:29.265590285 +0000 UTC m=+42.356067214" lastFinishedPulling="2025-07-10 04:57:34.629850059 +0000 UTC m=+47.720326948" observedRunningTime="2025-07-10 04:57:35.199876103 +0000 UTC m=+48.290353072" watchObservedRunningTime="2025-07-10 04:57:35.200709441 +0000 UTC m=+48.291186370" Jul 10 04:57:35.857731 containerd[1540]: time="2025-07-10T04:57:35.857676268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:35.858285 containerd[1540]: time="2025-07-10T04:57:35.858063676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 04:57:35.858850 containerd[1540]: time="2025-07-10T04:57:35.858807133Z" level=info msg="ImageCreate event 
name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:35.860605 containerd[1540]: time="2025-07-10T04:57:35.860529931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 04:57:35.861338 containerd[1540]: time="2025-07-10T04:57:35.861210386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.230888876s" Jul 10 04:57:35.861338 containerd[1540]: time="2025-07-10T04:57:35.861244627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 04:57:35.865517 containerd[1540]: time="2025-07-10T04:57:35.865481681Z" level=info msg="CreateContainer within sandbox \"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 04:57:35.894001 containerd[1540]: time="2025-07-10T04:57:35.893662625Z" level=info msg="Container 47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859: CDI devices from CRI Config.CDIDevices: []" Jul 10 04:57:35.900840 containerd[1540]: time="2025-07-10T04:57:35.900792302Z" level=info msg="CreateContainer within sandbox \"477e0356d7d4adb3f04f74e2515c87b41fc6c01cf3644c3b09bc02e09be29516\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859\"" Jul 10 04:57:35.902082 containerd[1540]: time="2025-07-10T04:57:35.902038890Z" level=info msg="StartContainer for \"47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859\"" Jul 10 04:57:35.904234 containerd[1540]: time="2025-07-10T04:57:35.904187858Z" level=info msg="connecting to shim 47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859" address="unix:///run/containerd/s/f3cb8e960e7655a9cea96d1121359cd226f3d736d710ddb5bb133f8020517ccb" protocol=ttrpc version=3 Jul 10 04:57:35.927187 systemd[1]: Started cri-containerd-47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859.scope - libcontainer container 47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859. 
Jul 10 04:57:35.961799 containerd[1540]: time="2025-07-10T04:57:35.961761572Z" level=info msg="StartContainer for \"47df87d66ce1c27eb0527c946e788a7b1616800e9a6e8b8543c9eb0683f87859\" returns successfully" Jul 10 04:57:36.054698 kubelet[2686]: I0710 04:57:36.054627 2686 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 04:57:36.059280 kubelet[2686]: I0710 04:57:36.059196 2686 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 04:57:36.199642 kubelet[2686]: I0710 04:57:36.199594 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:36.212788 kubelet[2686]: I0710 04:57:36.212713 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s2vr6" podStartSLOduration=21.546350656 podStartE2EDuration="29.212697446s" podCreationTimestamp="2025-07-10 04:57:07 +0000 UTC" firstStartedPulling="2025-07-10 04:57:28.195848258 +0000 UTC m=+41.286325147" lastFinishedPulling="2025-07-10 04:57:35.862195008 +0000 UTC m=+48.952671937" observedRunningTime="2025-07-10 04:57:36.210991009 +0000 UTC m=+49.301468218" watchObservedRunningTime="2025-07-10 04:57:36.212697446 +0000 UTC m=+49.303174375" Jul 10 04:57:37.753787 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:33192.service - OpenSSH per-connection server daemon (10.0.0.1:33192). Jul 10 04:57:37.834660 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 33192 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:37.836679 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:37.842075 systemd-logind[1509]: New session 10 of user core. Jul 10 04:57:37.854137 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 04:57:38.064802 sshd[5283]: Connection closed by 10.0.0.1 port 33192 Jul 10 04:57:38.065092 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:38.077429 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:33192.service: Deactivated successfully. Jul 10 04:57:38.079097 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 04:57:38.080079 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Jul 10 04:57:38.082316 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:33206.service - OpenSSH per-connection server daemon (10.0.0.1:33206). Jul 10 04:57:38.083810 systemd-logind[1509]: Removed session 10. Jul 10 04:57:38.151082 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:38.152426 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:38.156653 systemd-logind[1509]: New session 11 of user core. Jul 10 04:57:38.166174 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 04:57:38.401092 sshd[5300]: Connection closed by 10.0.0.1 port 33206 Jul 10 04:57:38.401430 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:38.414275 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:33206.service: Deactivated successfully. Jul 10 04:57:38.416529 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 04:57:38.419046 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. 
Jul 10 04:57:38.423233 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:33222.service - OpenSSH per-connection server daemon (10.0.0.1:33222). Jul 10 04:57:38.424304 systemd-logind[1509]: Removed session 11. Jul 10 04:57:38.479565 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 33222 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:38.480822 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:38.484930 systemd-logind[1509]: New session 12 of user core. Jul 10 04:57:38.491136 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 04:57:38.657007 sshd[5315]: Connection closed by 10.0.0.1 port 33222 Jul 10 04:57:38.657964 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:38.661867 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:33222.service: Deactivated successfully. Jul 10 04:57:38.664601 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 04:57:38.665569 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Jul 10 04:57:38.666554 systemd-logind[1509]: Removed session 12. Jul 10 04:57:39.848861 kubelet[2686]: I0710 04:57:39.848478 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 04:57:39.892452 containerd[1540]: time="2025-07-10T04:57:39.892388152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c\" id:\"5561b754092e1dd93dccfb46ea9ec15c5bd175f9526c9b0bb8840959c46701bf\" pid:5345 exited_at:{seconds:1752123459 nanos:890107546}" Jul 10 04:57:39.935962 containerd[1540]: time="2025-07-10T04:57:39.935919100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34a0470da3ee9bb1d1f4d973712cdd08389085217c8a81e959528d58f78de73c\" id:\"eec6b8e63437e747e95a052d938dfed9d2141abcf1b03958b7e1e86f8f09ab35\" pid:5367 exited_at:{seconds:1752123459 nanos:935709735}" Jul 10 04:57:41.838499 containerd[1540]: time="2025-07-10T04:57:41.838448990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521b70f9eafde74ae3a7a3bad9033d89bf1d8aee112988ecb5da71fcbba05cff\" id:\"29fa1e633cf969ae7bc99a0890f556b7ff96e8d13556c2d8e0ff2fbe044ce5be\" pid:5392 exited_at:{seconds:1752123461 nanos:838184665}" Jul 10 04:57:43.672829 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:49344.service - OpenSSH per-connection server daemon (10.0.0.1:49344). Jul 10 04:57:43.732209 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 49344 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:43.733655 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:43.737914 systemd-logind[1509]: New session 13 of user core. Jul 10 04:57:43.745168 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 04:57:43.886791 sshd[5414]: Connection closed by 10.0.0.1 port 49344 Jul 10 04:57:43.886903 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:43.896487 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:49344.service: Deactivated successfully. Jul 10 04:57:43.899413 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 04:57:43.901503 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Jul 10 04:57:43.903734 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:49360.service - OpenSSH per-connection server daemon (10.0.0.1:49360). 
Jul 10 04:57:43.905067 systemd-logind[1509]: Removed session 13. Jul 10 04:57:43.970286 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 49360 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:43.971754 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:43.976617 systemd-logind[1509]: New session 14 of user core. Jul 10 04:57:43.986145 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 04:57:44.200497 sshd[5430]: Connection closed by 10.0.0.1 port 49360 Jul 10 04:57:44.199392 sshd-session[5427]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:44.207580 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:49360.service: Deactivated successfully. Jul 10 04:57:44.209827 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 04:57:44.210713 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Jul 10 04:57:44.213086 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:49374.service - OpenSSH per-connection server daemon (10.0.0.1:49374). Jul 10 04:57:44.215322 systemd-logind[1509]: Removed session 14. Jul 10 04:57:44.269751 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 49374 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:44.271120 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:44.277154 systemd-logind[1509]: New session 15 of user core. Jul 10 04:57:44.286171 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 04:57:45.089788 sshd[5444]: Connection closed by 10.0.0.1 port 49374 Jul 10 04:57:45.090309 sshd-session[5441]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:45.099598 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:49374.service: Deactivated successfully. Jul 10 04:57:45.105072 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 04:57:45.106013 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Jul 10 04:57:45.111423 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:49388.service - OpenSSH per-connection server daemon (10.0.0.1:49388). Jul 10 04:57:45.113476 systemd-logind[1509]: Removed session 15. Jul 10 04:57:45.174337 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 49388 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:45.175601 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:45.179870 systemd-logind[1509]: New session 16 of user core. Jul 10 04:57:45.188147 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 04:57:45.484192 sshd[5466]: Connection closed by 10.0.0.1 port 49388 Jul 10 04:57:45.483513 sshd-session[5462]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:45.493391 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:49388.service: Deactivated successfully. Jul 10 04:57:45.498618 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 04:57:45.500634 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Jul 10 04:57:45.508650 systemd-logind[1509]: Removed session 16. Jul 10 04:57:45.510735 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:49396.service - OpenSSH per-connection server daemon (10.0.0.1:49396). 
Jul 10 04:57:45.572335 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 49396 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:45.573730 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:45.578868 systemd-logind[1509]: New session 17 of user core. Jul 10 04:57:45.591186 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 04:57:45.760928 sshd[5480]: Connection closed by 10.0.0.1 port 49396 Jul 10 04:57:45.761228 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:45.765553 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:49396.service: Deactivated successfully. Jul 10 04:57:45.765864 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Jul 10 04:57:45.767900 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 04:57:45.770090 systemd-logind[1509]: Removed session 17. Jul 10 04:57:50.776443 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:49398.service - OpenSSH per-connection server daemon (10.0.0.1:49398). Jul 10 04:57:50.817674 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 49398 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:50.818882 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:50.823219 systemd-logind[1509]: New session 18 of user core. Jul 10 04:57:50.829166 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 04:57:50.949703 sshd[5498]: Connection closed by 10.0.0.1 port 49398 Jul 10 04:57:50.950048 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:50.952918 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:49398.service: Deactivated successfully. Jul 10 04:57:50.955655 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 04:57:50.957175 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Jul 10 04:57:50.958530 systemd-logind[1509]: Removed session 18. Jul 10 04:57:52.199156 containerd[1540]: time="2025-07-10T04:57:52.199102362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9bf7583f664e66876af314850012f89fa18f2a8d829e3921f99c6d49f74c5a7\" id:\"e87631ff1d380ba235f0f27ca8c610715360d91fc03f145c6a8f5c0be456d416\" pid:5526 exited_at:{seconds:1752123472 nanos:198785797}" Jul 10 04:57:55.963432 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:59436.service - OpenSSH per-connection server daemon (10.0.0.1:59436). Jul 10 04:57:56.035825 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 59436 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:57:56.038425 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:57:56.041957 systemd-logind[1509]: New session 19 of user core. Jul 10 04:57:56.060118 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 04:57:56.195124 sshd[5546]: Connection closed by 10.0.0.1 port 59436 Jul 10 04:57:56.194670 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Jul 10 04:57:56.199787 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:59436.service: Deactivated successfully. Jul 10 04:57:56.201577 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 04:57:56.202397 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Jul 10 04:57:56.203385 systemd-logind[1509]: Removed session 19. 
Jul 10 04:58:01.209103 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:59440.service - OpenSSH per-connection server daemon (10.0.0.1:59440). Jul 10 04:58:01.273002 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 59440 ssh2: RSA SHA256:60x4kt+cNvrQ3rsJebu6okA1gOhUy1cCf4FkoCiFAMw Jul 10 04:58:01.274467 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 04:58:01.278914 systemd-logind[1509]: New session 20 of user core. Jul 10 04:58:01.289148 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 04:58:01.441186 sshd[5563]: Connection closed by 10.0.0.1 port 59440 Jul 10 04:58:01.441715 sshd-session[5560]: pam_unix(sshd:session): session closed for user core Jul 10 04:58:01.445540 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:59440.service: Deactivated successfully. Jul 10 04:58:01.448433 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 04:58:01.449330 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. Jul 10 04:58:01.450408 systemd-logind[1509]: Removed session 20.