Jul 6 23:38:46.859457 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:38:46.859479 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025
Jul 6 23:38:46.859489 kernel: KASLR enabled
Jul 6 23:38:46.859495 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:38:46.859500 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Jul 6 23:38:46.859506 kernel: random: crng init done
Jul 6 23:38:46.859513 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 6 23:38:46.859519 kernel: secureboot: Secure boot enabled
Jul 6 23:38:46.859525 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:38:46.859532 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 6 23:38:46.859542 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:38:46.859549 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859555 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859561 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859568 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859576 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859582 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859588 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859594 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859600 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:38:46.859607 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:38:46.859613 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:38:46.859619 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:38:46.859625 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
Jul 6 23:38:46.859632 kernel: Zone ranges:
Jul 6 23:38:46.859639 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:38:46.859646 kernel: DMA32 empty
Jul 6 23:38:46.859652 kernel: Normal empty
Jul 6 23:38:46.859658 kernel: Device empty
Jul 6 23:38:46.859664 kernel: Movable zone start for each node
Jul 6 23:38:46.859670 kernel: Early memory node ranges
Jul 6 23:38:46.859676 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 6 23:38:46.859682 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 6 23:38:46.859688 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 6 23:38:46.859695 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 6 23:38:46.859700 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 6 23:38:46.859707 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 6 23:38:46.859714 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 6 23:38:46.859720 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 6 23:38:46.859727 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:38:46.859736 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:38:46.859742 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:38:46.859749 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:38:46.859755 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:38:46.859763 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:38:46.859770 kernel: psci: Trusted OS migration not required
Jul 6 23:38:46.859776 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:38:46.859783 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:38:46.859790 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:38:46.859796 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:38:46.859803 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:38:46.859809 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:38:46.859816 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:38:46.859824 kernel: CPU features: detected: Spectre-v4
Jul 6 23:38:46.859831 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:38:46.859837 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:38:46.859844 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:38:46.859850 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:38:46.859857 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:38:46.859873 kernel: alternatives: applying boot alternatives
Jul 6 23:38:46.859892 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:38:46.859900 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:38:46.859907 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:38:46.859913 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:38:46.859922 kernel: Fallback order for Node 0: 0
Jul 6 23:38:46.859929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:38:46.859935 kernel: Policy zone: DMA
Jul 6 23:38:46.859942 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:38:46.859948 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:38:46.859955 kernel: software IO TLB: area num 4.
Jul 6 23:38:46.859961 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:38:46.859968 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 6 23:38:46.859974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:38:46.859981 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:38:46.859988 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:38:46.859995 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:38:46.860003 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:38:46.860010 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:38:46.860017 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:38:46.860023 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:38:46.860030 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:38:46.860037 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:38:46.860044 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:38:46.860050 kernel: GICv3: 256 SPIs implemented
Jul 6 23:38:46.860057 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:38:46.860063 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:38:46.860070 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:38:46.860077 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:38:46.860086 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:38:46.860095 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:38:46.860105 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:38:46.860111 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:38:46.860118 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:38:46.860125 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:38:46.860131 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:38:46.860138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:38:46.860145 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:38:46.860152 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:38:46.860159 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:38:46.860167 kernel: arm-pv: using stolen time PV
Jul 6 23:38:46.860174 kernel: Console: colour dummy device 80x25
Jul 6 23:38:46.860181 kernel: ACPI: Core revision 20240827
Jul 6 23:38:46.860187 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:38:46.860194 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:38:46.860201 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:38:46.860208 kernel: landlock: Up and running.
Jul 6 23:38:46.860215 kernel: SELinux: Initializing.
Jul 6 23:38:46.860234 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:38:46.860242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:38:46.860249 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:38:46.860255 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:38:46.860262 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:38:46.860268 kernel: Remapping and enabling EFI services.
Jul 6 23:38:46.860275 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:38:46.860281 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:38:46.860288 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:38:46.860295 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:38:46.860304 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:38:46.860316 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:38:46.860323 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:38:46.860332 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:38:46.860339 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:38:46.860346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:38:46.860353 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:38:46.860360 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:38:46.860367 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:38:46.860376 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:38:46.860384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:38:46.860390 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:38:46.860397 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:38:46.860404 kernel: SMP: Total of 4 processors activated.
Jul 6 23:38:46.860411 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:38:46.860418 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:38:46.860425 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:38:46.860434 kernel: CPU features: detected: Common not Private translations
Jul 6 23:38:46.860441 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:38:46.860448 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:38:46.860456 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:38:46.860463 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:38:46.860470 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:38:46.860477 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:38:46.860484 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:38:46.860491 kernel: alternatives: applying system-wide alternatives
Jul 6 23:38:46.860499 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:38:46.860508 kernel: Memory: 2438448K/2572288K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 127892K reserved, 0K cma-reserved)
Jul 6 23:38:46.860515 kernel: devtmpfs: initialized
Jul 6 23:38:46.860522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:38:46.860529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:38:46.860536 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:38:46.860543 kernel: 0 pages in range for non-PLT usage
Jul 6 23:38:46.860550 kernel: 508480 pages in range for PLT usage
Jul 6 23:38:46.860557 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:38:46.860564 kernel: SMBIOS 3.0.0 present.
Jul 6 23:38:46.860573 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:38:46.860580 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:38:46.860588 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:38:46.860595 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:38:46.860602 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:38:46.860609 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:38:46.860616 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:38:46.860624 kernel: audit: type=2000 audit(0.053:1): state=initialized audit_enabled=0 res=1
Jul 6 23:38:46.860633 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:38:46.860641 kernel: cpuidle: using governor menu
Jul 6 23:38:46.860651 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:38:46.860663 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:38:46.860674 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:38:46.860681 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:38:46.860688 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:38:46.860697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:38:46.860705 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:38:46.860716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:38:46.860724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:38:46.860731 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:38:46.860738 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:38:46.860745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:38:46.860752 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:38:46.860760 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:38:46.860767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:38:46.860774 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:38:46.860787 kernel: ACPI: Interpreter enabled
Jul 6 23:38:46.860796 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:38:46.860803 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:38:46.860810 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:38:46.860817 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:38:46.860824 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:38:46.860831 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:38:46.860839 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:38:46.860846 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:38:46.860853 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:38:46.861012 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:38:46.861081 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:38:46.861144 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:38:46.861208 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:38:46.861325 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:38:46.861336 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:38:46.861344 kernel: PCI host bridge to bus 0000:00
Jul 6 23:38:46.861422 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:38:46.861491 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:38:46.861546 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:38:46.861628 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:38:46.861728 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:38:46.861805 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:38:46.862099 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:38:46.862191 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:38:46.862255 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:38:46.862316 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:38:46.862377 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:38:46.862438 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:38:46.862499 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:38:46.862569 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:38:46.862628 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:38:46.862637 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:38:46.862644 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:38:46.862652 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:38:46.862659 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:38:46.862666 kernel: iommu: Default domain type: Translated
Jul 6 23:38:46.862673 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:38:46.862682 kernel: efivars: Registered efivars operations
Jul 6 23:38:46.862690 kernel: vgaarb: loaded
Jul 6 23:38:46.862697 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:38:46.862704 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:38:46.862711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:38:46.862718 kernel: pnp: PnP ACPI init
Jul 6 23:38:46.862801 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:38:46.862814 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:38:46.862822 kernel: NET: Registered PF_INET protocol family
Jul 6 23:38:46.862833 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:38:46.862842 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:38:46.862850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:38:46.862859 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:38:46.862887 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:38:46.862897 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:38:46.862905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:38:46.862914 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:38:46.862923 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:38:46.862934 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:38:46.862943 kernel: kvm [1]: HYP mode not available
Jul 6 23:38:46.862951 kernel: Initialise system trusted keyrings
Jul 6 23:38:46.862960 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:38:46.862967 kernel: Key type asymmetric registered
Jul 6 23:38:46.862975 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:38:46.862982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:38:46.862989 kernel: io scheduler mq-deadline registered
Jul 6 23:38:46.862996 kernel: io scheduler kyber registered
Jul 6 23:38:46.863005 kernel: io scheduler bfq registered
Jul 6 23:38:46.863012 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:38:46.863019 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:38:46.863027 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:38:46.863100 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:38:46.863110 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:38:46.863118 kernel: thunder_xcv, ver 1.0
Jul 6 23:38:46.863125 kernel: thunder_bgx, ver 1.0
Jul 6 23:38:46.863132 kernel: nicpf, ver 1.0
Jul 6 23:38:46.863141 kernel: nicvf, ver 1.0
Jul 6 23:38:46.863210 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:38:46.863268 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:38:46 UTC (1751845126)
Jul 6 23:38:46.863278 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:38:46.863285 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:38:46.863292 kernel: watchdog: NMI not fully supported
Jul 6 23:38:46.863299 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:38:46.863306 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:38:46.863315 kernel: Segment Routing with IPv6
Jul 6 23:38:46.863323 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:38:46.863330 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:38:46.863337 kernel: Key type dns_resolver registered
Jul 6 23:38:46.863344 kernel: registered taskstats version 1
Jul 6 23:38:46.863351 kernel: Loading compiled-in X.509 certificates
Jul 6 23:38:46.863358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb'
Jul 6 23:38:46.863366 kernel: Demotion targets for Node 0: null
Jul 6 23:38:46.863373 kernel: Key type .fscrypt registered
Jul 6 23:38:46.863381 kernel: Key type fscrypt-provisioning registered
Jul 6 23:38:46.863389 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:38:46.863396 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:38:46.863403 kernel: ima: No architecture policies found
Jul 6 23:38:46.863411 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:38:46.863418 kernel: clk: Disabling unused clocks
Jul 6 23:38:46.863425 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:38:46.863432 kernel: Warning: unable to open an initial console.
Jul 6 23:38:46.863439 kernel: Freeing unused kernel memory: 39424K
Jul 6 23:38:46.863448 kernel: Run /init as init process
Jul 6 23:38:46.863455 kernel: with arguments:
Jul 6 23:38:46.863462 kernel: /init
Jul 6 23:38:46.863469 kernel: with environment:
Jul 6 23:38:46.863476 kernel: HOME=/
Jul 6 23:38:46.863483 kernel: TERM=linux
Jul 6 23:38:46.863490 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:38:46.863498 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:38:46.863509 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:38:46.863518 systemd[1]: Detected virtualization kvm.
Jul 6 23:38:46.863525 systemd[1]: Detected architecture arm64.
Jul 6 23:38:46.863532 systemd[1]: Running in initrd.
Jul 6 23:38:46.863540 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:38:46.863548 systemd[1]: Hostname set to .
Jul 6 23:38:46.863555 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:38:46.863563 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:38:46.863573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:38:46.863581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:38:46.863589 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:38:46.863597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:38:46.863604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:38:46.863613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:38:46.863623 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:38:46.863631 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:38:46.863639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:38:46.863647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:38:46.863655 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:38:46.863662 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:38:46.863670 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:38:46.863677 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:38:46.863685 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:38:46.863694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:38:46.863702 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:38:46.863709 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:38:46.863717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:38:46.863724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:38:46.863732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:38:46.863740 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:38:46.863748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:38:46.863757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:38:46.863765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:38:46.863773 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:38:46.863781 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:38:46.863789 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:38:46.863796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:38:46.863804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:38:46.863812 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:38:46.863821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:38:46.863829 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:38:46.863837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:38:46.863867 systemd-journald[244]: Collecting audit messages is disabled.
Jul 6 23:38:46.863898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:38:46.863908 systemd-journald[244]: Journal started
Jul 6 23:38:46.863926 systemd-journald[244]: Runtime Journal (/run/log/journal/6484510b66e446419ad5b42ba10ec358) is 6M, max 48.5M, 42.4M free.
Jul 6 23:38:46.841618 systemd-modules-load[245]: Inserted module 'overlay'
Jul 6 23:38:46.865671 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:38:46.867486 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:38:46.870899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:38:46.872040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:38:46.876066 kernel: Bridge firewalling registered
Jul 6 23:38:46.873716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:38:46.874565 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 6 23:38:46.877716 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:38:46.890143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:38:46.893383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:38:46.896638 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:38:46.900176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:38:46.901545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:38:46.906123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:38:46.908178 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:38:46.909179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:38:46.920325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:38:46.935049 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:38:46.969267 systemd-resolved[288]: Positive Trust Anchors:
Jul 6 23:38:46.969283 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:38:46.969314 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:38:46.975194 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 6 23:38:46.976293 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:38:46.977439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:38:47.038909 kernel: SCSI subsystem initialized
Jul 6 23:38:47.043894 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:38:47.069929 kernel: iscsi: registered transport (tcp)
Jul 6 23:38:47.086621 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:38:47.086673 kernel: QLogic iSCSI HBA Driver
Jul 6 23:38:47.116455 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:38:47.150664 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:38:47.155352 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:38:47.249090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:38:47.250965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:38:47.342909 kernel: raid6: neonx8 gen() 15774 MB/s
Jul 6 23:38:47.359898 kernel: raid6: neonx4 gen() 15807 MB/s
Jul 6 23:38:47.376895 kernel: raid6: neonx2 gen() 13193 MB/s
Jul 6 23:38:47.393920 kernel: raid6: neonx1 gen() 10511 MB/s
Jul 6 23:38:47.410901 kernel: raid6: int64x8 gen() 6895 MB/s
Jul 6 23:38:47.427901 kernel: raid6: int64x4 gen() 7346 MB/s
Jul 6 23:38:47.444925 kernel: raid6: int64x2 gen() 6099 MB/s
Jul 6 23:38:47.461913 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 6 23:38:47.461949 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s
Jul 6 23:38:47.478904 kernel: raid6: .... xor() 12328 MB/s, rmw enabled
Jul 6 23:38:47.478932 kernel: raid6: using neon recovery algorithm
Jul 6 23:38:47.484909 kernel: xor: measuring software checksum speed
Jul 6 23:38:47.484972 kernel: 8regs : 21613 MB/sec
Jul 6 23:38:47.484984 kernel: 32regs : 19074 MB/sec
Jul 6 23:38:47.486471 kernel: arm64_neon : 27936 MB/sec
Jul 6 23:38:47.486487 kernel: xor: using function: arm64_neon (27936 MB/sec)
Jul 6 23:38:47.553937 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:38:47.564639 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:38:47.567568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:38:47.598707 systemd-udevd[496]: Using default interface naming scheme 'v255'.
Jul 6 23:38:47.603332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:38:47.605134 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:38:47.630228 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
Jul 6 23:38:47.658943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:38:47.661136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:38:47.723921 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:38:47.727012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:38:47.782901 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 6 23:38:47.786551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:38:47.790035 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:38:47.786682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:38:47.791951 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:38:47.794004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:38:47.803147 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:38:47.803172 kernel: GPT:9289727 != 19775487 Jul 6 23:38:47.803182 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:38:47.803190 kernel: GPT:9289727 != 19775487 Jul 6 23:38:47.803199 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:38:47.803208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:38:47.828710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:38:47.846808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:38:47.854418 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:38:47.862033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:38:47.868349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:38:47.869313 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:38:47.871147 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:38:47.892055 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:38:47.893458 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:38:47.894580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:38:47.896361 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:38:47.899030 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:38:47.906281 disk-uuid[588]: Primary Header is updated. 
Jul 6 23:38:47.906281 disk-uuid[588]: Secondary Entries is updated. Jul 6 23:38:47.906281 disk-uuid[588]: Secondary Header is updated. Jul 6 23:38:47.920670 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:38:47.923906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:38:48.930965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:38:48.933309 disk-uuid[590]: The operation has completed successfully. Jul 6 23:38:48.969073 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:38:48.969183 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:38:48.993093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:38:49.018212 sh[610]: Success Jul 6 23:38:49.038189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:38:49.038260 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:38:49.039656 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:38:49.057273 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:38:49.091181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:38:49.096679 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:38:49.113608 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:38:49.120959 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:38:49.121023 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622) Jul 6 23:38:49.125058 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 Jul 6 23:38:49.125127 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:38:49.125138 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:38:49.130148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:38:49.131275 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:38:49.133606 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:38:49.134557 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:38:49.139039 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:38:49.160898 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (651) Jul 6 23:38:49.162970 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:38:49.163021 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:38:49.163900 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:38:49.170959 kernel: BTRFS info (device vda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:38:49.172588 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:38:49.176033 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:38:49.286923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 6 23:38:49.300032 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:38:49.329485 ignition[698]: Ignition 2.21.0 Jul 6 23:38:49.329544 ignition[698]: Stage: fetch-offline Jul 6 23:38:49.329574 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:49.329584 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:49.329766 ignition[698]: parsed url from cmdline: "" Jul 6 23:38:49.329769 ignition[698]: no config URL provided Jul 6 23:38:49.329774 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:38:49.329780 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:38:49.329801 ignition[698]: op(1): [started] loading QEMU firmware config module Jul 6 23:38:49.329805 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:38:49.335669 ignition[698]: op(1): [finished] loading QEMU firmware config module Jul 6 23:38:49.341806 systemd-networkd[803]: lo: Link UP Jul 6 23:38:49.341820 systemd-networkd[803]: lo: Gained carrier Jul 6 23:38:49.342794 systemd-networkd[803]: Enumeration completed Jul 6 23:38:49.342931 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:38:49.343997 systemd[1]: Reached target network.target - Network. Jul 6 23:38:49.344700 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:38:49.344705 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:38:49.345286 systemd-networkd[803]: eth0: Link UP Jul 6 23:38:49.345289 systemd-networkd[803]: eth0: Gained carrier Jul 6 23:38:49.345297 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:38:49.361949 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:38:49.383729 ignition[698]: parsing config with SHA512: dd700f0af2daa2816f3ac577e09e83ce646dbcde258a87058bd406e062a49bf9017f0ac33ae2b35f69b61f5a01f7e07dfb33282d16bf97d5ae09effa004e08b1 Jul 6 23:38:49.389864 unknown[698]: fetched base config from "system" Jul 6 23:38:49.389893 unknown[698]: fetched user config from "qemu" Jul 6 23:38:49.390266 ignition[698]: fetch-offline: fetch-offline passed Jul 6 23:38:49.390322 ignition[698]: Ignition finished successfully Jul 6 23:38:49.392615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:38:49.393844 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:38:49.394657 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:38:49.423918 ignition[810]: Ignition 2.21.0 Jul 6 23:38:49.423932 ignition[810]: Stage: kargs Jul 6 23:38:49.424069 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:49.424078 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:49.425187 ignition[810]: kargs: kargs passed Jul 6 23:38:49.426921 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:38:49.425246 ignition[810]: Ignition finished successfully Jul 6 23:38:49.429030 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:38:49.460442 ignition[818]: Ignition 2.21.0 Jul 6 23:38:49.460458 ignition[818]: Stage: disks Jul 6 23:38:49.460834 ignition[818]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:49.460845 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:49.462406 ignition[818]: disks: disks passed Jul 6 23:38:49.464481 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 6 23:38:49.462480 ignition[818]: Ignition finished successfully Jul 6 23:38:49.465483 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:38:49.466693 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:38:49.467965 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:38:49.469258 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:38:49.470681 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:38:49.472862 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:38:49.504253 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:38:49.509983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:38:49.511994 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:38:49.578982 kernel: EXT4-fs (vda9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none. Jul 6 23:38:49.579592 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:38:49.580683 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:38:49.583333 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:38:49.584921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:38:49.585809 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:38:49.585868 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:38:49.585918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:38:49.599663 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jul 6 23:38:49.602173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:38:49.605512 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836) Jul 6 23:38:49.605546 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:38:49.605557 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:38:49.606899 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:38:49.609576 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:38:49.650496 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:38:49.653928 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:38:49.658149 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:38:49.661911 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:38:49.740827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:38:49.742695 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:38:49.744323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:38:49.765924 kernel: BTRFS info (device vda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:38:49.781333 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:38:49.808387 ignition[951]: INFO : Ignition 2.21.0 Jul 6 23:38:49.808387 ignition[951]: INFO : Stage: mount Jul 6 23:38:49.809706 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:49.809706 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:49.812041 ignition[951]: INFO : mount: mount passed Jul 6 23:38:49.812041 ignition[951]: INFO : Ignition finished successfully Jul 6 23:38:49.812719 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 6 23:38:49.814911 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:38:50.121527 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:38:50.125067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:38:50.141788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963) Jul 6 23:38:50.141858 kernel: BTRFS info (device vda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:38:50.141871 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:38:50.142491 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:38:50.146339 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:38:50.172606 ignition[980]: INFO : Ignition 2.21.0 Jul 6 23:38:50.172606 ignition[980]: INFO : Stage: files Jul 6 23:38:50.175031 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:50.175031 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:50.175031 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:38:50.177630 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:38:50.177630 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:38:50.180308 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:38:50.181390 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:38:50.181390 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:38:50.180930 unknown[980]: wrote ssh authorized keys file for user: core Jul 6 23:38:50.184470 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 6 23:38:50.184470 ignition[980]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 6 23:38:50.218300 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:38:50.424524 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 6 23:38:50.424524 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:38:50.427280 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:38:50.437162 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:38:50.437162 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:38:50.437162 ignition[980]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:38:50.441978 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:38:50.443986 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:38:50.443986 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 6 23:38:50.692013 systemd-networkd[803]: eth0: Gained IPv6LL Jul 6 23:38:50.957573 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:38:51.609512 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 6 23:38:51.609512 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:38:51.612615 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(d): op(e): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 6 23:38:51.614179 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:38:51.630953 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:38:51.634690 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:38:51.637218 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:38:51.637218 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:38:51.637218 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:38:51.637218 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:38:51.637218 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:38:51.637218 ignition[980]: INFO : files: files passed Jul 6 23:38:51.637218 ignition[980]: INFO : Ignition finished successfully Jul 6 23:38:51.638158 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:38:51.640722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:38:51.644135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:38:51.654781 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:38:51.654907 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 6 23:38:51.658169 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:38:51.661458 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:38:51.661458 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:38:51.665458 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:38:51.664897 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:38:51.666917 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:38:51.669874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:38:51.737083 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:38:51.737217 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:38:51.739101 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:38:51.740690 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:38:51.742281 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:38:51.743163 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:38:51.769452 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:38:51.771752 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:38:51.799312 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:38:51.800312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:38:51.802045 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 6 23:38:51.803653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:38:51.803782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:38:51.806069 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:38:51.807785 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:38:51.809319 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:38:51.810719 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:38:51.812368 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:38:51.814057 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:38:51.815714 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:38:51.817359 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:38:51.819045 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:38:51.820696 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:38:51.822176 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:38:51.823571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:38:51.823703 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:38:51.825811 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:38:51.827530 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:38:51.829171 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:38:51.829970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:38:51.831045 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:38:51.831176 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 6 23:38:51.834228 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:38:51.834431 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:38:51.836116 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:38:51.837527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:38:51.839954 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:38:51.841345 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:38:51.842671 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:38:51.844519 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:38:51.844652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:38:51.846992 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:38:51.847140 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:38:51.848475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:38:51.848710 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:38:51.850612 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:38:51.850760 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:38:51.853524 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:38:51.854875 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:38:51.855082 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:38:51.865636 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:38:51.867140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:38:51.867333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 6 23:38:51.869186 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:38:51.869391 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:38:51.878748 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:38:51.878911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:38:51.883822 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:38:51.885023 ignition[1036]: INFO : Ignition 2.21.0 Jul 6 23:38:51.885023 ignition[1036]: INFO : Stage: umount Jul 6 23:38:51.885023 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:38:51.885023 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:38:51.889217 ignition[1036]: INFO : umount: umount passed Jul 6 23:38:51.889217 ignition[1036]: INFO : Ignition finished successfully Jul 6 23:38:51.890925 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:38:51.891025 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:38:51.893388 systemd[1]: Stopped target network.target - Network. Jul 6 23:38:51.895715 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:38:51.895836 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:38:51.897059 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:38:51.897108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:38:51.898311 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:38:51.898360 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:38:51.899708 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:38:51.899756 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:38:51.901417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jul 6 23:38:51.902754 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:38:51.904417 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:38:51.904523 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:38:51.906066 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:38:51.906189 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:38:51.913778 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:38:51.913943 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:38:51.918497 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:38:51.918748 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:38:51.918875 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:38:51.922070 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:38:51.922722 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:38:51.923763 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:38:51.923804 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:38:51.926385 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:38:51.927527 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:38:51.927605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:38:51.929195 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:38:51.929246 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:38:51.931708 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:38:51.931757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 6 23:38:51.933238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:38:51.933286 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:38:51.935597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:38:51.942822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:38:51.942936 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:38:51.951778 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:38:51.952011 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:38:51.953815 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:38:51.953960 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:38:51.955804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:38:51.955917 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:38:51.956763 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:38:51.956790 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:38:51.958244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:38:51.958297 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:38:51.960348 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:38:51.960400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:38:51.962619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:38:51.962677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:38:51.965723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:38:51.966997 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:38:51.967065 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:38:51.971058 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:38:51.971109 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:38:51.973180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:38:51.973230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:38:51.976649 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 6 23:38:51.976704 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:38:51.976734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:38:51.986559 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:38:51.986698 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:38:51.988577 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:38:51.990608 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:38:52.012217 systemd[1]: Switching root.
Jul 6 23:38:52.043234 systemd-journald[244]: Journal stopped
Jul 6 23:38:52.881439 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:38:52.881489 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:38:52.881501 kernel: SELinux: policy capability open_perms=1
Jul 6 23:38:52.881511 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:38:52.881520 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:38:52.881533 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:38:52.881542 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:38:52.881551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:38:52.881560 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:38:52.881579 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:38:52.881590 kernel: audit: type=1403 audit(1751845132.209:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:38:52.881600 systemd[1]: Successfully loaded SELinux policy in 41.244ms.
Jul 6 23:38:52.881619 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.221ms.
Jul 6 23:38:52.881630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:38:52.881640 systemd[1]: Detected virtualization kvm.
Jul 6 23:38:52.881652 systemd[1]: Detected architecture arm64.
Jul 6 23:38:52.881662 systemd[1]: Detected first boot.
Jul 6 23:38:52.881672 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:38:52.881682 zram_generator::config[1081]: No configuration found.
Jul 6 23:38:52.881692 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:38:52.881702 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:38:52.881713 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:38:52.881723 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:38:52.881734 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:38:52.881746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:38:52.881756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:38:52.881766 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:38:52.881776 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:38:52.881786 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:38:52.881797 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:38:52.881808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:38:52.881818 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:38:52.881830 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:38:52.881840 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:38:52.881857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:38:52.881868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:38:52.881890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:38:52.881901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:38:52.881911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:38:52.881925 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:38:52.881937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:38:52.881948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:38:52.881958 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:38:52.881968 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:38:52.881978 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:38:52.881988 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:38:52.881998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:38:52.882012 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:38:52.882022 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:38:52.882033 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:38:52.882044 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:38:52.882055 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:38:52.882065 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:38:52.882076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:38:52.882086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:38:52.882096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:38:52.882107 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:38:52.882118 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:38:52.882129 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:38:52.882140 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:38:52.882150 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:38:52.882159 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:38:52.882169 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:38:52.882180 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:38:52.882190 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:38:52.882200 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:38:52.882211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:38:52.882223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:38:52.882233 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:38:52.882243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:38:52.882253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:38:52.882264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:38:52.882277 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:38:52.882287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:38:52.882298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:38:52.882309 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:38:52.882320 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:38:52.882330 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:38:52.882340 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:38:52.882351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:38:52.882361 kernel: fuse: init (API version 7.41)
Jul 6 23:38:52.882371 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:38:52.882381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:38:52.882392 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:38:52.882404 kernel: ACPI: bus type drm_connector registered
Jul 6 23:38:52.882413 kernel: loop: module loaded
Jul 6 23:38:52.882423 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:38:52.882434 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:38:52.882444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:38:52.882456 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:38:52.882466 systemd[1]: Stopped verity-setup.service.
Jul 6 23:38:52.882475 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:38:52.882486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:38:52.882497 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:38:52.882507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:38:52.882517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:38:52.882527 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:38:52.882561 systemd-journald[1151]: Collecting audit messages is disabled.
Jul 6 23:38:52.882583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:38:52.882593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:38:52.882605 systemd-journald[1151]: Journal started
Jul 6 23:38:52.882628 systemd-journald[1151]: Runtime Journal (/run/log/journal/6484510b66e446419ad5b42ba10ec358) is 6M, max 48.5M, 42.4M free.
Jul 6 23:38:52.648584 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:38:52.669005 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:38:52.669427 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:38:52.885905 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:38:52.886520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:38:52.886714 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:38:52.888149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:38:52.888325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:38:52.889544 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:38:52.889709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:38:52.890785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:38:52.890977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:38:52.892151 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:38:52.892335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:38:52.893562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:38:52.893754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:38:52.895106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:38:52.896291 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:38:52.897530 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:38:52.899025 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:38:52.911729 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:38:52.914395 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:38:52.916536 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:38:52.917516 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:38:52.917549 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:38:52.919347 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:38:52.925100 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:38:52.926118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:38:52.929056 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:38:52.931453 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:38:52.932504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:38:52.934347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:38:52.935454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:38:52.938307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:38:52.941834 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:38:52.945717 systemd-journald[1151]: Time spent on flushing to /var/log/journal/6484510b66e446419ad5b42ba10ec358 is 11.972ms for 880 entries.
Jul 6 23:38:52.945717 systemd-journald[1151]: System Journal (/var/log/journal/6484510b66e446419ad5b42ba10ec358) is 8M, max 195.6M, 187.6M free.
Jul 6 23:38:52.968652 systemd-journald[1151]: Received client request to flush runtime journal.
Jul 6 23:38:52.947178 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:38:52.953176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:38:52.957492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:38:52.958924 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:38:52.960201 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:38:52.964974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:38:52.977150 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:38:52.978630 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:38:52.980418 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:38:52.990911 kernel: loop0: detected capacity change from 0 to 207008
Jul 6 23:38:53.008016 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:38:53.009138 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:38:53.020215 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:38:53.025133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:38:53.031902 kernel: loop1: detected capacity change from 0 to 138376
Jul 6 23:38:53.051488 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 6 23:38:53.051508 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 6 23:38:53.056517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:38:53.069970 kernel: loop2: detected capacity change from 0 to 107312
Jul 6 23:38:53.104910 kernel: loop3: detected capacity change from 0 to 207008
Jul 6 23:38:53.111905 kernel: loop4: detected capacity change from 0 to 138376
Jul 6 23:38:53.120924 kernel: loop5: detected capacity change from 0 to 107312
Jul 6 23:38:53.127183 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:38:53.127586 (sd-merge)[1220]: Merged extensions into '/usr'.
Jul 6 23:38:53.132756 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:38:53.132770 systemd[1]: Reloading...
Jul 6 23:38:53.194966 zram_generator::config[1252]: No configuration found.
Jul 6 23:38:53.251757 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:38:53.282388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:38:53.346161 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:38:53.346415 systemd[1]: Reloading finished in 213 ms.
Jul 6 23:38:53.377719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:38:53.381032 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:38:53.396297 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:38:53.398071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:38:53.419672 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:38:53.420202 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:38:53.420443 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:38:53.420461 systemd[1]: Reloading...
Jul 6 23:38:53.420653 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:38:53.420948 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:38:53.421674 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:38:53.422042 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 6 23:38:53.422169 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 6 23:38:53.425463 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:38:53.425614 systemd-tmpfiles[1284]: Skipping /boot
Jul 6 23:38:53.434996 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:38:53.435133 systemd-tmpfiles[1284]: Skipping /boot
Jul 6 23:38:53.477904 zram_generator::config[1317]: No configuration found.
Jul 6 23:38:53.547034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:38:53.611633 systemd[1]: Reloading finished in 190 ms.
Jul 6 23:38:53.636636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:38:53.642420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:38:53.658526 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:38:53.660963 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:38:53.678325 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:38:53.681725 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:38:53.684088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:38:53.688076 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:38:53.694837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:38:53.698600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:38:53.711275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:38:53.715450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:38:53.717139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:38:53.717277 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:38:53.718378 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:38:53.721630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:38:53.722159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:38:53.724540 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:38:53.724749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:38:53.726430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:38:53.726601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:38:53.734281 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:38:53.741273 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Jul 6 23:38:53.742564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:38:53.744134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:38:53.746242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:38:53.748390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:38:53.749496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:38:53.749688 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:38:53.756168 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:38:53.761402 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:38:53.762413 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:38:53.764180 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:38:53.765910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:38:53.766090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:38:53.767548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:38:53.767714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:38:53.769296 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:38:53.771999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:38:53.775141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:38:53.776286 augenrules[1385]: No rules
Jul 6 23:38:53.778960 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:38:53.780419 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:38:53.780652 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:38:53.797419 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:38:53.803066 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:38:53.803993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:38:53.805248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:38:53.809468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:38:53.816418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:38:53.820667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:38:53.821929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:38:53.821986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:38:53.824026 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:38:53.829153 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:38:53.831993 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:38:53.832668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:38:53.832864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:38:53.836280 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:38:53.836460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:38:53.837788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:38:53.838052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:38:53.839245 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:38:53.839416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:38:53.849462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:38:53.849529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:38:53.856354 augenrules[1419]: /sbin/augenrules: No change
Jul 6 23:38:53.866533 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:38:53.868265 augenrules[1451]: No rules
Jul 6 23:38:53.870250 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:38:53.871961 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:38:53.907757 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:38:53.951627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:38:53.956657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:38:53.998819 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:38:54.000037 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:38:54.009448 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:38:54.022156 systemd-networkd[1429]: lo: Link UP
Jul 6 23:38:54.022164 systemd-networkd[1429]: lo: Gained carrier
Jul 6 23:38:54.023444 systemd-networkd[1429]: Enumeration completed
Jul 6 23:38:54.023617 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:38:54.023907 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:38:54.023915 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:38:54.024444 systemd-networkd[1429]: eth0: Link UP
Jul 6 23:38:54.024560 systemd-networkd[1429]: eth0: Gained carrier
Jul 6 23:38:54.024579 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:38:54.027644 systemd-resolved[1350]: Positive Trust Anchors:
Jul 6 23:38:54.027664 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:38:54.027697 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:38:54.028256 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:38:54.031830 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:38:54.039078 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:38:54.040743 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Jul 6 23:38:54.041004 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection.
Jul 6 23:38:54.043976 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:38:54.044054 systemd-timesyncd[1431]: Initial clock synchronization to Sun 2025-07-06 23:38:54.386166 UTC.
Jul 6 23:38:54.054239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:38:54.055198 systemd[1]: Reached target network.target - Network.
Jul 6 23:38:54.055889 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:38:54.056756 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:38:54.057654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:38:54.058596 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:38:54.059669 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:38:54.061108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:38:54.062010 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:38:54.063723 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:38:54.063765 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:38:54.064670 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:38:54.066387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:38:54.069058 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:38:54.072822 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:38:54.074054 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:38:54.074970 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:38:54.078259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:38:54.079875 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:38:54.081933 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:38:54.083104 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:38:54.083971 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:38:54.084828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:38:54.084859 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:38:54.086153 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:38:54.088075 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:38:54.091194 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:38:54.102035 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:38:54.106376 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:38:54.107263 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:38:54.108643 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:38:54.113107 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:38:54.119099 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:38:54.121243 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:38:54.127068 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:38:54.128822 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:38:54.129495 jq[1497]: false
Jul 6 23:38:54.129456 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:38:54.130312 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:38:54.135065 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:38:54.137130 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:38:54.139447 extend-filesystems[1498]: Found /dev/vda6
Jul 6 23:38:54.141137 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:38:54.142728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:38:54.143082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:38:54.144070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:38:54.144780 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:38:54.152947 jq[1511]: true
Jul 6 23:38:54.155220 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:38:54.155508 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:38:54.159345 extend-filesystems[1498]: Found /dev/vda9
Jul 6 23:38:54.162444 extend-filesystems[1498]: Checking size of /dev/vda9
Jul 6 23:38:54.169206 jq[1526]: true
Jul 6 23:38:54.179863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:38:54.187217 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:38:54.200429 extend-filesystems[1498]: Resized partition /dev/vda9
Jul 6 23:38:54.209327 extend-filesystems[1546]: resize2fs 1.47.2 (1-Jan-2025)
Jul 6 23:38:54.210289 tar[1520]: linux-arm64/LICENSE
Jul 6 23:38:54.210289 tar[1520]: linux-arm64/helm
Jul 6 23:38:54.224891 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:38:54.223898 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:38:54.223575 dbus-daemon[1495]: [system] SELinux support is enabled
Jul 6 23:38:54.227776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:38:54.227807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:38:54.228827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:38:54.228858 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:38:54.230520 update_engine[1510]: I20250706 23:38:54.230101 1510 main.cc:92] Flatcar Update Engine starting
Jul 6 23:38:54.241008 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:38:54.242772 update_engine[1510]: I20250706 23:38:54.242177 1510 update_check_scheduler.cc:74] Next update check in 6m26s
Jul 6 23:38:54.244434 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:38:54.268550 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:38:54.271437 systemd-logind[1504]: New seat seat0.
Jul 6 23:38:54.272644 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:38:54.283903 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:38:54.298621 bash[1559]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:38:54.302709 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:38:54.302709 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:38:54.302709 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:38:54.302648 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:38:54.314284 extend-filesystems[1498]: Resized filesystem in /dev/vda9
Jul 6 23:38:54.304828 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:38:54.311053 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:38:54.311289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:38:54.328021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:38:54.356741 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:38:54.458483 containerd[1533]: time="2025-07-06T23:38:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:38:54.462346 containerd[1533]: time="2025-07-06T23:38:54.462295400Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:38:54.474581 containerd[1533]: time="2025-07-06T23:38:54.474522800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.4µs"
Jul 6 23:38:54.474581 containerd[1533]: time="2025-07-06T23:38:54.474566760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:38:54.474581 containerd[1533]: time="2025-07-06T23:38:54.474586840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:38:54.474780 containerd[1533]: time="2025-07-06T23:38:54.474758960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:38:54.474811 containerd[1533]: time="2025-07-06T23:38:54.474779360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:38:54.474811 containerd[1533]: time="2025-07-06T23:38:54.474805920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.474871480Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.474900360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475263240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475283800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475296800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475305040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475380920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475810440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475924040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.475936720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:38:54.476736 containerd[1533]: time="2025-07-06T23:38:54.476607920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:38:54.477152 containerd[1533]: time="2025-07-06T23:38:54.477115000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:38:54.477241 containerd[1533]: time="2025-07-06T23:38:54.477214400Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:38:54.481539 containerd[1533]: time="2025-07-06T23:38:54.481487240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481576920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481593360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481612200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481682080Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481697400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:38:54.481711 containerd[1533]: time="2025-07-06T23:38:54.481709560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:38:54.481814 containerd[1533]: time="2025-07-06T23:38:54.481721560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:38:54.481814 containerd[1533]: time="2025-07-06T23:38:54.481737080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:38:54.481814 containerd[1533]: time="2025-07-06T23:38:54.481748080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:38:54.481814 containerd[1533]: time="2025-07-06T23:38:54.481757680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:38:54.481814 containerd[1533]: time="2025-07-06T23:38:54.481770360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:38:54.481959 containerd[1533]: time="2025-07-06T23:38:54.481938640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:38:54.482048 containerd[1533]: time="2025-07-06T23:38:54.481966000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:38:54.482048 containerd[1533]: time="2025-07-06T23:38:54.481983440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:38:54.482123 containerd[1533]: time="2025-07-06T23:38:54.482103960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:38:54.482144 containerd[1533]: time="2025-07-06T23:38:54.482127560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:38:54.482161 containerd[1533]: time="2025-07-06T23:38:54.482141600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:38:54.482161 containerd[1533]: time="2025-07-06T23:38:54.482154040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:38:54.482201 containerd[1533]: time="2025-07-06T23:38:54.482164360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:38:54.482201 containerd[1533]: time="2025-07-06T23:38:54.482176760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:38:54.482201 containerd[1533]: time="2025-07-06T23:38:54.482187840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:38:54.482201 containerd[1533]: time="2025-07-06T23:38:54.482198160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:38:54.483422 containerd[1533]: time="2025-07-06T23:38:54.482899640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:38:54.483422 containerd[1533]: time="2025-07-06T23:38:54.483417960Z" level=info msg="Start snapshots syncer"
Jul 6 23:38:54.483670 containerd[1533]: time="2025-07-06T23:38:54.483468360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:38:54.484002 containerd[1533]: time="2025-07-06T23:38:54.483935040Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:38:54.484107 containerd[1533]: time="2025-07-06T23:38:54.484002680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:38:54.484128 containerd[1533]: time="2025-07-06T23:38:54.484110880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:38:54.484354 containerd[1533]: time="2025-07-06T23:38:54.484277120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:38:54.484354 containerd[1533]: time="2025-07-06T23:38:54.484307400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:38:54.484354 containerd[1533]: time="2025-07-06T23:38:54.484322040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:38:54.484354 containerd[1533]: time="2025-07-06T23:38:54.484337920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:38:54.484354 containerd[1533]: time="2025-07-06T23:38:54.484354920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:38:54.484464 containerd[1533]: time="2025-07-06T23:38:54.484370600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:38:54.484464 containerd[1533]: time="2025-07-06T23:38:54.484382720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:38:54.484464 containerd[1533]: time="2025-07-06T23:38:54.484421360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:38:54.484464 containerd[1533]: time="2025-07-06T23:38:54.484434480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:38:54.484464 containerd[1533]: time="2025-07-06T23:38:54.484454440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:38:54.484551 containerd[1533]: time="2025-07-06T23:38:54.484488840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:38:54.484551 containerd[1533]: time="2025-07-06T23:38:54.484507840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:38:54.484551 containerd[1533]: time="2025-07-06T23:38:54.484520600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:38:54.484602 containerd[1533]: time="2025-07-06T23:38:54.484551200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:38:54.484602 containerd[1533]: time="2025-07-06T23:38:54.484563560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:38:54.484602 containerd[1533]: time="2025-07-06T23:38:54.484578080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:38:54.484602 containerd[1533]: time="2025-07-06T23:38:54.484592160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:38:54.484820 containerd[1533]: time="2025-07-06T23:38:54.484670800Z" level=info msg="runtime interface created"
Jul 6 23:38:54.484820 containerd[1533]: time="2025-07-06T23:38:54.484689600Z" level=info msg="created NRI interface"
Jul 6 23:38:54.484820 containerd[1533]: time="2025-07-06T23:38:54.484703600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:38:54.484820 containerd[1533]: time="2025-07-06T23:38:54.484720720Z" level=info msg="Connect containerd service"
Jul 6 23:38:54.484820 containerd[1533]: time="2025-07-06T23:38:54.484754280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:38:54.486441 containerd[1533]: time="2025-07-06T23:38:54.486392560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:38:54.603037 containerd[1533]: time="2025-07-06T23:38:54.602452280Z" level=info msg="Start subscribing containerd event"
Jul 6 23:38:54.603202 containerd[1533]: time="2025-07-06T23:38:54.603179880Z" level=info msg="Start recovering state"
Jul 6 23:38:54.603383 containerd[1533]: time="2025-07-06T23:38:54.602940400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:38:54.603528 containerd[1533]: time="2025-07-06T23:38:54.603465000Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:38:54.603596 containerd[1533]: time="2025-07-06T23:38:54.603579760Z" level=info msg="Start event monitor"
Jul 6 23:38:54.603662 containerd[1533]: time="2025-07-06T23:38:54.603649960Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:38:54.606452 containerd[1533]: time="2025-07-06T23:38:54.604909600Z" level=info msg="Start streaming server"
Jul 6 23:38:54.606452 containerd[1533]: time="2025-07-06T23:38:54.604950120Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:38:54.606452 containerd[1533]: time="2025-07-06T23:38:54.604959880Z" level=info msg="runtime interface starting up..."
Jul 6 23:38:54.606452 containerd[1533]: time="2025-07-06T23:38:54.604966320Z" level=info msg="starting plugins..."
Jul 6 23:38:54.606452 containerd[1533]: time="2025-07-06T23:38:54.604991960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:38:54.605260 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:38:54.606737 containerd[1533]: time="2025-07-06T23:38:54.606710880Z" level=info msg="containerd successfully booted in 0.148578s"
Jul 6 23:38:54.649220 tar[1520]: linux-arm64/README.md
Jul 6 23:38:54.669326 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:38:55.319079 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:38:55.340571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:38:55.343382 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:38:55.365059 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:38:55.365077 systemd-networkd[1429]: eth0: Gained IPv6LL
Jul 6 23:38:55.365345 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:38:55.367974 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:38:55.371759 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:38:55.373404 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:38:55.385234 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:38:55.387677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:38:55.389950 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:38:55.391654 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:38:55.398418 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:38:55.402957 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:38:55.404250 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:38:55.417311 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:38:55.419014 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:38:55.419270 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:38:55.421265 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:38:56.047848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:56.049796 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:38:56.051999 systemd[1]: Startup finished in 2.198s (kernel) + 5.575s (initrd) + 3.889s (userspace) = 11.663s.
Jul 6 23:38:56.062369 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:38:56.512326 kubelet[1636]: E0706 23:38:56.511930 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:38:56.514866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:38:56.515027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:38:56.515354 systemd[1]: kubelet.service: Consumed 823ms CPU time, 257.8M memory peak.
Jul 6 23:38:59.612780 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:38:59.614472 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898).
Jul 6 23:38:59.697115 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:38:59.699874 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:38:59.706857 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:38:59.707857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:38:59.713647 systemd-logind[1504]: New session 1 of user core.
Jul 6 23:38:59.737008 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:38:59.739609 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:38:59.759238 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:38:59.762721 systemd-logind[1504]: New session c1 of user core.
Jul 6 23:38:59.885380 systemd[1653]: Queued start job for default target default.target.
Jul 6 23:38:59.896939 systemd[1653]: Created slice app.slice - User Application Slice.
Jul 6 23:38:59.896966 systemd[1653]: Reached target paths.target - Paths.
Jul 6 23:38:59.897007 systemd[1653]: Reached target timers.target - Timers.
Jul 6 23:38:59.898512 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:38:59.912474 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:38:59.912603 systemd[1653]: Reached target sockets.target - Sockets.
Jul 6 23:38:59.912650 systemd[1653]: Reached target basic.target - Basic System.
Jul 6 23:38:59.912679 systemd[1653]: Reached target default.target - Main User Target.
Jul 6 23:38:59.912707 systemd[1653]: Startup finished in 142ms.
Jul 6 23:38:59.912945 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:38:59.914475 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:38:59.980998 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:54914.service - OpenSSH per-connection server daemon (10.0.0.1:54914).
Jul 6 23:39:00.036933 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 54914 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:00.038976 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:00.044637 systemd-logind[1504]: New session 2 of user core.
Jul 6 23:39:00.054309 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:39:00.107340 sshd[1666]: Connection closed by 10.0.0.1 port 54914
Jul 6 23:39:00.107794 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:00.115438 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:54914.service: Deactivated successfully.
Jul 6 23:39:00.118466 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:39:00.119405 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:39:00.122820 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:54924.service - OpenSSH per-connection server daemon (10.0.0.1:54924).
Jul 6 23:39:00.123609 systemd-logind[1504]: Removed session 2.
Jul 6 23:39:00.186144 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 54924 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:00.188963 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:00.193994 systemd-logind[1504]: New session 3 of user core.
Jul 6 23:39:00.210158 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:39:00.261938 sshd[1675]: Connection closed by 10.0.0.1 port 54924
Jul 6 23:39:00.262030 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:00.276420 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:54924.service: Deactivated successfully.
Jul 6 23:39:00.281430 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:39:00.282544 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:39:00.287403 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:54940.service - OpenSSH per-connection server daemon (10.0.0.1:54940).
Jul 6 23:39:00.288046 systemd-logind[1504]: Removed session 3.
Jul 6 23:39:00.347259 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 54940 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:00.348716 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:00.353892 systemd-logind[1504]: New session 4 of user core.
Jul 6 23:39:00.371180 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:39:00.428462 sshd[1683]: Connection closed by 10.0.0.1 port 54940
Jul 6 23:39:00.429041 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:00.442485 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:54940.service: Deactivated successfully.
Jul 6 23:39:00.445771 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:39:00.447436 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:39:00.450014 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944).
Jul 6 23:39:00.451198 systemd-logind[1504]: Removed session 4.
Jul 6 23:39:00.508681 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:00.512541 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:00.517333 systemd-logind[1504]: New session 5 of user core.
Jul 6 23:39:00.532109 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:39:00.595887 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:39:00.596209 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:39:00.613882 sudo[1692]: pam_unix(sudo:session): session closed for user root
Jul 6 23:39:00.616151 sshd[1691]: Connection closed by 10.0.0.1 port 54944
Jul 6 23:39:00.617056 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:00.634330 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:54944.service: Deactivated successfully.
Jul 6 23:39:00.637554 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:39:00.639011 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:39:00.641306 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:54960.service - OpenSSH per-connection server daemon (10.0.0.1:54960).
Jul 6 23:39:00.645503 systemd-logind[1504]: Removed session 5.
Jul 6 23:39:00.706626 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 54960 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:00.708051 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:00.713027 systemd-logind[1504]: New session 6 of user core.
Jul 6 23:39:00.728105 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:39:00.784451 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:39:00.785150 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:39:00.900677 sudo[1702]: pam_unix(sudo:session): session closed for user root
Jul 6 23:39:00.906669 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:39:00.906972 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:39:00.917894 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:39:00.956071 augenrules[1724]: No rules
Jul 6 23:39:00.957623 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:39:00.958957 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:39:00.960989 sudo[1701]: pam_unix(sudo:session): session closed for user root
Jul 6 23:39:00.967337 sshd[1700]: Connection closed by 10.0.0.1 port 54960
Jul 6 23:39:00.967187 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:00.974067 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:54960.service: Deactivated successfully.
Jul 6 23:39:00.975983 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:39:00.976824 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:39:00.984270 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:54974.service - OpenSSH per-connection server daemon (10.0.0.1:54974).
Jul 6 23:39:00.985443 systemd-logind[1504]: Removed session 6.
Jul 6 23:39:01.036049 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:39:01.037623 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:01.043385 systemd-logind[1504]: New session 7 of user core.
Jul 6 23:39:01.052128 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:39:01.105123 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:39:01.105735 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:39:01.483084 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:39:01.498299 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:39:01.802988 dockerd[1756]: time="2025-07-06T23:39:01.802607925Z" level=info msg="Starting up"
Jul 6 23:39:01.804422 dockerd[1756]: time="2025-07-06T23:39:01.804393753Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:39:01.914783 systemd[1]: var-lib-docker-metacopy\x2dcheck1902069268-merged.mount: Deactivated successfully.
Jul 6 23:39:01.925362 dockerd[1756]: time="2025-07-06T23:39:01.925315011Z" level=info msg="Loading containers: start."
Jul 6 23:39:01.934938 kernel: Initializing XFRM netlink socket
Jul 6 23:39:02.149974 systemd-networkd[1429]: docker0: Link UP
Jul 6 23:39:02.155717 dockerd[1756]: time="2025-07-06T23:39:02.155622413Z" level=info msg="Loading containers: done."
Jul 6 23:39:02.169589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2961353538-merged.mount: Deactivated successfully.
Jul 6 23:39:02.174023 dockerd[1756]: time="2025-07-06T23:39:02.173977723Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:39:02.174211 dockerd[1756]: time="2025-07-06T23:39:02.174193016Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:39:02.174387 dockerd[1756]: time="2025-07-06T23:39:02.174368773Z" level=info msg="Initializing buildkit"
Jul 6 23:39:02.197001 dockerd[1756]: time="2025-07-06T23:39:02.196958103Z" level=info msg="Completed buildkit initialization"
Jul 6 23:39:02.203679 dockerd[1756]: time="2025-07-06T23:39:02.203607484Z" level=info msg="Daemon has completed initialization"
Jul 6 23:39:02.204042 dockerd[1756]: time="2025-07-06T23:39:02.203843887Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:39:02.204326 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:39:02.844353 containerd[1533]: time="2025-07-06T23:39:02.844299710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 6 23:39:03.481798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288985265.mount: Deactivated successfully.
Jul 6 23:39:04.658023 containerd[1533]: time="2025-07-06T23:39:04.657308708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:04.658023 containerd[1533]: time="2025-07-06T23:39:04.657730690Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 6 23:39:04.658616 containerd[1533]: time="2025-07-06T23:39:04.658566511Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:04.661424 containerd[1533]: time="2025-07-06T23:39:04.660909960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:04.661970 containerd[1533]: time="2025-07-06T23:39:04.661945650Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.817603264s"
Jul 6 23:39:04.662035 containerd[1533]: time="2025-07-06T23:39:04.661979603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 6 23:39:04.662736 containerd[1533]: time="2025-07-06T23:39:04.662676289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 6 23:39:05.680374 containerd[1533]: time="2025-07-06T23:39:05.680319465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:05.681183 containerd[1533]: time="2025-07-06T23:39:05.681148978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 6 23:39:05.682481 containerd[1533]: time="2025-07-06T23:39:05.682446724Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:05.685429 containerd[1533]: time="2025-07-06T23:39:05.685389943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:05.685980 containerd[1533]: time="2025-07-06T23:39:05.685949114Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.023244434s"
Jul 6 23:39:05.686048 containerd[1533]: time="2025-07-06T23:39:05.685991022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 6 23:39:05.686818 containerd[1533]: time="2025-07-06T23:39:05.686450478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 6 23:39:06.684457 containerd[1533]: time="2025-07-06T23:39:06.684400583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:06.685026 containerd[1533]: time="2025-07-06T23:39:06.684995570Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 6 23:39:06.685991 containerd[1533]: time="2025-07-06T23:39:06.685940061Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:06.689493 containerd[1533]: time="2025-07-06T23:39:06.689065028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:06.690042 containerd[1533]: time="2025-07-06T23:39:06.690015013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.003529073s"
Jul 6 23:39:06.690133 containerd[1533]: time="2025-07-06T23:39:06.690119193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 6 23:39:06.690633 containerd[1533]: time="2025-07-06T23:39:06.690602931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 6 23:39:06.765390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:39:06.766824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:39:06.904941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:39:06.909357 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:39:06.946143 kubelet[2038]: E0706 23:39:06.945922 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:39:06.949227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:39:06.949353 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:39:06.949654 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.5M memory peak.
Jul 6 23:39:07.751095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937431164.mount: Deactivated successfully.
Jul 6 23:39:08.169218 containerd[1533]: time="2025-07-06T23:39:08.169085914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:08.170232 containerd[1533]: time="2025-07-06T23:39:08.170200491Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 6 23:39:08.171446 containerd[1533]: time="2025-07-06T23:39:08.171385478Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:08.173395 containerd[1533]: time="2025-07-06T23:39:08.173332109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:08.173902 containerd[1533]: time="2025-07-06T23:39:08.173802484Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.483054148s"
Jul 6 23:39:08.173902 containerd[1533]: time="2025-07-06T23:39:08.173835130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 6 23:39:08.174341 containerd[1533]: time="2025-07-06T23:39:08.174270804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:39:08.716002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556006065.mount: Deactivated successfully.
Jul 6 23:39:09.322497 containerd[1533]: time="2025-07-06T23:39:09.322433198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:09.323035 containerd[1533]: time="2025-07-06T23:39:09.322995742Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 6 23:39:09.324633 containerd[1533]: time="2025-07-06T23:39:09.324582104Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:09.330935 containerd[1533]: time="2025-07-06T23:39:09.330606441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:09.331902 containerd[1533]: time="2025-07-06T23:39:09.331821681Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.157516024s"
Jul 6 23:39:09.332010 containerd[1533]: time="2025-07-06T23:39:09.331993009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:39:09.332502 containerd[1533]: time="2025-07-06T23:39:09.332425576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:39:09.745604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761049647.mount: Deactivated successfully.
Jul 6 23:39:09.750924 containerd[1533]: time="2025-07-06T23:39:09.750675591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:39:09.757782 containerd[1533]: time="2025-07-06T23:39:09.757733814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:39:09.758808 containerd[1533]: time="2025-07-06T23:39:09.758773296Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:39:09.760935 containerd[1533]: time="2025-07-06T23:39:09.760900016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:39:09.761642 containerd[1533]: time="2025-07-06T23:39:09.761618788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.167442ms"
Jul 6 23:39:09.761696 containerd[1533]: time="2025-07-06T23:39:09.761649308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:39:09.762364 containerd[1533]: time="2025-07-06T23:39:09.762340378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 6 23:39:10.276787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195321542.mount: Deactivated successfully.
Jul 6 23:39:11.712336 containerd[1533]: time="2025-07-06T23:39:11.712237822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:11.713042 containerd[1533]: time="2025-07-06T23:39:11.713012895Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 6 23:39:11.713979 containerd[1533]: time="2025-07-06T23:39:11.713929160Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:11.716866 containerd[1533]: time="2025-07-06T23:39:11.716812836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:11.719484 containerd[1533]: time="2025-07-06T23:39:11.719169411Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.956791205s"
Jul 6 23:39:11.719484 containerd[1533]: time="2025-07-06T23:39:11.719222720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 6 23:39:17.199756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:39:17.201325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:39:17.344653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:39:17.362256 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:39:17.400588 kubelet[2194]: E0706 23:39:17.400525 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:39:17.403228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:39:17.403365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:39:17.403906 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.2M memory peak.
Jul 6 23:39:17.833829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:39:17.833997 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.2M memory peak.
Jul 6 23:39:17.836776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:39:17.858614 systemd[1]: Reload requested from client PID 2210 ('systemctl') (unit session-7.scope)...
Jul 6 23:39:17.858634 systemd[1]: Reloading...
Jul 6 23:39:17.939915 zram_generator::config[2256]: No configuration found.
Jul 6 23:39:18.256764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:39:18.341736 systemd[1]: Reloading finished in 482 ms.
Jul 6 23:39:18.394387 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:39:18.394471 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:39:18.394755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:39:18.394800 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.2M memory peak.
Jul 6 23:39:18.396386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:39:18.518530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:39:18.522565 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:39:18.558271 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:39:18.558271 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:39:18.558271 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:39:18.558671 kubelet[2298]: I0706 23:39:18.558322 2298 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:39:19.208136 kubelet[2298]: I0706 23:39:19.208078 2298 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 6 23:39:19.208136 kubelet[2298]: I0706 23:39:19.208116 2298 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:39:19.208412 kubelet[2298]: I0706 23:39:19.208385 2298 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 6 23:39:19.255364 kubelet[2298]: E0706 23:39:19.255320 2298 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:39:19.256919 kubelet[2298]: I0706 23:39:19.256852 2298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:39:19.263788 kubelet[2298]: I0706 23:39:19.263739 2298 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:39:19.267901 kubelet[2298]: I0706 23:39:19.266721 2298 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:39:19.268767 kubelet[2298]: I0706 23:39:19.268719 2298 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:39:19.269167 kubelet[2298]: I0706 23:39:19.268769 2298 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:39:19.269276 kubelet[2298]: I0706 23:39:19.269243 2298 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:39:19.269276 kubelet[2298]: I0706 23:39:19.269257 2298 container_manager_linux.go:304] "Creating device plugin manager"
Jul 6 23:39:19.269541 kubelet[2298]: I0706 23:39:19.269512 2298 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:39:19.272517 kubelet[2298]: I0706 23:39:19.272484 2298 kubelet.go:446] "Attempting to sync node with API server"
Jul 6 23:39:19.272517 kubelet[2298]: I0706 23:39:19.272510 2298 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:39:19.273566 kubelet[2298]: I0706 23:39:19.273530 2298 kubelet.go:352] "Adding apiserver pod source"
Jul 6 23:39:19.273566 kubelet[2298]: I0706 23:39:19.273560 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:39:19.282387 kubelet[2298]: I0706 23:39:19.282350 2298 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:39:19.283219 kubelet[2298]: I0706 23:39:19.283098 2298 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:39:19.283219 kubelet[2298]: W0706 23:39:19.283115 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused
Jul 6 23:39:19.283219 kubelet[2298]: E0706 23:39:19.283177 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:39:19.283332 kubelet[2298]: W0706 23:39:19.283210 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:39:19.284020 kubelet[2298]: W0706 23:39:19.283979 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jul 6 23:39:19.284061 kubelet[2298]: E0706 23:39:19.284029 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:39:19.284243 kubelet[2298]: I0706 23:39:19.284170 2298 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:39:19.284243 kubelet[2298]: I0706 23:39:19.284205 2298 server.go:1287] "Started kubelet" Jul 6 23:39:19.285107 kubelet[2298]: I0706 23:39:19.285061 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:39:19.285223 kubelet[2298]: I0706 23:39:19.285196 2298 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:39:19.285370 kubelet[2298]: I0706 23:39:19.285346 2298 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:39:19.286065 kubelet[2298]: I0706 23:39:19.286007 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:39:19.286184 kubelet[2298]: I0706 23:39:19.286158 2298 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:39:19.286735 kubelet[2298]: I0706 23:39:19.286700 2298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:39:19.287238 kubelet[2298]: E0706 23:39:19.287211 2298 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Jul 6 23:39:19.287290 kubelet[2298]: I0706 23:39:19.287252 2298 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:39:19.287435 kubelet[2298]: I0706 23:39:19.287410 2298 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:39:19.287496 kubelet[2298]: I0706 23:39:19.287481 2298 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:39:19.289418 kubelet[2298]: W0706 23:39:19.287786 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jul 6 23:39:19.289418 kubelet[2298]: E0706 23:39:19.287835 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:39:19.289418 kubelet[2298]: I0706 23:39:19.288485 2298 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:39:19.289418 kubelet[2298]: I0706 23:39:19.288573 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:39:19.289418 kubelet[2298]: E0706 23:39:19.289251 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" Jul 6 23:39:19.289418 kubelet[2298]: E0706 23:39:19.289329 2298 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:39:19.289418 kubelet[2298]: I0706 23:39:19.289340 2298 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:39:19.290156 kubelet[2298]: E0706 23:39:19.289917 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcde4049f22d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:39:19.284187865 +0000 UTC m=+0.758450755,LastTimestamp:2025-07-06 23:39:19.284187865 +0000 UTC m=+0.758450755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:39:19.302255 kubelet[2298]: I0706 23:39:19.299045 2298 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:39:19.302255 kubelet[2298]: I0706 23:39:19.299065 2298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:39:19.302255 kubelet[2298]: I0706 23:39:19.299086 2298 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:39:19.308129 kubelet[2298]: I0706 23:39:19.307935 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:39:19.309154 kubelet[2298]: I0706 23:39:19.309130 2298 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:39:19.309428 kubelet[2298]: I0706 23:39:19.309413 2298 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:39:19.309503 kubelet[2298]: I0706 23:39:19.309491 2298 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:39:19.309545 kubelet[2298]: I0706 23:39:19.309536 2298 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:39:19.309650 kubelet[2298]: E0706 23:39:19.309632 2298 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:39:19.388040 kubelet[2298]: E0706 23:39:19.388001 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:39:19.401827 kubelet[2298]: W0706 23:39:19.401774 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jul 6 23:39:19.402011 kubelet[2298]: E0706 23:39:19.401970 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:39:19.410340 kubelet[2298]: E0706 23:39:19.410315 2298 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:39:19.417370 kubelet[2298]: I0706 23:39:19.417330 2298 policy_none.go:49] "None policy: Start" Jul 6 23:39:19.417370 kubelet[2298]: I0706 23:39:19.417364 2298 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:39:19.417370 kubelet[2298]: I0706 
23:39:19.417377 2298 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:39:19.436620 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:39:19.457241 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:39:19.460368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:39:19.476889 kubelet[2298]: I0706 23:39:19.476836 2298 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:39:19.477095 kubelet[2298]: I0706 23:39:19.477074 2298 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:39:19.477143 kubelet[2298]: I0706 23:39:19.477091 2298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:39:19.477540 kubelet[2298]: I0706 23:39:19.477353 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:39:19.478797 kubelet[2298]: E0706 23:39:19.478778 2298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:39:19.478956 kubelet[2298]: E0706 23:39:19.478929 2298 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:39:19.490542 kubelet[2298]: E0706 23:39:19.490495 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Jul 6 23:39:19.578526 kubelet[2298]: I0706 23:39:19.578483 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:39:19.579428 kubelet[2298]: E0706 23:39:19.579401 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jul 6 23:39:19.618060 systemd[1]: Created slice kubepods-burstable-pod848828fa0987d47d118737fcbe764c76.slice - libcontainer container kubepods-burstable-pod848828fa0987d47d118737fcbe764c76.slice. 
Jul 6 23:39:19.632539 kubelet[2298]: E0706 23:39:19.632435 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcde4049f22d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:39:19.284187865 +0000 UTC m=+0.758450755,LastTimestamp:2025-07-06 23:39:19.284187865 +0000 UTC m=+0.758450755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:39:19.635136 kubelet[2298]: E0706 23:39:19.635063 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:19.637972 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 6 23:39:19.656190 kubelet[2298]: E0706 23:39:19.656091 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:19.658424 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 6 23:39:19.660052 kubelet[2298]: E0706 23:39:19.660019 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:19.689514 kubelet[2298]: I0706 23:39:19.689308 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:19.689514 kubelet[2298]: I0706 23:39:19.689354 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:19.689514 kubelet[2298]: I0706 23:39:19.689373 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:19.689514 kubelet[2298]: I0706 23:39:19.689392 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:19.689514 kubelet[2298]: I0706 23:39:19.689411 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:19.689699 kubelet[2298]: I0706 23:39:19.689427 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:19.689699 kubelet[2298]: I0706 23:39:19.689442 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:19.689699 kubelet[2298]: I0706 23:39:19.689457 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:19.689699 kubelet[2298]: I0706 23:39:19.689470 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:19.780666 kubelet[2298]: I0706 23:39:19.780560 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:39:19.781531 kubelet[2298]: E0706 23:39:19.781496 
2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jul 6 23:39:19.890964 kubelet[2298]: E0706 23:39:19.890909 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Jul 6 23:39:19.937078 containerd[1533]: time="2025-07-06T23:39:19.937012520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:848828fa0987d47d118737fcbe764c76,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:19.955565 containerd[1533]: time="2025-07-06T23:39:19.955527981Z" level=info msg="connecting to shim ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd" address="unix:///run/containerd/s/4493664c7e7f48d5396484aabe143607d87034451a1e08eea33a0e5f499b330b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:19.957508 containerd[1533]: time="2025-07-06T23:39:19.957468718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:19.961647 containerd[1533]: time="2025-07-06T23:39:19.961549940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:19.980042 systemd[1]: Started cri-containerd-ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd.scope - libcontainer container ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd. 
Jul 6 23:39:19.984603 containerd[1533]: time="2025-07-06T23:39:19.984483770Z" level=info msg="connecting to shim 754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a" address="unix:///run/containerd/s/86b87b8c8f4c07d410fcd5e419c2f0e6f7b29ece9225632dc8c671f2a306737b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:20.002799 containerd[1533]: time="2025-07-06T23:39:20.002729681Z" level=info msg="connecting to shim 97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358" address="unix:///run/containerd/s/229091ae77ee31338e186ee8f8ba2d7e4ef777b1ac17ec08a724ede032347c6d" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:20.022078 systemd[1]: Started cri-containerd-754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a.scope - libcontainer container 754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a. Jul 6 23:39:20.029575 systemd[1]: Started cri-containerd-97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358.scope - libcontainer container 97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358. 
Jul 6 23:39:20.034013 containerd[1533]: time="2025-07-06T23:39:20.033906244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:848828fa0987d47d118737fcbe764c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd\"" Jul 6 23:39:20.039549 containerd[1533]: time="2025-07-06T23:39:20.039515947Z" level=info msg="CreateContainer within sandbox \"ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:39:20.046913 containerd[1533]: time="2025-07-06T23:39:20.046868345Z" level=info msg="Container e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:20.054750 containerd[1533]: time="2025-07-06T23:39:20.054713013Z" level=info msg="CreateContainer within sandbox \"ceb3f646ac877f9ef3f15b599a11c545d788eb3ed14a834409bf1718b90d3efd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc\"" Jul 6 23:39:20.055774 containerd[1533]: time="2025-07-06T23:39:20.055357394Z" level=info msg="StartContainer for \"e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc\"" Jul 6 23:39:20.057898 containerd[1533]: time="2025-07-06T23:39:20.057832123Z" level=info msg="connecting to shim e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc" address="unix:///run/containerd/s/4493664c7e7f48d5396484aabe143607d87034451a1e08eea33a0e5f499b330b" protocol=ttrpc version=3 Jul 6 23:39:20.068144 containerd[1533]: time="2025-07-06T23:39:20.068107733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a\"" Jul 6 23:39:20.071393 containerd[1533]: 
time="2025-07-06T23:39:20.071353797Z" level=info msg="CreateContainer within sandbox \"754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:39:20.078100 containerd[1533]: time="2025-07-06T23:39:20.078060291Z" level=info msg="Container 775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:20.079114 systemd[1]: Started cri-containerd-e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc.scope - libcontainer container e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc. Jul 6 23:39:20.084769 containerd[1533]: time="2025-07-06T23:39:20.084735257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358\"" Jul 6 23:39:20.087350 containerd[1533]: time="2025-07-06T23:39:20.087320915Z" level=info msg="CreateContainer within sandbox \"97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:39:20.087671 containerd[1533]: time="2025-07-06T23:39:20.087480558Z" level=info msg="CreateContainer within sandbox \"754ef9415f30e943820e3ce5840794fb639ffc4bac7b15c1300a958ac60f937a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e\"" Jul 6 23:39:20.088190 containerd[1533]: time="2025-07-06T23:39:20.088151741Z" level=info msg="StartContainer for \"775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e\"" Jul 6 23:39:20.089336 containerd[1533]: time="2025-07-06T23:39:20.089301332Z" level=info msg="connecting to shim 775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e" 
address="unix:///run/containerd/s/86b87b8c8f4c07d410fcd5e419c2f0e6f7b29ece9225632dc8c671f2a306737b" protocol=ttrpc version=3 Jul 6 23:39:20.097139 containerd[1533]: time="2025-07-06T23:39:20.097104095Z" level=info msg="Container d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:20.107799 containerd[1533]: time="2025-07-06T23:39:20.107768858Z" level=info msg="CreateContainer within sandbox \"97d0a0e0029608e0c646f4a5aa81ace1007a35cdad2ecaaba76b2835cc730358\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55\"" Jul 6 23:39:20.108749 containerd[1533]: time="2025-07-06T23:39:20.108723953Z" level=info msg="StartContainer for \"d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55\"" Jul 6 23:39:20.112057 systemd[1]: Started cri-containerd-775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e.scope - libcontainer container 775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e. Jul 6 23:39:20.113526 containerd[1533]: time="2025-07-06T23:39:20.113503191Z" level=info msg="connecting to shim d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55" address="unix:///run/containerd/s/229091ae77ee31338e186ee8f8ba2d7e4ef777b1ac17ec08a724ede032347c6d" protocol=ttrpc version=3 Jul 6 23:39:20.131228 containerd[1533]: time="2025-07-06T23:39:20.131182517Z" level=info msg="StartContainer for \"e1874d3ffcd67b8edb17e9a3824e3611242f529c11b3b3e9b731119d6ae124cc\" returns successfully" Jul 6 23:39:20.146033 systemd[1]: Started cri-containerd-d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55.scope - libcontainer container d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55. 
Jul 6 23:39:20.185307 kubelet[2298]: I0706 23:39:20.184955 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:39:20.185307 kubelet[2298]: E0706 23:39:20.185271 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jul 6 23:39:20.188799 containerd[1533]: time="2025-07-06T23:39:20.188722752Z" level=info msg="StartContainer for \"775a5cc77415e9481d4ad2f08c50df508daccf310a8cc5d05152bf25b7a1c85e\" returns successfully" Jul 6 23:39:20.215268 containerd[1533]: time="2025-07-06T23:39:20.215234330Z" level=info msg="StartContainer for \"d6288230815d573ab5d6fdbcb2d20bbea0fbccd7f15151e89ab2163a9731be55\" returns successfully" Jul 6 23:39:20.318328 kubelet[2298]: E0706 23:39:20.318228 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:20.321913 kubelet[2298]: E0706 23:39:20.321835 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:20.325836 kubelet[2298]: E0706 23:39:20.325810 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:20.987480 kubelet[2298]: I0706 23:39:20.987172 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:39:21.327619 kubelet[2298]: E0706 23:39:21.327499 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:39:21.328654 kubelet[2298]: E0706 23:39:21.328629 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 6 23:39:21.804338 kubelet[2298]: E0706 23:39:21.804231 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:39:21.955554 kubelet[2298]: I0706 23:39:21.953864 2298 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:39:21.955554 kubelet[2298]: E0706 23:39:21.953911 2298 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:39:21.989855 kubelet[2298]: I0706 23:39:21.989818 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:22.000497 kubelet[2298]: E0706 23:39:22.000447 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:22.000497 kubelet[2298]: I0706 23:39:22.000483 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:22.006085 kubelet[2298]: E0706 23:39:22.005893 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:22.006085 kubelet[2298]: I0706 23:39:22.005927 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:22.008173 kubelet[2298]: E0706 23:39:22.008141 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:22.283411 kubelet[2298]: I0706 23:39:22.283271 2298 apiserver.go:52] "Watching apiserver" Jul 6 
23:39:22.288544 kubelet[2298]: I0706 23:39:22.288507 2298 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:39:23.497291 kubelet[2298]: I0706 23:39:23.497253 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:23.990669 systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-7.scope)... Jul 6 23:39:23.990686 systemd[1]: Reloading... Jul 6 23:39:23.992084 kubelet[2298]: I0706 23:39:23.991681 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.063927 zram_generator::config[2623]: No configuration found. Jul 6 23:39:24.133443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:39:24.231429 systemd[1]: Reloading finished in 240 ms. Jul 6 23:39:24.258644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:39:24.279874 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:39:24.280131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:39:24.280182 systemd[1]: kubelet.service: Consumed 1.173s CPU time, 128.1M memory peak. Jul 6 23:39:24.282405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:39:24.419914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:39:24.433357 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:39:24.477134 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:39:24.477134 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:39:24.477134 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:39:24.477581 kubelet[2659]: I0706 23:39:24.477253 2659 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:39:24.485589 kubelet[2659]: I0706 23:39:24.485522 2659 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:39:24.485589 kubelet[2659]: I0706 23:39:24.485558 2659 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:39:24.485962 kubelet[2659]: I0706 23:39:24.485918 2659 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:39:24.487486 kubelet[2659]: I0706 23:39:24.487461 2659 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:39:24.489800 kubelet[2659]: I0706 23:39:24.489762 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:39:24.493488 kubelet[2659]: I0706 23:39:24.493468 2659 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:39:24.502303 kubelet[2659]: I0706 23:39:24.502269 2659 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:39:24.502498 kubelet[2659]: I0706 23:39:24.502466 2659 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:39:24.502677 kubelet[2659]: I0706 23:39:24.502498 2659 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:39:24.502773 kubelet[2659]: I0706 23:39:24.502680 2659 topology_manager.go:138] "Creating topology manager with none policy" Jul 
6 23:39:24.502773 kubelet[2659]: I0706 23:39:24.502689 2659 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:39:24.502773 kubelet[2659]: I0706 23:39:24.502732 2659 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:39:24.502872 kubelet[2659]: I0706 23:39:24.502863 2659 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:39:24.502923 kubelet[2659]: I0706 23:39:24.502893 2659 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:39:24.502923 kubelet[2659]: I0706 23:39:24.502913 2659 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:39:24.502923 kubelet[2659]: I0706 23:39:24.502922 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:39:24.504020 kubelet[2659]: I0706 23:39:24.503944 2659 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:39:24.504447 kubelet[2659]: I0706 23:39:24.504425 2659 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:39:24.505525 kubelet[2659]: I0706 23:39:24.504876 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:39:24.505525 kubelet[2659]: I0706 23:39:24.504924 2659 server.go:1287] "Started kubelet" Jul 6 23:39:24.507743 kubelet[2659]: I0706 23:39:24.506411 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:39:24.509353 kubelet[2659]: I0706 23:39:24.508630 2659 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:39:24.510842 kubelet[2659]: I0706 23:39:24.509773 2659 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:39:24.510949 kubelet[2659]: I0706 23:39:24.510876 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:39:24.511534 kubelet[2659]: I0706 23:39:24.511130 2659 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:39:24.511534 kubelet[2659]: I0706 23:39:24.511339 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:39:24.512284 kubelet[2659]: I0706 23:39:24.512112 2659 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:39:24.512353 kubelet[2659]: I0706 23:39:24.512315 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:39:24.513699 kubelet[2659]: I0706 23:39:24.512975 2659 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:39:24.514972 kubelet[2659]: I0706 23:39:24.514214 2659 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:39:24.516582 kubelet[2659]: E0706 23:39:24.516086 2659 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:39:24.525847 kubelet[2659]: E0706 23:39:24.525799 2659 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:39:24.527909 kubelet[2659]: I0706 23:39:24.526241 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:39:24.528519 kubelet[2659]: I0706 23:39:24.528483 2659 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:39:24.528519 kubelet[2659]: I0706 23:39:24.528509 2659 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:39:24.534926 kubelet[2659]: I0706 23:39:24.534741 2659 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:39:24.534926 kubelet[2659]: I0706 23:39:24.534770 2659 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:39:24.534926 kubelet[2659]: I0706 23:39:24.534787 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:39:24.534926 kubelet[2659]: I0706 23:39:24.534793 2659 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:39:24.534926 kubelet[2659]: E0706 23:39:24.534841 2659 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:39:24.569230 kubelet[2659]: I0706 23:39:24.569202 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569378 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569402 2659 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569600 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569613 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569634 2659 policy_none.go:49] "None policy: Start" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569642 2659 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569657 2659 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:39:24.569913 kubelet[2659]: I0706 23:39:24.569753 2659 state_mem.go:75] "Updated machine memory state" Jul 6 23:39:24.573769 kubelet[2659]: I0706 23:39:24.573739 2659 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:39:24.573944 kubelet[2659]: I0706 23:39:24.573928 
2659 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:39:24.573991 kubelet[2659]: I0706 23:39:24.573957 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:39:24.574529 kubelet[2659]: I0706 23:39:24.574503 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:39:24.575198 kubelet[2659]: E0706 23:39:24.575174 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:39:24.635548 kubelet[2659]: I0706 23:39:24.635511 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:24.635689 kubelet[2659]: I0706 23:39:24.635575 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.635689 kubelet[2659]: I0706 23:39:24.635516 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:24.641467 kubelet[2659]: E0706 23:39:24.641434 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:24.641759 kubelet[2659]: E0706 23:39:24.641716 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.676228 kubelet[2659]: I0706 23:39:24.676037 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:39:24.683902 kubelet[2659]: I0706 23:39:24.683746 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:39:24.683902 kubelet[2659]: I0706 23:39:24.683829 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:39:24.713942 kubelet[2659]: I0706 23:39:24.713874 2659 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:24.713942 kubelet[2659]: I0706 23:39:24.713948 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:24.714113 kubelet[2659]: I0706 23:39:24.713977 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:24.714113 kubelet[2659]: I0706 23:39:24.713998 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:24.714113 kubelet[2659]: I0706 23:39:24.714013 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.714113 kubelet[2659]: I0706 23:39:24.714030 2659 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.714113 kubelet[2659]: I0706 23:39:24.714055 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848828fa0987d47d118737fcbe764c76-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"848828fa0987d47d118737fcbe764c76\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:24.714220 kubelet[2659]: I0706 23:39:24.714071 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:24.714220 kubelet[2659]: I0706 23:39:24.714085 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:25.504205 kubelet[2659]: I0706 23:39:25.503964 2659 apiserver.go:52] "Watching apiserver" Jul 6 23:39:25.512544 kubelet[2659]: I0706 23:39:25.512396 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:39:25.552668 kubelet[2659]: I0706 23:39:25.552489 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:25.553574 kubelet[2659]: I0706 
23:39:25.552745 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:25.553574 kubelet[2659]: I0706 23:39:25.553496 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:25.561540 kubelet[2659]: E0706 23:39:25.560940 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:39:25.561835 kubelet[2659]: E0706 23:39:25.561801 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:39:25.561908 kubelet[2659]: E0706 23:39:25.561804 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:39:25.581317 kubelet[2659]: I0706 23:39:25.581231 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.580634108 podStartE2EDuration="1.580634108s" podCreationTimestamp="2025-07-06 23:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:39:25.571773861 +0000 UTC m=+1.134928847" watchObservedRunningTime="2025-07-06 23:39:25.580634108 +0000 UTC m=+1.143789094" Jul 6 23:39:25.581510 kubelet[2659]: I0706 23:39:25.581388 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.581380611 podStartE2EDuration="2.581380611s" podCreationTimestamp="2025-07-06 23:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:39:25.580434912 +0000 UTC m=+1.143589898" 
watchObservedRunningTime="2025-07-06 23:39:25.581380611 +0000 UTC m=+1.144535557" Jul 6 23:39:25.600033 kubelet[2659]: I0706 23:39:25.599819 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.599797328 podStartE2EDuration="2.599797328s" podCreationTimestamp="2025-07-06 23:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:39:25.589292957 +0000 UTC m=+1.152447943" watchObservedRunningTime="2025-07-06 23:39:25.599797328 +0000 UTC m=+1.162952314" Jul 6 23:39:30.030919 kubelet[2659]: I0706 23:39:30.030861 2659 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:39:30.031588 containerd[1533]: time="2025-07-06T23:39:30.031499779Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:39:30.031827 kubelet[2659]: I0706 23:39:30.031766 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:39:31.043861 systemd[1]: Created slice kubepods-besteffort-pod4d380c92_6dcc_4ab3_bb86_3ec0ae3dc47a.slice - libcontainer container kubepods-besteffort-pod4d380c92_6dcc_4ab3_bb86_3ec0ae3dc47a.slice. 
Jul 6 23:39:31.054010 kubelet[2659]: I0706 23:39:31.053969 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a-kube-proxy\") pod \"kube-proxy-8tq26\" (UID: \"4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a\") " pod="kube-system/kube-proxy-8tq26" Jul 6 23:39:31.054010 kubelet[2659]: I0706 23:39:31.054006 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a-xtables-lock\") pod \"kube-proxy-8tq26\" (UID: \"4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a\") " pod="kube-system/kube-proxy-8tq26" Jul 6 23:39:31.054010 kubelet[2659]: I0706 23:39:31.054027 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a-lib-modules\") pod \"kube-proxy-8tq26\" (UID: \"4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a\") " pod="kube-system/kube-proxy-8tq26" Jul 6 23:39:31.054398 kubelet[2659]: I0706 23:39:31.054052 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmp4s\" (UniqueName: \"kubernetes.io/projected/4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a-kube-api-access-lmp4s\") pod \"kube-proxy-8tq26\" (UID: \"4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a\") " pod="kube-system/kube-proxy-8tq26" Jul 6 23:39:31.157027 systemd[1]: Created slice kubepods-besteffort-pod153430d7_4beb_46c7_b717_bbd3442eadf2.slice - libcontainer container kubepods-besteffort-pod153430d7_4beb_46c7_b717_bbd3442eadf2.slice. 
Jul 6 23:39:31.254819 kubelet[2659]: I0706 23:39:31.254771 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/153430d7-4beb-46c7-b717-bbd3442eadf2-var-lib-calico\") pod \"tigera-operator-747864d56d-lbxwp\" (UID: \"153430d7-4beb-46c7-b717-bbd3442eadf2\") " pod="tigera-operator/tigera-operator-747864d56d-lbxwp" Jul 6 23:39:31.255000 kubelet[2659]: I0706 23:39:31.254847 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chhxr\" (UniqueName: \"kubernetes.io/projected/153430d7-4beb-46c7-b717-bbd3442eadf2-kube-api-access-chhxr\") pod \"tigera-operator-747864d56d-lbxwp\" (UID: \"153430d7-4beb-46c7-b717-bbd3442eadf2\") " pod="tigera-operator/tigera-operator-747864d56d-lbxwp" Jul 6 23:39:31.360783 containerd[1533]: time="2025-07-06T23:39:31.360673699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tq26,Uid:4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:31.385198 containerd[1533]: time="2025-07-06T23:39:31.384951505Z" level=info msg="connecting to shim 153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c" address="unix:///run/containerd/s/2c61dbc2ff8e3418dd5e49defef89ce6a90768c4a3c24d99f140a1de18b9a63a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:31.416178 systemd[1]: Started cri-containerd-153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c.scope - libcontainer container 153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c. 
Jul 6 23:39:31.437892 containerd[1533]: time="2025-07-06T23:39:31.437846376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tq26,Uid:4d380c92-6dcc-4ab3-bb86-3ec0ae3dc47a,Namespace:kube-system,Attempt:0,} returns sandbox id \"153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c\"" Jul 6 23:39:31.440761 containerd[1533]: time="2025-07-06T23:39:31.440725007Z" level=info msg="CreateContainer within sandbox \"153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:39:31.449174 containerd[1533]: time="2025-07-06T23:39:31.449021859Z" level=info msg="Container fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:31.456039 containerd[1533]: time="2025-07-06T23:39:31.455961675Z" level=info msg="CreateContainer within sandbox \"153531eb33bc1410561f03f3b7449864d55c66f5c751675414294cd93d35365c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b\"" Jul 6 23:39:31.456705 containerd[1533]: time="2025-07-06T23:39:31.456678020Z" level=info msg="StartContainer for \"fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b\"" Jul 6 23:39:31.459667 containerd[1533]: time="2025-07-06T23:39:31.459639069Z" level=info msg="connecting to shim fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b" address="unix:///run/containerd/s/2c61dbc2ff8e3418dd5e49defef89ce6a90768c4a3c24d99f140a1de18b9a63a" protocol=ttrpc version=3 Jul 6 23:39:31.466915 containerd[1533]: time="2025-07-06T23:39:31.466861643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lbxwp,Uid:153430d7-4beb-46c7-b717-bbd3442eadf2,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:39:31.479113 systemd[1]: Started cri-containerd-fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b.scope - 
libcontainer container fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b. Jul 6 23:39:31.491551 containerd[1533]: time="2025-07-06T23:39:31.491419967Z" level=info msg="connecting to shim 3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a" address="unix:///run/containerd/s/1db4325128ca88b5d1ed8f9360c8c35213ef1bb9ff49c74b13ea6595bb7bc0e2" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:31.524692 systemd[1]: Started cri-containerd-3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a.scope - libcontainer container 3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a. Jul 6 23:39:31.541156 containerd[1533]: time="2025-07-06T23:39:31.541106535Z" level=info msg="StartContainer for \"fb7aa973c4c6a2288772034921e5de158f997e2652bd94336ad20bf1ca00c83b\" returns successfully" Jul 6 23:39:31.572361 containerd[1533]: time="2025-07-06T23:39:31.572314949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lbxwp,Uid:153430d7-4beb-46c7-b717-bbd3442eadf2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a\"" Jul 6 23:39:31.574735 containerd[1533]: time="2025-07-06T23:39:31.574705996Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:39:31.577314 kubelet[2659]: I0706 23:39:31.577236 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8tq26" podStartSLOduration=0.577219169 podStartE2EDuration="577.219169ms" podCreationTimestamp="2025-07-06 23:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:39:31.576851389 +0000 UTC m=+7.140006375" watchObservedRunningTime="2025-07-06 23:39:31.577219169 +0000 UTC m=+7.140374115" Jul 6 23:39:32.174727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633386242.mount: Deactivated successfully. 
Jul 6 23:39:32.695732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183671784.mount: Deactivated successfully. Jul 6 23:39:33.174002 containerd[1533]: time="2025-07-06T23:39:33.173957858Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:33.174742 containerd[1533]: time="2025-07-06T23:39:33.174473263Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:39:33.175290 containerd[1533]: time="2025-07-06T23:39:33.175267205Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:33.182659 containerd[1533]: time="2025-07-06T23:39:33.182607924Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:33.183544 containerd[1533]: time="2025-07-06T23:39:33.183511695Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.608772276s" Jul 6 23:39:33.183544 containerd[1533]: time="2025-07-06T23:39:33.183544916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:39:33.188545 containerd[1533]: time="2025-07-06T23:39:33.188509733Z" level=info msg="CreateContainer within sandbox \"3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 
23:39:33.199900 containerd[1533]: time="2025-07-06T23:39:33.198792071Z" level=info msg="Container 18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:33.203576 containerd[1533]: time="2025-07-06T23:39:33.203538390Z" level=info msg="CreateContainer within sandbox \"3a4c4dd7f37f837dee18a3500f86e90b9c87e776ec61e43e4ccf3348c4f8531a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498\"" Jul 6 23:39:33.204062 containerd[1533]: time="2025-07-06T23:39:33.204032862Z" level=info msg="StartContainer for \"18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498\"" Jul 6 23:39:33.205250 containerd[1533]: time="2025-07-06T23:39:33.205215770Z" level=info msg="connecting to shim 18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498" address="unix:///run/containerd/s/1db4325128ca88b5d1ed8f9360c8c35213ef1bb9ff49c74b13ea6595bb7bc0e2" protocol=ttrpc version=3 Jul 6 23:39:33.230455 systemd[1]: Started cri-containerd-18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498.scope - libcontainer container 18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498. 
Jul 6 23:39:33.269255 containerd[1533]: time="2025-07-06T23:39:33.269213932Z" level=info msg="StartContainer for \"18e1a1cf00fe9dbda30bd1ee8ea6e715d27dca19b2b598696c4a19e506f38498\" returns successfully" Jul 6 23:39:34.864352 kubelet[2659]: I0706 23:39:34.864126 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lbxwp" podStartSLOduration=2.251010715 podStartE2EDuration="3.864110955s" podCreationTimestamp="2025-07-06 23:39:31 +0000 UTC" firstStartedPulling="2025-07-06 23:39:31.573791951 +0000 UTC m=+7.136946897" lastFinishedPulling="2025-07-06 23:39:33.186892151 +0000 UTC m=+8.750047137" observedRunningTime="2025-07-06 23:39:33.578406836 +0000 UTC m=+9.141561822" watchObservedRunningTime="2025-07-06 23:39:34.864110955 +0000 UTC m=+10.427265941" Jul 6 23:39:38.679141 sudo[1736]: pam_unix(sudo:session): session closed for user root Jul 6 23:39:38.682472 sshd[1735]: Connection closed by 10.0.0.1 port 54974 Jul 6 23:39:38.683071 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:38.686722 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:54974.service: Deactivated successfully. Jul 6 23:39:38.690384 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:39:38.690764 systemd[1]: session-7.scope: Consumed 8.246s CPU time, 229.7M memory peak. Jul 6 23:39:38.694267 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:39:38.696948 systemd-logind[1504]: Removed session 7. Jul 6 23:39:39.388998 update_engine[1510]: I20250706 23:39:39.388932 1510 update_attempter.cc:509] Updating boot flags... Jul 6 23:39:44.027955 systemd[1]: Created slice kubepods-besteffort-pod65b1ef67_446e_4c4f_b36e_fd93ccea18ed.slice - libcontainer container kubepods-besteffort-pod65b1ef67_446e_4c4f_b36e_fd93ccea18ed.slice. 
Jul 6 23:39:44.040539 kubelet[2659]: I0706 23:39:44.040477 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65b1ef67-446e-4c4f-b36e-fd93ccea18ed-typha-certs\") pod \"calico-typha-55bc7dc6f8-fqwjc\" (UID: \"65b1ef67-446e-4c4f-b36e-fd93ccea18ed\") " pod="calico-system/calico-typha-55bc7dc6f8-fqwjc" Jul 6 23:39:44.040539 kubelet[2659]: I0706 23:39:44.040535 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9cc\" (UniqueName: \"kubernetes.io/projected/65b1ef67-446e-4c4f-b36e-fd93ccea18ed-kube-api-access-gw9cc\") pod \"calico-typha-55bc7dc6f8-fqwjc\" (UID: \"65b1ef67-446e-4c4f-b36e-fd93ccea18ed\") " pod="calico-system/calico-typha-55bc7dc6f8-fqwjc" Jul 6 23:39:44.040938 kubelet[2659]: I0706 23:39:44.040565 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b1ef67-446e-4c4f-b36e-fd93ccea18ed-tigera-ca-bundle\") pod \"calico-typha-55bc7dc6f8-fqwjc\" (UID: \"65b1ef67-446e-4c4f-b36e-fd93ccea18ed\") " pod="calico-system/calico-typha-55bc7dc6f8-fqwjc" Jul 6 23:39:44.333795 containerd[1533]: time="2025-07-06T23:39:44.333213253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bc7dc6f8-fqwjc,Uid:65b1ef67-446e-4c4f-b36e-fd93ccea18ed,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:44.346663 systemd[1]: Created slice kubepods-besteffort-pod017d4531_7340_434b_9272_c3f736a1a6ac.slice - libcontainer container kubepods-besteffort-pod017d4531_7340_434b_9272_c3f736a1a6ac.slice. 
Jul 6 23:39:44.385564 containerd[1533]: time="2025-07-06T23:39:44.385511599Z" level=info msg="connecting to shim 37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e" address="unix:///run/containerd/s/e802dd0b74c6f0f710b679b080c7c34ad548f78f6723abe635d4efaa55764b8f" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:39:44.442858 kubelet[2659]: I0706 23:39:44.442663 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-cni-net-dir\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.442858 kubelet[2659]: I0706 23:39:44.442701 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/017d4531-7340-434b-9272-c3f736a1a6ac-tigera-ca-bundle\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.442858 kubelet[2659]: I0706 23:39:44.442717 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-var-lib-calico\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.442858 kubelet[2659]: I0706 23:39:44.442743 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-policysync\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.442858 kubelet[2659]: I0706 23:39:44.442760 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-var-run-calico\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443154 kubelet[2659]: I0706 23:39:44.442776 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-cni-bin-dir\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443154 kubelet[2659]: I0706 23:39:44.442790 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-cni-log-dir\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443154 kubelet[2659]: I0706 23:39:44.442804 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-flexvol-driver-host\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443154 kubelet[2659]: I0706 23:39:44.442822 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-lib-modules\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443154 kubelet[2659]: I0706 23:39:44.442836 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/017d4531-7340-434b-9272-c3f736a1a6ac-xtables-lock\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443442 kubelet[2659]: I0706 23:39:44.443327 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/017d4531-7340-434b-9272-c3f736a1a6ac-node-certs\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.443442 kubelet[2659]: I0706 23:39:44.443377 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgtf\" (UniqueName: \"kubernetes.io/projected/017d4531-7340-434b-9272-c3f736a1a6ac-kube-api-access-mfgtf\") pod \"calico-node-khs2s\" (UID: \"017d4531-7340-434b-9272-c3f736a1a6ac\") " pod="calico-system/calico-node-khs2s"
Jul 6 23:39:44.444032 systemd[1]: Started cri-containerd-37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e.scope - libcontainer container 37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e.
Jul 6 23:39:44.477972 containerd[1533]: time="2025-07-06T23:39:44.477923936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bc7dc6f8-fqwjc,Uid:65b1ef67-446e-4c4f-b36e-fd93ccea18ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e\""
Jul 6 23:39:44.482295 containerd[1533]: time="2025-07-06T23:39:44.482013251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 6 23:39:44.545449 kubelet[2659]: E0706 23:39:44.545417 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.545449 kubelet[2659]: W0706 23:39:44.545443 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.545449 kubelet[2659]: E0706 23:39:44.545465 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.545833 kubelet[2659]: E0706 23:39:44.545649 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.545833 kubelet[2659]: W0706 23:39:44.545657 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.545833 kubelet[2659]: E0706 23:39:44.545666 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.545833 kubelet[2659]: E0706 23:39:44.545826 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.545978 kubelet[2659]: W0706 23:39:44.545865 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.545978 kubelet[2659]: E0706 23:39:44.545913 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.546080 kubelet[2659]: E0706 23:39:44.546067 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.546080 kubelet[2659]: W0706 23:39:44.546079 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.546274 kubelet[2659]: E0706 23:39:44.546222 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.546274 kubelet[2659]: E0706 23:39:44.546242 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.546274 kubelet[2659]: W0706 23:39:44.546250 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.546388 kubelet[2659]: E0706 23:39:44.546373 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.546526 kubelet[2659]: E0706 23:39:44.546513 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.546526 kubelet[2659]: W0706 23:39:44.546526 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.546654 kubelet[2659]: E0706 23:39:44.546599 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.547016 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.550998 kubelet[2659]: W0706 23:39:44.547370 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.547405 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.549189 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.550998 kubelet[2659]: W0706 23:39:44.549206 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.549222 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.549351 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.550998 kubelet[2659]: W0706 23:39:44.549358 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.549366 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.550998 kubelet[2659]: E0706 23:39:44.549943 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551537 kubelet[2659]: W0706 23:39:44.549954 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.549965 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.550136 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551537 kubelet[2659]: W0706 23:39:44.550144 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.550153 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.550506 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551537 kubelet[2659]: W0706 23:39:44.550519 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.550530 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551537 kubelet[2659]: E0706 23:39:44.551029 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551537 kubelet[2659]: W0706 23:39:44.551042 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551175 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551772 kubelet[2659]: W0706 23:39:44.551182 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551192 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551291 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551772 kubelet[2659]: W0706 23:39:44.551307 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551314 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551933 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.551772 kubelet[2659]: W0706 23:39:44.551947 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551959 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.551772 kubelet[2659]: E0706 23:39:44.551982 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.553687 kubelet[2659]: E0706 23:39:44.553574 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.553687 kubelet[2659]: W0706 23:39:44.553630 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.553687 kubelet[2659]: E0706 23:39:44.553646 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.556439 kubelet[2659]: E0706 23:39:44.556419 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.556439 kubelet[2659]: W0706 23:39:44.556436 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.556527 kubelet[2659]: E0706 23:39:44.556451 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.558897 kubelet[2659]: E0706 23:39:44.558866 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.558897 kubelet[2659]: W0706 23:39:44.558897 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.559033 kubelet[2659]: E0706 23:39:44.558911 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.586478 kubelet[2659]: E0706 23:39:44.586365 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4xsf" podUID="f22b58de-d418-473b-9edf-74d8e58c1351"
Jul 6 23:39:44.633509 kubelet[2659]: E0706 23:39:44.633448 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.633509 kubelet[2659]: W0706 23:39:44.633488 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.633653 kubelet[2659]: E0706 23:39:44.633529 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.633778 kubelet[2659]: E0706 23:39:44.633756 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.633869 kubelet[2659]: W0706 23:39:44.633774 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.633869 kubelet[2659]: E0706 23:39:44.633809 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.634165 kubelet[2659]: E0706 23:39:44.634146 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.634165 kubelet[2659]: W0706 23:39:44.634160 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.634231 kubelet[2659]: E0706 23:39:44.634172 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.636017 kubelet[2659]: E0706 23:39:44.635987 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.636017 kubelet[2659]: W0706 23:39:44.636008 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.636017 kubelet[2659]: E0706 23:39:44.636022 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.636506 kubelet[2659]: E0706 23:39:44.636481 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.636506 kubelet[2659]: W0706 23:39:44.636501 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.636581 kubelet[2659]: E0706 23:39:44.636513 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.636893 kubelet[2659]: E0706 23:39:44.636859 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.636893 kubelet[2659]: W0706 23:39:44.636886 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.636971 kubelet[2659]: E0706 23:39:44.636897 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.637412 kubelet[2659]: E0706 23:39:44.637388 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.637412 kubelet[2659]: W0706 23:39:44.637403 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.637412 kubelet[2659]: E0706 23:39:44.637415 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.637826 kubelet[2659]: E0706 23:39:44.637805 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.637826 kubelet[2659]: W0706 23:39:44.637822 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.637826 kubelet[2659]: E0706 23:39:44.637833 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.638361 kubelet[2659]: E0706 23:39:44.638337 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.638361 kubelet[2659]: W0706 23:39:44.638354 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.638361 kubelet[2659]: E0706 23:39:44.638365 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.639105 kubelet[2659]: E0706 23:39:44.639072 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.639105 kubelet[2659]: W0706 23:39:44.639094 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.639105 kubelet[2659]: E0706 23:39:44.639107 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.639297 kubelet[2659]: E0706 23:39:44.639276 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.639297 kubelet[2659]: W0706 23:39:44.639288 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.639297 kubelet[2659]: E0706 23:39:44.639297 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.639458 kubelet[2659]: E0706 23:39:44.639438 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.639559 kubelet[2659]: W0706 23:39:44.639539 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.639592 kubelet[2659]: E0706 23:39:44.639557 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.639898 kubelet[2659]: E0706 23:39:44.639861 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.639955 kubelet[2659]: W0706 23:39:44.639875 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.639955 kubelet[2659]: E0706 23:39:44.639931 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.640115 kubelet[2659]: E0706 23:39:44.640094 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.640115 kubelet[2659]: W0706 23:39:44.640108 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.640181 kubelet[2659]: E0706 23:39:44.640118 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.640503 kubelet[2659]: E0706 23:39:44.640483 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.640503 kubelet[2659]: W0706 23:39:44.640497 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.640570 kubelet[2659]: E0706 23:39:44.640507 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.640954 kubelet[2659]: E0706 23:39:44.640915 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.640954 kubelet[2659]: W0706 23:39:44.640935 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.640954 kubelet[2659]: E0706 23:39:44.640946 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.641145 kubelet[2659]: E0706 23:39:44.641129 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.641145 kubelet[2659]: W0706 23:39:44.641142 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.641191 kubelet[2659]: E0706 23:39:44.641151 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.641321 kubelet[2659]: E0706 23:39:44.641302 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.641321 kubelet[2659]: W0706 23:39:44.641315 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.641382 kubelet[2659]: E0706 23:39:44.641324 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.641480 kubelet[2659]: E0706 23:39:44.641465 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.641480 kubelet[2659]: W0706 23:39:44.641478 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.641528 kubelet[2659]: E0706 23:39:44.641487 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.641672 kubelet[2659]: E0706 23:39:44.641653 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.641672 kubelet[2659]: W0706 23:39:44.641666 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.641746 kubelet[2659]: E0706 23:39:44.641675 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.645037 kubelet[2659]: E0706 23:39:44.645013 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.645037 kubelet[2659]: W0706 23:39:44.645030 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.645037 kubelet[2659]: E0706 23:39:44.645043 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.645202 kubelet[2659]: I0706 23:39:44.645070 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f22b58de-d418-473b-9edf-74d8e58c1351-registration-dir\") pod \"csi-node-driver-m4xsf\" (UID: \"f22b58de-d418-473b-9edf-74d8e58c1351\") " pod="calico-system/csi-node-driver-m4xsf"
Jul 6 23:39:44.645260 kubelet[2659]: E0706 23:39:44.645244 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.645260 kubelet[2659]: W0706 23:39:44.645257 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.645306 kubelet[2659]: E0706 23:39:44.645270 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.645306 kubelet[2659]: I0706 23:39:44.645284 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckbk\" (UniqueName: \"kubernetes.io/projected/f22b58de-d418-473b-9edf-74d8e58c1351-kube-api-access-kckbk\") pod \"csi-node-driver-m4xsf\" (UID: \"f22b58de-d418-473b-9edf-74d8e58c1351\") " pod="calico-system/csi-node-driver-m4xsf"
Jul 6 23:39:44.646025 kubelet[2659]: E0706 23:39:44.646003 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.646025 kubelet[2659]: W0706 23:39:44.646020 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.646111 kubelet[2659]: E0706 23:39:44.646042 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:39:44.646111 kubelet[2659]: I0706 23:39:44.646066 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f22b58de-d418-473b-9edf-74d8e58c1351-socket-dir\") pod \"csi-node-driver-m4xsf\" (UID: \"f22b58de-d418-473b-9edf-74d8e58c1351\") " pod="calico-system/csi-node-driver-m4xsf"
Jul 6 23:39:44.646779 kubelet[2659]: E0706 23:39:44.646413 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:39:44.646779 kubelet[2659]: W0706 23:39:44.646429 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:39:44.646779 kubelet[2659]: E0706 23:39:44.646446 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.646779 kubelet[2659]: I0706 23:39:44.646462 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22b58de-d418-473b-9edf-74d8e58c1351-kubelet-dir\") pod \"csi-node-driver-m4xsf\" (UID: \"f22b58de-d418-473b-9edf-74d8e58c1351\") " pod="calico-system/csi-node-driver-m4xsf" Jul 6 23:39:44.647445 kubelet[2659]: E0706 23:39:44.647384 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.647445 kubelet[2659]: W0706 23:39:44.647401 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.647807 kubelet[2659]: E0706 23:39:44.647488 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.647807 kubelet[2659]: I0706 23:39:44.647508 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f22b58de-d418-473b-9edf-74d8e58c1351-varrun\") pod \"csi-node-driver-m4xsf\" (UID: \"f22b58de-d418-473b-9edf-74d8e58c1351\") " pod="calico-system/csi-node-driver-m4xsf" Jul 6 23:39:44.647807 kubelet[2659]: E0706 23:39:44.647666 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.647807 kubelet[2659]: W0706 23:39:44.647687 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.647807 kubelet[2659]: E0706 23:39:44.647734 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.648101 kubelet[2659]: E0706 23:39:44.647900 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.648101 kubelet[2659]: W0706 23:39:44.647909 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.648101 kubelet[2659]: E0706 23:39:44.647966 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.648914 kubelet[2659]: E0706 23:39:44.648176 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.648914 kubelet[2659]: W0706 23:39:44.648190 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.648914 kubelet[2659]: E0706 23:39:44.648311 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.648914 kubelet[2659]: E0706 23:39:44.648638 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.648914 kubelet[2659]: W0706 23:39:44.648650 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.648914 kubelet[2659]: E0706 23:39:44.648772 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.649176 kubelet[2659]: E0706 23:39:44.648926 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649176 kubelet[2659]: W0706 23:39:44.648935 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649176 kubelet[2659]: E0706 23:39:44.648975 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.649176 kubelet[2659]: E0706 23:39:44.649154 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649176 kubelet[2659]: W0706 23:39:44.649163 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649176 kubelet[2659]: E0706 23:39:44.649173 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.649368 kubelet[2659]: E0706 23:39:44.649349 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649368 kubelet[2659]: W0706 23:39:44.649360 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649368 kubelet[2659]: E0706 23:39:44.649368 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.649520 kubelet[2659]: E0706 23:39:44.649504 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649520 kubelet[2659]: W0706 23:39:44.649514 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649583 kubelet[2659]: E0706 23:39:44.649522 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.649685 kubelet[2659]: E0706 23:39:44.649672 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649685 kubelet[2659]: W0706 23:39:44.649682 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649743 kubelet[2659]: E0706 23:39:44.649690 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.649865 kubelet[2659]: E0706 23:39:44.649848 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.649865 kubelet[2659]: W0706 23:39:44.649860 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.649935 kubelet[2659]: E0706 23:39:44.649867 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.652915 containerd[1533]: time="2025-07-06T23:39:44.651740560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khs2s,Uid:017d4531-7340-434b-9272-c3f736a1a6ac,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:44.676660 containerd[1533]: time="2025-07-06T23:39:44.676613292Z" level=info msg="connecting to shim 7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a" address="unix:///run/containerd/s/5777823fff217e5c324379fcb6e1766b1eb05d86f0e3f5139cda1868c235fa20" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:44.699042 systemd[1]: Started cri-containerd-7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a.scope - libcontainer container 7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a. Jul 6 23:39:44.745402 containerd[1533]: time="2025-07-06T23:39:44.745337804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khs2s,Uid:017d4531-7340-434b-9272-c3f736a1a6ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\"" Jul 6 23:39:44.748472 kubelet[2659]: E0706 23:39:44.748446 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.748956 kubelet[2659]: W0706 23:39:44.748925 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.748999 kubelet[2659]: E0706 23:39:44.748966 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.749242 kubelet[2659]: E0706 23:39:44.749215 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.749275 kubelet[2659]: W0706 23:39:44.749241 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.749275 kubelet[2659]: E0706 23:39:44.749261 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.749580 kubelet[2659]: E0706 23:39:44.749549 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.749610 kubelet[2659]: W0706 23:39:44.749582 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.749671 kubelet[2659]: E0706 23:39:44.749648 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.750391 kubelet[2659]: E0706 23:39:44.750333 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.750391 kubelet[2659]: W0706 23:39:44.750353 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.750391 kubelet[2659]: E0706 23:39:44.750371 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.750740 kubelet[2659]: E0706 23:39:44.750721 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.750740 kubelet[2659]: W0706 23:39:44.750735 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.750930 kubelet[2659]: E0706 23:39:44.750906 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.751267 kubelet[2659]: E0706 23:39:44.751249 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.751267 kubelet[2659]: W0706 23:39:44.751264 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.751346 kubelet[2659]: E0706 23:39:44.751296 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.751580 kubelet[2659]: E0706 23:39:44.751540 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.751580 kubelet[2659]: W0706 23:39:44.751554 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.751695 kubelet[2659]: E0706 23:39:44.751673 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.751929 kubelet[2659]: E0706 23:39:44.751914 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.751929 kubelet[2659]: W0706 23:39:44.751927 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.752001 kubelet[2659]: E0706 23:39:44.751953 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.752191 kubelet[2659]: E0706 23:39:44.752176 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.752191 kubelet[2659]: W0706 23:39:44.752189 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.752249 kubelet[2659]: E0706 23:39:44.752211 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.752401 kubelet[2659]: E0706 23:39:44.752377 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.752431 kubelet[2659]: W0706 23:39:44.752402 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.752431 kubelet[2659]: E0706 23:39:44.752416 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.752588 kubelet[2659]: E0706 23:39:44.752578 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.752588 kubelet[2659]: W0706 23:39:44.752588 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.752635 kubelet[2659]: E0706 23:39:44.752606 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.752823 kubelet[2659]: E0706 23:39:44.752796 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.752823 kubelet[2659]: W0706 23:39:44.752808 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.752823 kubelet[2659]: E0706 23:39:44.752821 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.753152 kubelet[2659]: E0706 23:39:44.753126 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.753152 kubelet[2659]: W0706 23:39:44.753152 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.753205 kubelet[2659]: E0706 23:39:44.753171 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.753401 kubelet[2659]: E0706 23:39:44.753389 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.753485 kubelet[2659]: W0706 23:39:44.753401 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.753485 kubelet[2659]: E0706 23:39:44.753425 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.753540 kubelet[2659]: E0706 23:39:44.753523 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.753540 kubelet[2659]: W0706 23:39:44.753531 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.753647 kubelet[2659]: E0706 23:39:44.753616 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.753701 kubelet[2659]: E0706 23:39:44.753657 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.753701 kubelet[2659]: W0706 23:39:44.753666 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.753743 kubelet[2659]: E0706 23:39:44.753722 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.753941 kubelet[2659]: E0706 23:39:44.753825 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.753941 kubelet[2659]: W0706 23:39:44.753837 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.753941 kubelet[2659]: E0706 23:39:44.753850 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.754340 kubelet[2659]: E0706 23:39:44.754323 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.754415 kubelet[2659]: W0706 23:39:44.754401 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.754490 kubelet[2659]: E0706 23:39:44.754478 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.755195 kubelet[2659]: E0706 23:39:44.755143 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.755195 kubelet[2659]: W0706 23:39:44.755163 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.755195 kubelet[2659]: E0706 23:39:44.755187 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.755946 kubelet[2659]: E0706 23:39:44.755919 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.755946 kubelet[2659]: W0706 23:39:44.755936 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.755946 kubelet[2659]: E0706 23:39:44.755984 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.756238 kubelet[2659]: E0706 23:39:44.756116 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.756238 kubelet[2659]: W0706 23:39:44.756139 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.756238 kubelet[2659]: E0706 23:39:44.756191 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.756556 kubelet[2659]: E0706 23:39:44.756366 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.756556 kubelet[2659]: W0706 23:39:44.756394 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.756785 kubelet[2659]: E0706 23:39:44.756528 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.756972 kubelet[2659]: E0706 23:39:44.756949 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.756972 kubelet[2659]: W0706 23:39:44.756967 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.757047 kubelet[2659]: E0706 23:39:44.756987 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.757501 kubelet[2659]: E0706 23:39:44.757465 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.757501 kubelet[2659]: W0706 23:39:44.757481 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.757562 kubelet[2659]: E0706 23:39:44.757513 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:44.758336 kubelet[2659]: E0706 23:39:44.758310 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.758336 kubelet[2659]: W0706 23:39:44.758334 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.758427 kubelet[2659]: E0706 23:39:44.758349 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:44.783218 kubelet[2659]: E0706 23:39:44.783183 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:44.783218 kubelet[2659]: W0706 23:39:44.783211 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:44.783367 kubelet[2659]: E0706 23:39:44.783232 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:39:45.286934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46138703.mount: Deactivated successfully. Jul 6 23:39:46.365797 containerd[1533]: time="2025-07-06T23:39:46.365506447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:46.366224 containerd[1533]: time="2025-07-06T23:39:46.366026658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:39:46.367055 containerd[1533]: time="2025-07-06T23:39:46.366994377Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:46.369413 containerd[1533]: time="2025-07-06T23:39:46.369369678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:46.370255 containerd[1533]: time="2025-07-06T23:39:46.370229161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.888176015s" Jul 6 23:39:46.370372 containerd[1533]: time="2025-07-06T23:39:46.370356523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:39:46.371372 containerd[1533]: time="2025-07-06T23:39:46.371347889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:39:46.379817 containerd[1533]: time="2025-07-06T23:39:46.379772220Z" level=info msg="CreateContainer within sandbox \"37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:39:46.386998 containerd[1533]: time="2025-07-06T23:39:46.386951462Z" level=info msg="Container 00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:46.394062 containerd[1533]: time="2025-07-06T23:39:46.393998660Z" level=info msg="CreateContainer within sandbox \"37306b79c2a5f7a8307ce02fa53bfb3f0288e0095546039e038a50365385160e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a\"" Jul 6 23:39:46.395080 containerd[1533]: time="2025-07-06T23:39:46.395034441Z" level=info msg="StartContainer for \"00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a\"" Jul 6 23:39:46.398351 containerd[1533]: time="2025-07-06T23:39:46.398227852Z" level=info msg="connecting to shim 00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a" address="unix:///run/containerd/s/e802dd0b74c6f0f710b679b080c7c34ad548f78f6723abe635d4efaa55764b8f" protocol=ttrpc version=3 Jul 6 
23:39:46.429088 systemd[1]: Started cri-containerd-00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a.scope - libcontainer container 00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a. Jul 6 23:39:46.476420 containerd[1533]: time="2025-07-06T23:39:46.476379401Z" level=info msg="StartContainer for \"00baf036ef13d284526ee205bbb564ac58f5eb5d43cb792718753f82ad73a75a\" returns successfully" Jul 6 23:39:46.539467 kubelet[2659]: E0706 23:39:46.539424 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4xsf" podUID="f22b58de-d418-473b-9edf-74d8e58c1351" Jul 6 23:39:46.654212 kubelet[2659]: E0706 23:39:46.654109 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:39:46.654212 kubelet[2659]: W0706 23:39:46.654133 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:39:46.654212 kubelet[2659]: E0706 23:39:46.654156 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:39:47.269498 containerd[1533]: time="2025-07-06T23:39:47.269436944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:47.270720 containerd[1533]: time="2025-07-06T23:39:47.270523246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:39:47.271346 containerd[1533]: time="2025-07-06T23:39:47.271319176Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:47.273813 containerd[1533]: time="2025-07-06T23:39:47.273780911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:47.274799 containerd[1533]: time="2025-07-06T23:39:47.274630378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 903.250118ms" Jul 6 23:39:47.274799 containerd[1533]: time="2025-07-06T23:39:47.274661988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:39:47.277922 containerd[1533]: time="2025-07-06T23:39:47.277028972Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:39:47.283545 containerd[1533]: time="2025-07-06T23:39:47.283513692Z" level=info msg="Container c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:47.289640 containerd[1533]: time="2025-07-06T23:39:47.289569117Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\"" Jul 6 23:39:47.291184 containerd[1533]: time="2025-07-06T23:39:47.289968443Z" level=info msg="StartContainer for \"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\"" Jul 6 23:39:47.291675 containerd[1533]: time="2025-07-06T23:39:47.291641649Z" level=info msg="connecting to shim c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548" address="unix:///run/containerd/s/5777823fff217e5c324379fcb6e1766b1eb05d86f0e3f5139cda1868c235fa20" protocol=ttrpc version=3 Jul 6 23:39:47.323084 systemd[1]: Started cri-containerd-c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548.scope - libcontainer container c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548. Jul 6 23:39:47.358201 containerd[1533]: time="2025-07-06T23:39:47.358146889Z" level=info msg="StartContainer for \"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\" returns successfully" Jul 6 23:39:47.393577 systemd[1]: cri-containerd-c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548.scope: Deactivated successfully. Jul 6 23:39:47.393853 systemd[1]: cri-containerd-c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548.scope: Consumed 49ms CPU time, 6.2M memory peak, 4.5M written to disk. 
Jul 6 23:39:47.406138 containerd[1533]: time="2025-07-06T23:39:47.404675645Z" level=info msg="received exit event container_id:\"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\" id:\"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\" pid:3365 exited_at:{seconds:1751845187 nanos:401746443}" Jul 6 23:39:47.411422 containerd[1533]: time="2025-07-06T23:39:47.411371391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\" id:\"c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548\" pid:3365 exited_at:{seconds:1751845187 nanos:401746443}" Jul 6 23:39:47.443007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c51989cc88c970c69e8768e69fc114237297dbd00f0bde3046ab8be4d8440548-rootfs.mount: Deactivated successfully. Jul 6 23:39:47.604524 kubelet[2659]: I0706 23:39:47.604383 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:47.605829 containerd[1533]: time="2025-07-06T23:39:47.605677192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:39:47.625905 kubelet[2659]: I0706 23:39:47.625817 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bc7dc6f8-fqwjc" podStartSLOduration=2.733811921 podStartE2EDuration="4.62579632s" podCreationTimestamp="2025-07-06 23:39:43 +0000 UTC" firstStartedPulling="2025-07-06 23:39:44.479169346 +0000 UTC m=+20.042324332" lastFinishedPulling="2025-07-06 23:39:46.371153745 +0000 UTC m=+21.934308731" observedRunningTime="2025-07-06 23:39:46.61885003 +0000 UTC m=+22.182005016" watchObservedRunningTime="2025-07-06 23:39:47.62579632 +0000 UTC m=+23.188951306" Jul 6 23:39:48.538898 kubelet[2659]: E0706 23:39:48.538853 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4xsf" podUID="f22b58de-d418-473b-9edf-74d8e58c1351" Jul 6 23:39:50.535440 kubelet[2659]: E0706 23:39:50.535373 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4xsf" podUID="f22b58de-d418-473b-9edf-74d8e58c1351" Jul 6 23:39:50.565045 containerd[1533]: time="2025-07-06T23:39:50.564999427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:50.566169 containerd[1533]: time="2025-07-06T23:39:50.566131980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:39:50.566964 containerd[1533]: time="2025-07-06T23:39:50.566899792Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:50.568842 containerd[1533]: time="2025-07-06T23:39:50.568788394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:50.569901 containerd[1533]: time="2025-07-06T23:39:50.569677360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.963948193s" Jul 6 23:39:50.569901 containerd[1533]: time="2025-07-06T23:39:50.569716131Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:39:50.572471 containerd[1533]: time="2025-07-06T23:39:50.572419878Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:39:50.582700 containerd[1533]: time="2025-07-06T23:39:50.581444374Z" level=info msg="Container 0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:50.592284 containerd[1533]: time="2025-07-06T23:39:50.592225555Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\"" Jul 6 23:39:50.592870 containerd[1533]: time="2025-07-06T23:39:50.592839364Z" level=info msg="StartContainer for \"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\"" Jul 6 23:39:50.595700 containerd[1533]: time="2025-07-06T23:39:50.595659784Z" level=info msg="connecting to shim 0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870" address="unix:///run/containerd/s/5777823fff217e5c324379fcb6e1766b1eb05d86f0e3f5139cda1868c235fa20" protocol=ttrpc version=3 Jul 6 23:39:50.643311 systemd[1]: Started cri-containerd-0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870.scope - libcontainer container 0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870. 
Jul 6 23:39:50.698819 containerd[1533]: time="2025-07-06T23:39:50.698772735Z" level=info msg="StartContainer for \"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\" returns successfully" Jul 6 23:39:51.394825 systemd[1]: cri-containerd-0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870.scope: Deactivated successfully. Jul 6 23:39:51.395153 systemd[1]: cri-containerd-0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870.scope: Consumed 570ms CPU time, 175M memory peak, 2.8M read from disk, 165.8M written to disk. Jul 6 23:39:51.399172 containerd[1533]: time="2025-07-06T23:39:51.398889445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\" id:\"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\" pid:3427 exited_at:{seconds:1751845191 nanos:398337498}" Jul 6 23:39:51.405298 containerd[1533]: time="2025-07-06T23:39:51.405228567Z" level=info msg="received exit event container_id:\"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\" id:\"0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870\" pid:3427 exited_at:{seconds:1751845191 nanos:398337498}" Jul 6 23:39:51.409034 kubelet[2659]: I0706 23:39:51.408970 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:39:51.461297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0398537d9c0089f8d3cb66f8113c106558d1ca9f1e97dec85e4a925b0f14e870-rootfs.mount: Deactivated successfully. 
Jul 6 23:39:51.482449 kubelet[2659]: W0706 23:39:51.471474 2659 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 6 23:39:51.482449 kubelet[2659]: E0706 23:39:51.471518 2659 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 6 23:39:51.492142 systemd[1]: Created slice kubepods-besteffort-pod10290fb3_8639_44d7_ab20_491341382d8b.slice - libcontainer container kubepods-besteffort-pod10290fb3_8639_44d7_ab20_491341382d8b.slice. Jul 6 23:39:51.511019 systemd[1]: Created slice kubepods-burstable-podb21cb156_a9a7_4515_9788_bb30ff372348.slice - libcontainer container kubepods-burstable-podb21cb156_a9a7_4515_9788_bb30ff372348.slice. 
Jul 6 23:39:51.512388 kubelet[2659]: I0706 23:39:51.512340 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b21cb156-a9a7-4515-9788-bb30ff372348-config-volume\") pod \"coredns-668d6bf9bc-wchll\" (UID: \"b21cb156-a9a7-4515-9788-bb30ff372348\") " pod="kube-system/coredns-668d6bf9bc-wchll" Jul 6 23:39:51.512461 kubelet[2659]: I0706 23:39:51.512397 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trccp\" (UniqueName: \"kubernetes.io/projected/5aae3a8c-45b2-472c-9498-0c8bd37dfa4d-kube-api-access-trccp\") pod \"calico-apiserver-8894d8485-nh88g\" (UID: \"5aae3a8c-45b2-472c-9498-0c8bd37dfa4d\") " pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" Jul 6 23:39:51.512461 kubelet[2659]: I0706 23:39:51.512419 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-backend-key-pair\") pod \"whisker-7b6cb8cdcb-sbkm4\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " pod="calico-system/whisker-7b6cb8cdcb-sbkm4" Jul 6 23:39:51.512461 kubelet[2659]: I0706 23:39:51.512441 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjw5q\" (UniqueName: \"kubernetes.io/projected/9e7c8d04-4b45-4f3a-8697-5eb725212899-kube-api-access-gjw5q\") pod \"coredns-668d6bf9bc-sw5kv\" (UID: \"9e7c8d04-4b45-4f3a-8697-5eb725212899\") " pod="kube-system/coredns-668d6bf9bc-sw5kv" Jul 6 23:39:51.512461 kubelet[2659]: I0706 23:39:51.512459 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00f8e3fa-c9d3-485d-ba30-e0a92fd58537-config\") pod \"goldmane-768f4c5c69-hs8w5\" (UID: \"00f8e3fa-c9d3-485d-ba30-e0a92fd58537\") " 
pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:51.512564 kubelet[2659]: I0706 23:39:51.512475 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w62b\" (UniqueName: \"kubernetes.io/projected/5ca78c76-44cb-44db-913c-bf2ef022a55c-kube-api-access-8w62b\") pod \"whisker-7b6cb8cdcb-sbkm4\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " pod="calico-system/whisker-7b6cb8cdcb-sbkm4" Jul 6 23:39:51.512564 kubelet[2659]: I0706 23:39:51.512509 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzcxv\" (UniqueName: \"kubernetes.io/projected/db18717e-96a8-4ef1-bdd8-3bb06b81a7fa-kube-api-access-xzcxv\") pod \"calico-apiserver-8894d8485-shntm\" (UID: \"db18717e-96a8-4ef1-bdd8-3bb06b81a7fa\") " pod="calico-apiserver/calico-apiserver-8894d8485-shntm" Jul 6 23:39:51.512564 kubelet[2659]: I0706 23:39:51.512533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f8e3fa-c9d3-485d-ba30-e0a92fd58537-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-hs8w5\" (UID: \"00f8e3fa-c9d3-485d-ba30-e0a92fd58537\") " pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:51.512564 kubelet[2659]: I0706 23:39:51.512548 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-ca-bundle\") pod \"whisker-7b6cb8cdcb-sbkm4\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " pod="calico-system/whisker-7b6cb8cdcb-sbkm4" Jul 6 23:39:51.512652 kubelet[2659]: I0706 23:39:51.512564 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn952\" (UniqueName: \"kubernetes.io/projected/b21cb156-a9a7-4515-9788-bb30ff372348-kube-api-access-wn952\") pod 
\"coredns-668d6bf9bc-wchll\" (UID: \"b21cb156-a9a7-4515-9788-bb30ff372348\") " pod="kube-system/coredns-668d6bf9bc-wchll" Jul 6 23:39:51.512652 kubelet[2659]: I0706 23:39:51.512584 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e7c8d04-4b45-4f3a-8697-5eb725212899-config-volume\") pod \"coredns-668d6bf9bc-sw5kv\" (UID: \"9e7c8d04-4b45-4f3a-8697-5eb725212899\") " pod="kube-system/coredns-668d6bf9bc-sw5kv" Jul 6 23:39:51.512652 kubelet[2659]: I0706 23:39:51.512602 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgq5\" (UniqueName: \"kubernetes.io/projected/00f8e3fa-c9d3-485d-ba30-e0a92fd58537-kube-api-access-dlgq5\") pod \"goldmane-768f4c5c69-hs8w5\" (UID: \"00f8e3fa-c9d3-485d-ba30-e0a92fd58537\") " pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:51.512652 kubelet[2659]: I0706 23:39:51.512617 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5aae3a8c-45b2-472c-9498-0c8bd37dfa4d-calico-apiserver-certs\") pod \"calico-apiserver-8894d8485-nh88g\" (UID: \"5aae3a8c-45b2-472c-9498-0c8bd37dfa4d\") " pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" Jul 6 23:39:51.512652 kubelet[2659]: I0706 23:39:51.512640 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db18717e-96a8-4ef1-bdd8-3bb06b81a7fa-calico-apiserver-certs\") pod \"calico-apiserver-8894d8485-shntm\" (UID: \"db18717e-96a8-4ef1-bdd8-3bb06b81a7fa\") " pod="calico-apiserver/calico-apiserver-8894d8485-shntm" Jul 6 23:39:51.512769 kubelet[2659]: I0706 23:39:51.512666 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/00f8e3fa-c9d3-485d-ba30-e0a92fd58537-goldmane-key-pair\") pod \"goldmane-768f4c5c69-hs8w5\" (UID: \"00f8e3fa-c9d3-485d-ba30-e0a92fd58537\") " pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:51.512769 kubelet[2659]: I0706 23:39:51.512683 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmvw\" (UniqueName: \"kubernetes.io/projected/10290fb3-8639-44d7-ab20-491341382d8b-kube-api-access-hxmvw\") pod \"calico-kube-controllers-667774fdfc-fmclb\" (UID: \"10290fb3-8639-44d7-ab20-491341382d8b\") " pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" Jul 6 23:39:51.512769 kubelet[2659]: I0706 23:39:51.512706 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10290fb3-8639-44d7-ab20-491341382d8b-tigera-ca-bundle\") pod \"calico-kube-controllers-667774fdfc-fmclb\" (UID: \"10290fb3-8639-44d7-ab20-491341382d8b\") " pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" Jul 6 23:39:51.518016 systemd[1]: Created slice kubepods-burstable-pod9e7c8d04_4b45_4f3a_8697_5eb725212899.slice - libcontainer container kubepods-burstable-pod9e7c8d04_4b45_4f3a_8697_5eb725212899.slice. Jul 6 23:39:51.528869 systemd[1]: Created slice kubepods-besteffort-pod5aae3a8c_45b2_472c_9498_0c8bd37dfa4d.slice - libcontainer container kubepods-besteffort-pod5aae3a8c_45b2_472c_9498_0c8bd37dfa4d.slice. Jul 6 23:39:51.534894 systemd[1]: Created slice kubepods-besteffort-pod5ca78c76_44cb_44db_913c_bf2ef022a55c.slice - libcontainer container kubepods-besteffort-pod5ca78c76_44cb_44db_913c_bf2ef022a55c.slice. Jul 6 23:39:51.543174 systemd[1]: Created slice kubepods-besteffort-poddb18717e_96a8_4ef1_bdd8_3bb06b81a7fa.slice - libcontainer container kubepods-besteffort-poddb18717e_96a8_4ef1_bdd8_3bb06b81a7fa.slice. 
Jul 6 23:39:51.549107 systemd[1]: Created slice kubepods-besteffort-pod00f8e3fa_c9d3_485d_ba30_e0a92fd58537.slice - libcontainer container kubepods-besteffort-pod00f8e3fa_c9d3_485d_ba30_e0a92fd58537.slice. Jul 6 23:39:51.656331 containerd[1533]: time="2025-07-06T23:39:51.656217610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:39:51.806398 containerd[1533]: time="2025-07-06T23:39:51.806346649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667774fdfc-fmclb,Uid:10290fb3-8639-44d7-ab20-491341382d8b,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:51.834617 containerd[1533]: time="2025-07-06T23:39:51.834576781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-nh88g,Uid:5aae3a8c-45b2-472c-9498-0c8bd37dfa4d,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:39:51.842494 containerd[1533]: time="2025-07-06T23:39:51.842454031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b6cb8cdcb-sbkm4,Uid:5ca78c76-44cb-44db-913c-bf2ef022a55c,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:51.847640 containerd[1533]: time="2025-07-06T23:39:51.847601677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-shntm,Uid:db18717e-96a8-4ef1-bdd8-3bb06b81a7fa,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:39:51.856560 containerd[1533]: time="2025-07-06T23:39:51.856519403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hs8w5,Uid:00f8e3fa-c9d3-485d-ba30-e0a92fd58537,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:52.179739 containerd[1533]: time="2025-07-06T23:39:52.178423229Z" level=error msg="Failed to destroy network for sandbox \"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 
23:39:52.180013 containerd[1533]: time="2025-07-06T23:39:52.179757729Z" level=error msg="Failed to destroy network for sandbox \"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.181566 containerd[1533]: time="2025-07-06T23:39:52.181491851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-shntm,Uid:db18717e-96a8-4ef1-bdd8-3bb06b81a7fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.182556 kubelet[2659]: E0706 23:39:52.182251 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.182556 kubelet[2659]: E0706 23:39:52.182343 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8894d8485-shntm" Jul 6 23:39:52.182556 kubelet[2659]: E0706 23:39:52.182365 2659 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8894d8485-shntm" Jul 6 23:39:52.183093 kubelet[2659]: E0706 23:39:52.182423 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8894d8485-shntm_calico-apiserver(db18717e-96a8-4ef1-bdd8-3bb06b81a7fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8894d8485-shntm_calico-apiserver(db18717e-96a8-4ef1-bdd8-3bb06b81a7fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b0b0bf193f9b00e33a0c1a0545c168c8a2f7279d52515c3af645857d96dd4e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8894d8485-shntm" podUID="db18717e-96a8-4ef1-bdd8-3bb06b81a7fa" Jul 6 23:39:52.183866 containerd[1533]: time="2025-07-06T23:39:52.183814483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-nh88g,Uid:5aae3a8c-45b2-472c-9498-0c8bd37dfa4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.184073 kubelet[2659]: E0706 23:39:52.184041 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.184145 kubelet[2659]: E0706 23:39:52.184092 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" Jul 6 23:39:52.184145 kubelet[2659]: E0706 23:39:52.184113 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" Jul 6 23:39:52.184200 kubelet[2659]: E0706 23:39:52.184144 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8894d8485-nh88g_calico-apiserver(5aae3a8c-45b2-472c-9498-0c8bd37dfa4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8894d8485-nh88g_calico-apiserver(5aae3a8c-45b2-472c-9498-0c8bd37dfa4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63dfafb4fb236fb7ec5c561bfbf7e6a46c84a229dfc935f6079139b35012ef3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" podUID="5aae3a8c-45b2-472c-9498-0c8bd37dfa4d" Jul 6 23:39:52.191520 containerd[1533]: time="2025-07-06T23:39:52.191474236Z" level=error msg="Failed to destroy network for sandbox \"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.192778 containerd[1533]: time="2025-07-06T23:39:52.192642974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hs8w5,Uid:00f8e3fa-c9d3-485d-ba30-e0a92fd58537,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.193007 kubelet[2659]: E0706 23:39:52.192947 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.193070 kubelet[2659]: E0706 23:39:52.193032 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:52.193070 kubelet[2659]: E0706 23:39:52.193054 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-hs8w5" Jul 6 23:39:52.193127 kubelet[2659]: E0706 23:39:52.193104 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-hs8w5_calico-system(00f8e3fa-c9d3-485d-ba30-e0a92fd58537)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-hs8w5_calico-system(00f8e3fa-c9d3-485d-ba30-e0a92fd58537)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50c7dbc2f86dc2ef288a03464179702343f43cd5863e3ce18f40409e8a723def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-hs8w5" podUID="00f8e3fa-c9d3-485d-ba30-e0a92fd58537" Jul 6 23:39:52.197356 containerd[1533]: time="2025-07-06T23:39:52.197301441Z" level=error msg="Failed to destroy network for sandbox \"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.198774 containerd[1533]: time="2025-07-06T23:39:52.198733366Z" level=error msg="Failed to destroy network for sandbox \"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.198941 containerd[1533]: time="2025-07-06T23:39:52.198872802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b6cb8cdcb-sbkm4,Uid:5ca78c76-44cb-44db-913c-bf2ef022a55c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.199202 kubelet[2659]: E0706 23:39:52.199163 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.199251 kubelet[2659]: E0706 23:39:52.199224 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b6cb8cdcb-sbkm4" Jul 6 23:39:52.199251 kubelet[2659]: E0706 23:39:52.199243 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b6cb8cdcb-sbkm4" Jul 6 23:39:52.199317 kubelet[2659]: E0706 23:39:52.199282 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b6cb8cdcb-sbkm4_calico-system(5ca78c76-44cb-44db-913c-bf2ef022a55c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b6cb8cdcb-sbkm4_calico-system(5ca78c76-44cb-44db-913c-bf2ef022a55c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e15fbecd0aad790a0152767fe71c028a37d75d27c656a88eba5b4b0e7b8f68e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b6cb8cdcb-sbkm4" podUID="5ca78c76-44cb-44db-913c-bf2ef022a55c" Jul 6 23:39:52.203557 containerd[1533]: time="2025-07-06T23:39:52.203469734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667774fdfc-fmclb,Uid:10290fb3-8639-44d7-ab20-491341382d8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.204042 kubelet[2659]: E0706 23:39:52.203762 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 
23:39:52.204042 kubelet[2659]: E0706 23:39:52.203827 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" Jul 6 23:39:52.204042 kubelet[2659]: E0706 23:39:52.203846 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" Jul 6 23:39:52.204346 kubelet[2659]: E0706 23:39:52.203904 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-667774fdfc-fmclb_calico-system(10290fb3-8639-44d7-ab20-491341382d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-667774fdfc-fmclb_calico-system(10290fb3-8639-44d7-ab20-491341382d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c5b85b5efa61dca7414f525e46986a4f84068a387e4a3a3b665c865c1b3fa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" podUID="10290fb3-8639-44d7-ab20-491341382d8b" Jul 6 23:39:52.542998 systemd[1]: Created slice kubepods-besteffort-podf22b58de_d418_473b_9edf_74d8e58c1351.slice - libcontainer container 
kubepods-besteffort-podf22b58de_d418_473b_9edf_74d8e58c1351.slice. Jul 6 23:39:52.546576 containerd[1533]: time="2025-07-06T23:39:52.546528468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4xsf,Uid:f22b58de-d418-473b-9edf-74d8e58c1351,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:52.633817 kubelet[2659]: E0706 23:39:52.632107 2659 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:39:52.633817 kubelet[2659]: E0706 23:39:52.632230 2659 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21cb156-a9a7-4515-9788-bb30ff372348-config-volume podName:b21cb156-a9a7-4515-9788-bb30ff372348 nodeName:}" failed. No retries permitted until 2025-07-06 23:39:53.132201948 +0000 UTC m=+28.695356934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b21cb156-a9a7-4515-9788-bb30ff372348-config-volume") pod "coredns-668d6bf9bc-wchll" (UID: "b21cb156-a9a7-4515-9788-bb30ff372348") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:39:52.634042 containerd[1533]: time="2025-07-06T23:39:52.632706117Z" level=error msg="Failed to destroy network for sandbox \"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.635004 systemd[1]: run-netns-cni\x2d4b1b3f53\x2dc4fa\x2dd7e1\x2d361f\x2dca74c414ed64.mount: Deactivated successfully. 
Jul 6 23:39:52.640141 containerd[1533]: time="2025-07-06T23:39:52.640072755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4xsf,Uid:f22b58de-d418-473b-9edf-74d8e58c1351,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.640392 kubelet[2659]: E0706 23:39:52.640352 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:52.640437 kubelet[2659]: E0706 23:39:52.640413 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4xsf" Jul 6 23:39:52.640461 kubelet[2659]: E0706 23:39:52.640442 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4xsf" Jul 6 
23:39:52.650277 kubelet[2659]: E0706 23:39:52.640488 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m4xsf_calico-system(f22b58de-d418-473b-9edf-74d8e58c1351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m4xsf_calico-system(f22b58de-d418-473b-9edf-74d8e58c1351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7d324bb60b8a350f987c8dd1514f7f062a9a0f814d5fb7583421b14eecfade9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4xsf" podUID="f22b58de-d418-473b-9edf-74d8e58c1351" Jul 6 23:39:52.654328 kubelet[2659]: E0706 23:39:52.654277 2659 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:39:52.654419 kubelet[2659]: E0706 23:39:52.654375 2659 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e7c8d04-4b45-4f3a-8697-5eb725212899-config-volume podName:9e7c8d04-4b45-4f3a-8697-5eb725212899 nodeName:}" failed. No retries permitted until 2025-07-06 23:39:53.154354756 +0000 UTC m=+28.717509702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9e7c8d04-4b45-4f3a-8697-5eb725212899-config-volume") pod "coredns-668d6bf9bc-sw5kv" (UID: "9e7c8d04-4b45-4f3a-8697-5eb725212899") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:39:53.315785 containerd[1533]: time="2025-07-06T23:39:53.315728515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wchll,Uid:b21cb156-a9a7-4515-9788-bb30ff372348,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:53.325076 containerd[1533]: time="2025-07-06T23:39:53.325002548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw5kv,Uid:9e7c8d04-4b45-4f3a-8697-5eb725212899,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:53.389259 containerd[1533]: time="2025-07-06T23:39:53.389209408Z" level=error msg="Failed to destroy network for sandbox \"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.390463 containerd[1533]: time="2025-07-06T23:39:53.390413703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wchll,Uid:b21cb156-a9a7-4515-9788-bb30ff372348,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.391326 kubelet[2659]: E0706 23:39:53.391249 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.391326 kubelet[2659]: E0706 23:39:53.391324 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wchll" Jul 6 23:39:53.392135 kubelet[2659]: E0706 23:39:53.391346 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wchll" Jul 6 23:39:53.392135 kubelet[2659]: E0706 23:39:53.391402 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wchll_kube-system(b21cb156-a9a7-4515-9788-bb30ff372348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wchll_kube-system(b21cb156-a9a7-4515-9788-bb30ff372348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a3e6935c08b7c10edfe04ddffc94a314b419a6fc01f66704962c33a8826487e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wchll" 
podUID="b21cb156-a9a7-4515-9788-bb30ff372348" Jul 6 23:39:53.404260 containerd[1533]: time="2025-07-06T23:39:53.404198922Z" level=error msg="Failed to destroy network for sandbox \"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.408114 containerd[1533]: time="2025-07-06T23:39:53.407871862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw5kv,Uid:9e7c8d04-4b45-4f3a-8697-5eb725212899,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.408294 kubelet[2659]: E0706 23:39:53.408248 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:39:53.408343 kubelet[2659]: E0706 23:39:53.408308 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw5kv" Jul 6 23:39:53.408370 kubelet[2659]: E0706 
23:39:53.408333 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw5kv" Jul 6 23:39:53.408406 kubelet[2659]: E0706 23:39:53.408386 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sw5kv_kube-system(9e7c8d04-4b45-4f3a-8697-5eb725212899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sw5kv_kube-system(9e7c8d04-4b45-4f3a-8697-5eb725212899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c37eca52c3ab2edd18d3cf1c77fa2c22cee73273e545789baab431c9b40e0707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sw5kv" podUID="9e7c8d04-4b45-4f3a-8697-5eb725212899" Jul 6 23:39:53.623001 systemd[1]: run-netns-cni\x2d0849ede4\x2dd7c3\x2dd8ca\x2dc0a5\x2d782ef2788fbd.mount: Deactivated successfully. Jul 6 23:39:53.623531 systemd[1]: run-netns-cni\x2d5a6c8067\x2d113a\x2dea09\x2d07a0\x2d5cf02944fa84.mount: Deactivated successfully. Jul 6 23:39:55.066612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375266059.mount: Deactivated successfully. 
Jul 6 23:39:55.296439 containerd[1533]: time="2025-07-06T23:39:55.296384658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:55.297018 containerd[1533]: time="2025-07-06T23:39:55.296987716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:39:55.298368 containerd[1533]: time="2025-07-06T23:39:55.298314897Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:55.301814 containerd[1533]: time="2025-07-06T23:39:55.301754479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:55.304869 containerd[1533]: time="2025-07-06T23:39:55.304829018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.648564236s" Jul 6 23:39:55.305033 containerd[1533]: time="2025-07-06T23:39:55.304870628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:39:55.321155 containerd[1533]: time="2025-07-06T23:39:55.320834537Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:39:55.331190 containerd[1533]: time="2025-07-06T23:39:55.331128998Z" level=info msg="Container 
ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:55.335446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177031442.mount: Deactivated successfully. Jul 6 23:39:55.353572 containerd[1533]: time="2025-07-06T23:39:55.353496523Z" level=info msg="CreateContainer within sandbox \"7924325a064e31e2664f4a9b7f5893ffe14502801b9fcd5d56f199667706875a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\"" Jul 6 23:39:55.354343 containerd[1533]: time="2025-07-06T23:39:55.354291224Z" level=info msg="StartContainer for \"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\"" Jul 6 23:39:55.356476 containerd[1533]: time="2025-07-06T23:39:55.356441113Z" level=info msg="connecting to shim ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1" address="unix:///run/containerd/s/5777823fff217e5c324379fcb6e1766b1eb05d86f0e3f5139cda1868c235fa20" protocol=ttrpc version=3 Jul 6 23:39:55.387113 systemd[1]: Started cri-containerd-ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1.scope - libcontainer container ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1. Jul 6 23:39:55.459066 containerd[1533]: time="2025-07-06T23:39:55.459007192Z" level=info msg="StartContainer for \"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\" returns successfully" Jul 6 23:39:56.067605 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:39:56.067870 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 6 23:39:56.325697 kubelet[2659]: I0706 23:39:56.325460 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-khs2s" podStartSLOduration=1.7625522820000001 podStartE2EDuration="12.325437766s" podCreationTimestamp="2025-07-06 23:39:44 +0000 UTC" firstStartedPulling="2025-07-06 23:39:44.748203238 +0000 UTC m=+20.311358224" lastFinishedPulling="2025-07-06 23:39:55.311088722 +0000 UTC m=+30.874243708" observedRunningTime="2025-07-06 23:39:55.710511014 +0000 UTC m=+31.273666000" watchObservedRunningTime="2025-07-06 23:39:56.325437766 +0000 UTC m=+31.888592752" Jul 6 23:39:56.350864 kubelet[2659]: I0706 23:39:56.350824 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-backend-key-pair\") pod \"5ca78c76-44cb-44db-913c-bf2ef022a55c\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " Jul 6 23:39:56.350864 kubelet[2659]: I0706 23:39:56.350873 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w62b\" (UniqueName: \"kubernetes.io/projected/5ca78c76-44cb-44db-913c-bf2ef022a55c-kube-api-access-8w62b\") pod \"5ca78c76-44cb-44db-913c-bf2ef022a55c\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " Jul 6 23:39:56.350864 kubelet[2659]: I0706 23:39:56.350938 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-ca-bundle\") pod \"5ca78c76-44cb-44db-913c-bf2ef022a55c\" (UID: \"5ca78c76-44cb-44db-913c-bf2ef022a55c\") " Jul 6 23:39:56.351486 kubelet[2659]: I0706 23:39:56.351404 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"5ca78c76-44cb-44db-913c-bf2ef022a55c" (UID: "5ca78c76-44cb-44db-913c-bf2ef022a55c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:39:56.363057 systemd[1]: var-lib-kubelet-pods-5ca78c76\x2d44cb\x2d44db\x2d913c\x2dbf2ef022a55c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8w62b.mount: Deactivated successfully. Jul 6 23:39:56.367154 kubelet[2659]: I0706 23:39:56.367087 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca78c76-44cb-44db-913c-bf2ef022a55c-kube-api-access-8w62b" (OuterVolumeSpecName: "kube-api-access-8w62b") pod "5ca78c76-44cb-44db-913c-bf2ef022a55c" (UID: "5ca78c76-44cb-44db-913c-bf2ef022a55c"). InnerVolumeSpecName "kube-api-access-8w62b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:39:56.370947 systemd[1]: var-lib-kubelet-pods-5ca78c76\x2d44cb\x2d44db\x2d913c\x2dbf2ef022a55c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:39:56.371438 kubelet[2659]: I0706 23:39:56.370941 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5ca78c76-44cb-44db-913c-bf2ef022a55c" (UID: "5ca78c76-44cb-44db-913c-bf2ef022a55c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:39:56.451622 kubelet[2659]: I0706 23:39:56.451571 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:39:56.451622 kubelet[2659]: I0706 23:39:56.451611 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ca78c76-44cb-44db-913c-bf2ef022a55c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:39:56.451622 kubelet[2659]: I0706 23:39:56.451622 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8w62b\" (UniqueName: \"kubernetes.io/projected/5ca78c76-44cb-44db-913c-bf2ef022a55c-kube-api-access-8w62b\") on node \"localhost\" DevicePath \"\"" Jul 6 23:39:56.573442 systemd[1]: Removed slice kubepods-besteffort-pod5ca78c76_44cb_44db_913c_bf2ef022a55c.slice - libcontainer container kubepods-besteffort-pod5ca78c76_44cb_44db_913c_bf2ef022a55c.slice. Jul 6 23:39:56.688585 kubelet[2659]: I0706 23:39:56.688460 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:56.790574 systemd[1]: Created slice kubepods-besteffort-pod241418ef_6164_409b_9339_e63d238f117f.slice - libcontainer container kubepods-besteffort-pod241418ef_6164_409b_9339_e63d238f117f.slice. 
Jul 6 23:39:56.863504 kubelet[2659]: I0706 23:39:56.863460 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqb5\" (UniqueName: \"kubernetes.io/projected/241418ef-6164-409b-9339-e63d238f117f-kube-api-access-ssqb5\") pod \"whisker-6bfd65fb8-bv8hf\" (UID: \"241418ef-6164-409b-9339-e63d238f117f\") " pod="calico-system/whisker-6bfd65fb8-bv8hf" Jul 6 23:39:56.863770 kubelet[2659]: I0706 23:39:56.863730 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/241418ef-6164-409b-9339-e63d238f117f-whisker-backend-key-pair\") pod \"whisker-6bfd65fb8-bv8hf\" (UID: \"241418ef-6164-409b-9339-e63d238f117f\") " pod="calico-system/whisker-6bfd65fb8-bv8hf" Jul 6 23:39:56.863948 kubelet[2659]: I0706 23:39:56.863921 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/241418ef-6164-409b-9339-e63d238f117f-whisker-ca-bundle\") pod \"whisker-6bfd65fb8-bv8hf\" (UID: \"241418ef-6164-409b-9339-e63d238f117f\") " pod="calico-system/whisker-6bfd65fb8-bv8hf" Jul 6 23:39:57.095973 containerd[1533]: time="2025-07-06T23:39:57.095917350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bfd65fb8-bv8hf,Uid:241418ef-6164-409b-9339-e63d238f117f,Namespace:calico-system,Attempt:0,}" Jul 6 23:39:57.536073 systemd-networkd[1429]: calida58abf21ed: Link UP Jul 6 23:39:57.537169 systemd-networkd[1429]: calida58abf21ed: Gained carrier Jul 6 23:39:57.549796 containerd[1533]: 2025-07-06 23:39:57.131 [INFO][3800] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:39:57.549796 containerd[1533]: 2025-07-06 23:39:57.269 [INFO][3800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0 
whisker-6bfd65fb8- calico-system 241418ef-6164-409b-9339-e63d238f117f 858 0 2025-07-06 23:39:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bfd65fb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bfd65fb8-bv8hf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calida58abf21ed [] [] }} ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-" Jul 6 23:39:57.549796 containerd[1533]: 2025-07-06 23:39:57.269 [INFO][3800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.549796 containerd[1533]: 2025-07-06 23:39:57.483 [INFO][3814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" HandleID="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Workload="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.483 [INFO][3814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" HandleID="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Workload="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000185760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bfd65fb8-bv8hf", "timestamp":"2025-07-06 23:39:57.483050719 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.483 [INFO][3814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.483 [INFO][3814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.483 [INFO][3814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.494 [INFO][3814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" host="localhost" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.499 [INFO][3814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.505 [INFO][3814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.507 [INFO][3814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.509 [INFO][3814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:39:57.551821 containerd[1533]: 2025-07-06 23:39:57.509 [INFO][3814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" host="localhost" Jul 6 23:39:57.552134 containerd[1533]: 2025-07-06 23:39:57.511 [INFO][3814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4 Jul 6 23:39:57.552134 containerd[1533]: 
2025-07-06 23:39:57.514 [INFO][3814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" host="localhost" Jul 6 23:39:57.552134 containerd[1533]: 2025-07-06 23:39:57.519 [INFO][3814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" host="localhost" Jul 6 23:39:57.552134 containerd[1533]: 2025-07-06 23:39:57.519 [INFO][3814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" host="localhost" Jul 6 23:39:57.552134 containerd[1533]: 2025-07-06 23:39:57.519 [INFO][3814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:39:57.552134 containerd[1533]: 2025-07-06 23:39:57.519 [INFO][3814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" HandleID="k8s-pod-network.df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Workload="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.552266 containerd[1533]: 2025-07-06 23:39:57.522 [INFO][3800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0", GenerateName:"whisker-6bfd65fb8-", Namespace:"calico-system", SelfLink:"", UID:"241418ef-6164-409b-9339-e63d238f117f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 
56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bfd65fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bfd65fb8-bv8hf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida58abf21ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:39:57.552266 containerd[1533]: 2025-07-06 23:39:57.522 [INFO][3800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.552333 containerd[1533]: 2025-07-06 23:39:57.522 [INFO][3800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida58abf21ed ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.552333 containerd[1533]: 2025-07-06 23:39:57.538 [INFO][3800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" 
WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.552383 containerd[1533]: 2025-07-06 23:39:57.539 [INFO][3800] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0", GenerateName:"whisker-6bfd65fb8-", Namespace:"calico-system", SelfLink:"", UID:"241418ef-6164-409b-9339-e63d238f117f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bfd65fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4", Pod:"whisker-6bfd65fb8-bv8hf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida58abf21ed", MAC:"7a:97:b0:68:35:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:39:57.552432 containerd[1533]: 2025-07-06 23:39:57.548 [INFO][3800] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" Namespace="calico-system" Pod="whisker-6bfd65fb8-bv8hf" WorkloadEndpoint="localhost-k8s-whisker--6bfd65fb8--bv8hf-eth0" Jul 6 23:39:57.653931 containerd[1533]: time="2025-07-06T23:39:57.653832959Z" level=info msg="connecting to shim df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4" address="unix:///run/containerd/s/40d48517b277b80467ea9697be1214c826f6ee5e68312d69d17caa75147de269" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:39:57.704090 systemd[1]: Started cri-containerd-df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4.scope - libcontainer container df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4. Jul 6 23:39:57.717005 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:39:57.739109 containerd[1533]: time="2025-07-06T23:39:57.739050082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bfd65fb8-bv8hf,Uid:241418ef-6164-409b-9339-e63d238f117f,Namespace:calico-system,Attempt:0,} returns sandbox id \"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4\"" Jul 6 23:39:57.741571 containerd[1533]: time="2025-07-06T23:39:57.741537009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:39:58.538197 kubelet[2659]: I0706 23:39:58.538137 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca78c76-44cb-44db-913c-bf2ef022a55c" path="/var/lib/kubelet/pods/5ca78c76-44cb-44db-913c-bf2ef022a55c/volumes" Jul 6 23:39:58.615414 containerd[1533]: time="2025-07-06T23:39:58.615359031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:58.615871 containerd[1533]: time="2025-07-06T23:39:58.615836689Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:39:58.616967 containerd[1533]: time="2025-07-06T23:39:58.616931033Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:58.619846 containerd[1533]: time="2025-07-06T23:39:58.619770694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:58.620671 containerd[1533]: time="2025-07-06T23:39:58.620334449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 878.75331ms" Jul 6 23:39:58.620671 containerd[1533]: time="2025-07-06T23:39:58.620368576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:39:58.623895 containerd[1533]: time="2025-07-06T23:39:58.623835846Z" level=info msg="CreateContainer within sandbox \"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:39:58.632900 containerd[1533]: time="2025-07-06T23:39:58.631682171Z" level=info msg="Container af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:58.640733 containerd[1533]: time="2025-07-06T23:39:58.640681813Z" level=info msg="CreateContainer within sandbox \"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420\""
Jul 6 23:39:58.641356 containerd[1533]: time="2025-07-06T23:39:58.641275775Z" level=info msg="StartContainer for \"af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420\""
Jul 6 23:39:58.642893 containerd[1533]: time="2025-07-06T23:39:58.642819811Z" level=info msg="connecting to shim af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420" address="unix:///run/containerd/s/40d48517b277b80467ea9697be1214c826f6ee5e68312d69d17caa75147de269" protocol=ttrpc version=3
Jul 6 23:39:58.666106 systemd[1]: Started cri-containerd-af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420.scope - libcontainer container af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420.
Jul 6 23:39:58.707023 containerd[1533]: time="2025-07-06T23:39:58.706982421Z" level=info msg="StartContainer for \"af3a79e348efa17b7a6a2366a1bce9d1ab0fe4b6612e318e8e45293065a80420\" returns successfully"
Jul 6 23:39:58.712274 containerd[1533]: time="2025-07-06T23:39:58.712221653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 6 23:39:59.364013 systemd-networkd[1429]: calida58abf21ed: Gained IPv6LL
Jul 6 23:39:59.880093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126158569.mount: Deactivated successfully.
Jul 6 23:39:59.898668 containerd[1533]: time="2025-07-06T23:39:59.898597961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:59.899535 containerd[1533]: time="2025-07-06T23:39:59.899493018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581"
Jul 6 23:39:59.900314 containerd[1533]: time="2025-07-06T23:39:59.900274452Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:59.903261 containerd[1533]: time="2025-07-06T23:39:59.903121696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.190860475s"
Jul 6 23:39:59.903261 containerd[1533]: time="2025-07-06T23:39:59.903159784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\""
Jul 6 23:39:59.903428 containerd[1533]: time="2025-07-06T23:39:59.903394550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:39:59.906990 containerd[1533]: time="2025-07-06T23:39:59.906126051Z" level=info msg="CreateContainer within sandbox \"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 6 23:39:59.912902
containerd[1533]: time="2025-07-06T23:39:59.912471988Z" level=info msg="Container 57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:39:59.920150 containerd[1533]: time="2025-07-06T23:39:59.920094937Z" level=info msg="CreateContainer within sandbox \"df267bba635f4920f3194b285f15eb263d720e2389e05276e862a97445cca5e4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33\""
Jul 6 23:39:59.922051 containerd[1533]: time="2025-07-06T23:39:59.921991552Z" level=info msg="StartContainer for \"57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33\""
Jul 6 23:39:59.923340 containerd[1533]: time="2025-07-06T23:39:59.923299211Z" level=info msg="connecting to shim 57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33" address="unix:///run/containerd/s/40d48517b277b80467ea9697be1214c826f6ee5e68312d69d17caa75147de269" protocol=ttrpc version=3
Jul 6 23:39:59.945122 systemd[1]: Started cri-containerd-57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33.scope - libcontainer container 57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33.
Jul 6 23:39:59.988102 containerd[1533]: time="2025-07-06T23:39:59.988064115Z" level=info msg="StartContainer for \"57b24fbade107d478c529ad4fb27e0346667282b25c65c13040d07bdf70c6d33\" returns successfully"
Jul 6 23:40:02.066722 kubelet[2659]: I0706 23:40:02.066662 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:40:02.107626 kubelet[2659]: I0706 23:40:02.106924 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bfd65fb8-bv8hf" podStartSLOduration=3.943039191 podStartE2EDuration="6.106907747s" podCreationTimestamp="2025-07-06 23:39:56 +0000 UTC" firstStartedPulling="2025-07-06 23:39:57.740293786 +0000 UTC m=+33.303448772" lastFinishedPulling="2025-07-06 23:39:59.904162342 +0000 UTC m=+35.467317328" observedRunningTime="2025-07-06 23:40:00.728594369 +0000 UTC m=+36.291749355" watchObservedRunningTime="2025-07-06 23:40:02.106907747 +0000 UTC m=+37.670062733"
Jul 6 23:40:02.474281 systemd-networkd[1429]: vxlan.calico: Link UP
Jul 6 23:40:02.474288 systemd-networkd[1429]: vxlan.calico: Gained carrier
Jul 6 23:40:03.536871 containerd[1533]: time="2025-07-06T23:40:03.536586065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-nh88g,Uid:5aae3a8c-45b2-472c-9498-0c8bd37dfa4d,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:40:03.603057 containerd[1533]: time="2025-07-06T23:40:03.602754106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4xsf,Uid:f22b58de-d418-473b-9edf-74d8e58c1351,Namespace:calico-system,Attempt:0,}"
Jul 6 23:40:03.738753 systemd-networkd[1429]: calidfe5e8aba54: Link UP
Jul 6 23:40:03.739355 systemd-networkd[1429]: calidfe5e8aba54: Gained carrier
Jul 6 23:40:03.752952 containerd[1533]: 2025-07-06 23:40:03.646 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0 calico-apiserver-8894d8485-
calico-apiserver 5aae3a8c-45b2-472c-9498-0c8bd37dfa4d 783 0 2025-07-06 23:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8894d8485 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8894d8485-nh88g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidfe5e8aba54 [] [] }} ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-"
Jul 6 23:40:03.752952 containerd[1533]: 2025-07-06 23:40:03.646 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.752952 containerd[1533]: 2025-07-06 23:40:03.686 [INFO][4305] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" HandleID="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Workload="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.686 [INFO][4305] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" HandleID="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Workload="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c36c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost",
"pod":"calico-apiserver-8894d8485-nh88g", "timestamp":"2025-07-06 23:40:03.686533794 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.686 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.686 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.686 [INFO][4305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.698 [INFO][4305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" host="localhost"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.704 [INFO][4305] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.711 [INFO][4305] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.714 [INFO][4305] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.717 [INFO][4305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:40:03.753180 containerd[1533]: 2025-07-06 23:40:03.717 [INFO][4305] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" host="localhost"
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.719 [INFO][4305] ipam/ipam.go 1764:
Creating new handle: k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.723 [INFO][4305] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" host="localhost"
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4305] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" host="localhost"
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" host="localhost"
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:40:03.753450 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4305] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" HandleID="k8s-pod-network.62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Workload="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.753564 containerd[1533]: 2025-07-06 23:40:03.736 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0", GenerateName:"calico-apiserver-8894d8485-", Namespace:"calico-apiserver", SelfLink:"", UID:"5aae3a8c-45b2-472c-9498-0c8bd37dfa4d", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8894d8485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8894d8485-nh88g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"",
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfe5e8aba54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:40:03.753616 containerd[1533]: 2025-07-06 23:40:03.736 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.753616 containerd[1533]: 2025-07-06 23:40:03.736 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfe5e8aba54 ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.753616 containerd[1533]: 2025-07-06 23:40:03.739 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.753692 containerd[1533]: 2025-07-06 23:40:03.739 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0", GenerateName:"calico-apiserver-8894d8485-",
Namespace:"calico-apiserver", SelfLink:"", UID:"5aae3a8c-45b2-472c-9498-0c8bd37dfa4d", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8894d8485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8", Pod:"calico-apiserver-8894d8485-nh88g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfe5e8aba54", MAC:"ae:71:b7:4f:42:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:40:03.753741 containerd[1533]: 2025-07-06 23:40:03.750 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-nh88g" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--nh88g-eth0"
Jul 6 23:40:03.795622 containerd[1533]: time="2025-07-06T23:40:03.795473453Z" level=info msg="connecting to shim 62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8" address="unix:///run/containerd/s/8a256bb8183328927a72d1e9696ca5e5efcbc0211df7b88775d60e0cfa1c6cf0" namespace=k8s.io protocol=ttrpc
version=3
Jul 6 23:40:03.840121 systemd[1]: Started cri-containerd-62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8.scope - libcontainer container 62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8.
Jul 6 23:40:03.848078 systemd-networkd[1429]: cali089972b5b0e: Link UP
Jul 6 23:40:03.849625 systemd-networkd[1429]: cali089972b5b0e: Gained carrier
Jul 6 23:40:03.861186 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:40:03.872855 containerd[1533]: 2025-07-06 23:40:03.658 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m4xsf-eth0 csi-node-driver- calico-system f22b58de-d418-473b-9edf-74d8e58c1351 685 0 2025-07-06 23:39:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m4xsf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali089972b5b0e [] [] }} ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-"
Jul 6 23:40:03.872855 containerd[1533]: 2025-07-06 23:40:03.662 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.872855 containerd[1533]: 2025-07-06 23:40:03.694 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0
ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" HandleID="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Workload="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.695 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" HandleID="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Workload="localhost-k8s-csi--node--driver--m4xsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012f400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m4xsf", "timestamp":"2025-07-06 23:40:03.694809325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.695 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.732 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.799 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" host="localhost"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.806 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.816 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.820 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.823 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:40:03.873110 containerd[1533]: 2025-07-06 23:40:03.824 [INFO][4312] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" host="localhost"
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.827 [INFO][4312] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.833 [INFO][4312] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" host="localhost"
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.842 [INFO][4312] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26
handle="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" host="localhost"
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.842 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" host="localhost"
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.842 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:40:03.873313 containerd[1533]: 2025-07-06 23:40:03.842 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" HandleID="k8s-pod-network.4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Workload="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.873442 containerd[1533]: 2025-07-06 23:40:03.844 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4xsf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f22b58de-d418-473b-9edf-74d8e58c1351", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s",
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m4xsf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali089972b5b0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:40:03.873500 containerd[1533]: 2025-07-06 23:40:03.845 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.873500 containerd[1533]: 2025-07-06 23:40:03.845 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali089972b5b0e ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.873500 containerd[1533]: 2025-07-06 23:40:03.849 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.873556 containerd[1533]: 2025-07-06 23:40:03.851 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f"
Namespace="calico-system" Pod="csi-node-driver-m4xsf" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4xsf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f22b58de-d418-473b-9edf-74d8e58c1351", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f", Pod:"csi-node-driver-m4xsf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali089972b5b0e", MAC:"da:c8:7d:a2:57:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:40:03.873605 containerd[1533]: 2025-07-06 23:40:03.868 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" Namespace="calico-system" Pod="csi-node-driver-m4xsf"
WorkloadEndpoint="localhost-k8s-csi--node--driver--m4xsf-eth0"
Jul 6 23:40:03.900460 containerd[1533]: time="2025-07-06T23:40:03.900404370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-nh88g,Uid:5aae3a8c-45b2-472c-9498-0c8bd37dfa4d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8\""
Jul 6 23:40:03.903218 containerd[1533]: time="2025-07-06T23:40:03.903165294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:40:03.909006 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL
Jul 6 23:40:03.914787 containerd[1533]: time="2025-07-06T23:40:03.914741803Z" level=info msg="connecting to shim 4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f" address="unix:///run/containerd/s/0b515225706e8c7da2a7c0b644275f04ea06690a951f9bad8e36a69587f889d0" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:40:03.948119 systemd[1]: Started cri-containerd-4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f.scope - libcontainer container 4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f.
Jul 6 23:40:03.960728 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:40:03.975929 containerd[1533]: time="2025-07-06T23:40:03.975859998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4xsf,Uid:f22b58de-d418-473b-9edf-74d8e58c1351,Namespace:calico-system,Attempt:0,} returns sandbox id \"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f\""
Jul 6 23:40:04.538696 containerd[1533]: time="2025-07-06T23:40:04.538649798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hs8w5,Uid:00f8e3fa-c9d3-485d-ba30-e0a92fd58537,Namespace:calico-system,Attempt:0,}"
Jul 6 23:40:04.672508 systemd-networkd[1429]: cali6d4a6bd23dc: Link UP
Jul 6 23:40:04.673040 systemd-networkd[1429]: cali6d4a6bd23dc: Gained carrier
Jul 6 23:40:04.691426 containerd[1533]: 2025-07-06 23:40:04.583 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0 goldmane-768f4c5c69- calico-system 00f8e3fa-c9d3-485d-ba30-e0a92fd58537 785 0 2025-07-06 23:39:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-hs8w5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d4a6bd23dc [] [] }} ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-"
Jul 6 23:40:04.691426 containerd[1533]: 2025-07-06 23:40:04.583 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system"
Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0"
Jul 6 23:40:04.691426 containerd[1533]: 2025-07-06 23:40:04.618 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" HandleID="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Workload="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0"
Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.619 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" HandleID="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Workload="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-hs8w5", "timestamp":"2025-07-06 23:40:04.618693966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.619 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.619 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.619 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.629 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" host="localhost" Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.636 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.641 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.643 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.645 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:04.691657 containerd[1533]: 2025-07-06 23:40:04.646 [INFO][4450] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" host="localhost" Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.647 [INFO][4450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337 Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.651 [INFO][4450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" host="localhost" Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.668 [INFO][4450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" host="localhost" Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.668 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" host="localhost" Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.668 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:40:04.691911 containerd[1533]: 2025-07-06 23:40:04.668 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" HandleID="k8s-pod-network.efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Workload="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" Jul 6 23:40:04.692047 containerd[1533]: 2025-07-06 23:40:04.670 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00f8e3fa-c9d3-485d-ba30-e0a92fd58537", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-hs8w5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d4a6bd23dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:04.692047 containerd[1533]: 2025-07-06 23:40:04.670 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" Jul 6 23:40:04.692126 containerd[1533]: 2025-07-06 23:40:04.670 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d4a6bd23dc ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" Jul 6 23:40:04.692126 containerd[1533]: 2025-07-06 23:40:04.673 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" Jul 6 23:40:04.692170 containerd[1533]: 2025-07-06 23:40:04.674 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"00f8e3fa-c9d3-485d-ba30-e0a92fd58537", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337", Pod:"goldmane-768f4c5c69-hs8w5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d4a6bd23dc", MAC:"3e:14:b9:0d:23:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:04.692216 containerd[1533]: 2025-07-06 23:40:04.687 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" Namespace="calico-system" Pod="goldmane-768f4c5c69-hs8w5" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--hs8w5-eth0" Jul 6 23:40:04.730932 containerd[1533]: time="2025-07-06T23:40:04.730020067Z" level=info msg="connecting to shim 
efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337" address="unix:///run/containerd/s/60de1c43eb76dcd6563e134b8bb7df064903959c6c04bf69887f4c5811154411" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:40:04.757087 systemd[1]: Started cri-containerd-efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337.scope - libcontainer container efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337. Jul 6 23:40:04.772216 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:40:04.798599 containerd[1533]: time="2025-07-06T23:40:04.798473139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-hs8w5,Uid:00f8e3fa-c9d3-485d-ba30-e0a92fd58537,Namespace:calico-system,Attempt:0,} returns sandbox id \"efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337\"" Jul 6 23:40:05.060058 systemd-networkd[1429]: cali089972b5b0e: Gained IPv6LL Jul 6 23:40:05.536164 containerd[1533]: time="2025-07-06T23:40:05.536110420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-shntm,Uid:db18717e-96a8-4ef1-bdd8-3bb06b81a7fa,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:40:05.536296 containerd[1533]: time="2025-07-06T23:40:05.536117381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667774fdfc-fmclb,Uid:10290fb3-8639-44d7-ab20-491341382d8b,Namespace:calico-system,Attempt:0,}" Jul 6 23:40:05.573176 systemd-networkd[1429]: calidfe5e8aba54: Gained IPv6LL Jul 6 23:40:05.707393 systemd-networkd[1429]: cali26de0d3289b: Link UP Jul 6 23:40:05.708043 systemd-networkd[1429]: cali26de0d3289b: Gained carrier Jul 6 23:40:05.727802 containerd[1533]: 2025-07-06 23:40:05.600 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8894d8485--shntm-eth0 calico-apiserver-8894d8485- calico-apiserver 
db18717e-96a8-4ef1-bdd8-3bb06b81a7fa 787 0 2025-07-06 23:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8894d8485 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8894d8485-shntm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali26de0d3289b [] [] }} ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-" Jul 6 23:40:05.727802 containerd[1533]: 2025-07-06 23:40:05.601 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.727802 containerd[1533]: 2025-07-06 23:40:05.642 [INFO][4549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" HandleID="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Workload="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.642 [INFO][4549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" HandleID="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Workload="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000520ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-8894d8485-shntm", "timestamp":"2025-07-06 23:40:05.642412786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.642 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.642 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.642 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.654 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" host="localhost" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.667 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.677 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.683 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.688 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:05.728210 containerd[1533]: 2025-07-06 23:40:05.688 [INFO][4549] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" host="localhost" Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.690 [INFO][4549] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.695 [INFO][4549] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" host="localhost" Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4549] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" host="localhost" Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" host="localhost" Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:40:05.728460 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" HandleID="k8s-pod-network.2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Workload="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.728571 containerd[1533]: 2025-07-06 23:40:05.703 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8894d8485--shntm-eth0", GenerateName:"calico-apiserver-8894d8485-", Namespace:"calico-apiserver", SelfLink:"", UID:"db18717e-96a8-4ef1-bdd8-3bb06b81a7fa", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8894d8485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8894d8485-shntm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26de0d3289b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:05.728635 containerd[1533]: 2025-07-06 23:40:05.703 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.728635 containerd[1533]: 2025-07-06 23:40:05.703 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26de0d3289b ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.728635 containerd[1533]: 2025-07-06 23:40:05.708 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.728696 containerd[1533]: 2025-07-06 23:40:05.710 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8894d8485--shntm-eth0", GenerateName:"calico-apiserver-8894d8485-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"db18717e-96a8-4ef1-bdd8-3bb06b81a7fa", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8894d8485", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b", Pod:"calico-apiserver-8894d8485-shntm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26de0d3289b", MAC:"0a:f0:d3:6f:24:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:05.728759 containerd[1533]: 2025-07-06 23:40:05.721 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" Namespace="calico-apiserver" Pod="calico-apiserver-8894d8485-shntm" WorkloadEndpoint="localhost-k8s-calico--apiserver--8894d8485--shntm-eth0" Jul 6 23:40:05.796045 containerd[1533]: time="2025-07-06T23:40:05.795855096Z" level=info msg="connecting to shim 2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b" address="unix:///run/containerd/s/ccec581bc01342cbd112fd2bf56f2726b0da3ccc83369dc914711cb1532b0463" namespace=k8s.io protocol=ttrpc 
version=3 Jul 6 23:40:05.821488 systemd-networkd[1429]: cali9689e40a2a8: Link UP Jul 6 23:40:05.821690 systemd-networkd[1429]: cali9689e40a2a8: Gained carrier Jul 6 23:40:05.843235 containerd[1533]: 2025-07-06 23:40:05.605 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0 calico-kube-controllers-667774fdfc- calico-system 10290fb3-8639-44d7-ab20-491341382d8b 779 0 2025-07-06 23:39:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:667774fdfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-667774fdfc-fmclb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9689e40a2a8 [] [] }} ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-" Jul 6 23:40:05.843235 containerd[1533]: 2025-07-06 23:40:05.605 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.843235 containerd[1533]: 2025-07-06 23:40:05.649 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" HandleID="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Workload="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.843431 
containerd[1533]: 2025-07-06 23:40:05.650 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" HandleID="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Workload="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-667774fdfc-fmclb", "timestamp":"2025-07-06 23:40:05.649924993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.650 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.701 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.756 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" host="localhost" Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.774 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.787 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.789 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.792 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:05.843431 containerd[1533]: 2025-07-06 23:40:05.792 [INFO][4554] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" host="localhost" Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.794 [INFO][4554] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.801 [INFO][4554] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" host="localhost" Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.813 [INFO][4554] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" host="localhost" Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.813 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" host="localhost" Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.813 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:40:05.845099 containerd[1533]: 2025-07-06 23:40:05.813 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" HandleID="k8s-pod-network.12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Workload="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.845215 containerd[1533]: 2025-07-06 23:40:05.818 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0", GenerateName:"calico-kube-controllers-667774fdfc-", Namespace:"calico-system", SelfLink:"", UID:"10290fb3-8639-44d7-ab20-491341382d8b", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667774fdfc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-667774fdfc-fmclb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9689e40a2a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:05.845276 containerd[1533]: 2025-07-06 23:40:05.818 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.845276 containerd[1533]: 2025-07-06 23:40:05.818 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9689e40a2a8 ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.845276 containerd[1533]: 2025-07-06 23:40:05.821 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.845334 containerd[1533]: 2025-07-06 
23:40:05.821 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0", GenerateName:"calico-kube-controllers-667774fdfc-", Namespace:"calico-system", SelfLink:"", UID:"10290fb3-8639-44d7-ab20-491341382d8b", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"667774fdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef", Pod:"calico-kube-controllers-667774fdfc-fmclb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9689e40a2a8", MAC:"36:06:3d:ee:79:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:05.845398 containerd[1533]: 2025-07-06 
23:40:05.835 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" Namespace="calico-system" Pod="calico-kube-controllers-667774fdfc-fmclb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--667774fdfc--fmclb-eth0" Jul 6 23:40:05.854279 systemd[1]: Started cri-containerd-2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b.scope - libcontainer container 2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b. Jul 6 23:40:05.897858 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:40:05.923936 containerd[1533]: time="2025-07-06T23:40:05.923557894Z" level=info msg="connecting to shim 12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef" address="unix:///run/containerd/s/f49f96d18c41e1cd0c9a92f30632fcd982e4c9094dfab05c6f734f43dc1d94fe" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:40:05.953504 containerd[1533]: time="2025-07-06T23:40:05.953466899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8894d8485-shntm,Uid:db18717e-96a8-4ef1-bdd8-3bb06b81a7fa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b\"" Jul 6 23:40:05.971128 systemd[1]: Started cri-containerd-12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef.scope - libcontainer container 12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef. 
Jul 6 23:40:05.991230 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:40:06.029117 containerd[1533]: time="2025-07-06T23:40:06.029074490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-667774fdfc-fmclb,Uid:10290fb3-8639-44d7-ab20-491341382d8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef\"" Jul 6 23:40:06.171446 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:40146.service - OpenSSH per-connection server daemon (10.0.0.1:40146). Jul 6 23:40:06.235059 containerd[1533]: time="2025-07-06T23:40:06.235006962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:06.235604 containerd[1533]: time="2025-07-06T23:40:06.235570693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:40:06.236276 containerd[1533]: time="2025-07-06T23:40:06.236246763Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:06.238589 containerd[1533]: time="2025-07-06T23:40:06.238552816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:06.239664 containerd[1533]: time="2025-07-06T23:40:06.239552497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.336343196s" Jul 6 23:40:06.239664 containerd[1533]: time="2025-07-06T23:40:06.239592184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:40:06.241648 containerd[1533]: time="2025-07-06T23:40:06.241069583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:40:06.245885 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 40146 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:06.246859 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:06.255746 containerd[1533]: time="2025-07-06T23:40:06.255710031Z" level=info msg="CreateContainer within sandbox \"62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:40:06.257163 systemd-logind[1504]: New session 8 of user core. Jul 6 23:40:06.265140 containerd[1533]: time="2025-07-06T23:40:06.265085988Z" level=info msg="Container 9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:06.272347 containerd[1533]: time="2025-07-06T23:40:06.272295874Z" level=info msg="CreateContainer within sandbox \"62f9f9cbe663a54555abaabf5726a7bd1b67cec6f736310d9013bb58450b2dd8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe\"" Jul 6 23:40:06.273594 containerd[1533]: time="2025-07-06T23:40:06.272817959Z" level=info msg="StartContainer for \"9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe\"" Jul 6 23:40:06.275109 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:40:06.275293 containerd[1533]: time="2025-07-06T23:40:06.275249552Z" level=info msg="connecting to shim 9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe" address="unix:///run/containerd/s/8a256bb8183328927a72d1e9696ca5e5efcbc0211df7b88775d60e0cfa1c6cf0" protocol=ttrpc version=3 Jul 6 23:40:06.304114 systemd[1]: Started cri-containerd-9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe.scope - libcontainer container 9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe. Jul 6 23:40:06.347936 containerd[1533]: time="2025-07-06T23:40:06.347858737Z" level=info msg="StartContainer for \"9611c9bd1fa2611f5942d35d936dd13530212ccd3aacbf47513b61aee4ab05fe\" returns successfully" Jul 6 23:40:06.549660 sshd[4691]: Connection closed by 10.0.0.1 port 40146 Jul 6 23:40:06.550345 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:06.554809 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:40146.service: Deactivated successfully. Jul 6 23:40:06.558244 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:40:06.559174 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:40:06.562749 systemd-logind[1504]: Removed session 8. 
Jul 6 23:40:06.660009 systemd-networkd[1429]: cali6d4a6bd23dc: Gained IPv6LL Jul 6 23:40:06.808604 kubelet[2659]: I0706 23:40:06.807541 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8894d8485-nh88g" podStartSLOduration=25.468361224 podStartE2EDuration="27.80718876s" podCreationTimestamp="2025-07-06 23:39:39 +0000 UTC" firstStartedPulling="2025-07-06 23:40:03.901819218 +0000 UTC m=+39.464974164" lastFinishedPulling="2025-07-06 23:40:06.240646714 +0000 UTC m=+41.803801700" observedRunningTime="2025-07-06 23:40:06.805618906 +0000 UTC m=+42.368773852" watchObservedRunningTime="2025-07-06 23:40:06.80718876 +0000 UTC m=+42.370343746" Jul 6 23:40:07.275344 containerd[1533]: time="2025-07-06T23:40:07.275287193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:07.276831 containerd[1533]: time="2025-07-06T23:40:07.276795191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:40:07.277809 containerd[1533]: time="2025-07-06T23:40:07.277765424Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:07.279889 containerd[1533]: time="2025-07-06T23:40:07.279840632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:07.281338 containerd[1533]: time="2025-07-06T23:40:07.281294821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.040180792s" Jul 6 23:40:07.281379 containerd[1533]: time="2025-07-06T23:40:07.281344869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:40:07.283918 containerd[1533]: time="2025-07-06T23:40:07.282840025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:40:07.284418 containerd[1533]: time="2025-07-06T23:40:07.284372467Z" level=info msg="CreateContainer within sandbox \"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:40:07.311108 containerd[1533]: time="2025-07-06T23:40:07.311064359Z" level=info msg="Container c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:07.314614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822331253.mount: Deactivated successfully. 
Jul 6 23:40:07.320976 containerd[1533]: time="2025-07-06T23:40:07.320854984Z" level=info msg="CreateContainer within sandbox \"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695\"" Jul 6 23:40:07.321644 containerd[1533]: time="2025-07-06T23:40:07.321609223Z" level=info msg="StartContainer for \"c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695\"" Jul 6 23:40:07.323032 containerd[1533]: time="2025-07-06T23:40:07.322985680Z" level=info msg="connecting to shim c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695" address="unix:///run/containerd/s/0b515225706e8c7da2a7c0b644275f04ea06690a951f9bad8e36a69587f889d0" protocol=ttrpc version=3 Jul 6 23:40:07.349107 systemd[1]: Started cri-containerd-c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695.scope - libcontainer container c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695. 
Jul 6 23:40:07.364179 systemd-networkd[1429]: cali9689e40a2a8: Gained IPv6LL Jul 6 23:40:07.392368 containerd[1533]: time="2025-07-06T23:40:07.392325222Z" level=info msg="StartContainer for \"c6eec5716bfdc62a5db3fce3ed2e15ada87835c7cb6bb9b394f7d4f99dc93695\" returns successfully" Jul 6 23:40:07.518741 kubelet[2659]: I0706 23:40:07.518589 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:40:07.536625 containerd[1533]: time="2025-07-06T23:40:07.536290259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wchll,Uid:b21cb156-a9a7-4515-9788-bb30ff372348,Namespace:kube-system,Attempt:0,}" Jul 6 23:40:07.556015 systemd-networkd[1429]: cali26de0d3289b: Gained IPv6LL Jul 6 23:40:07.683284 systemd-networkd[1429]: cali32ec3dde013: Link UP Jul 6 23:40:07.684630 systemd-networkd[1429]: cali32ec3dde013: Gained carrier Jul 6 23:40:07.700423 containerd[1533]: 2025-07-06 23:40:07.584 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wchll-eth0 coredns-668d6bf9bc- kube-system b21cb156-a9a7-4515-9788-bb30ff372348 782 0 2025-07-06 23:39:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wchll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32ec3dde013 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-" Jul 6 23:40:07.700423 containerd[1533]: 2025-07-06 23:40:07.584 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.700423 containerd[1533]: 2025-07-06 23:40:07.630 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" HandleID="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Workload="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.630 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" HandleID="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Workload="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wchll", "timestamp":"2025-07-06 23:40:07.630304294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.630 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.630 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.630 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.642 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" host="localhost" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.646 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.650 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.653 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.655 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:07.700660 containerd[1533]: 2025-07-06 23:40:07.655 [INFO][4805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" host="localhost" Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.662 [INFO][4805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.667 [INFO][4805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" host="localhost" Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.676 [INFO][4805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" host="localhost" Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.676 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" host="localhost" Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.676 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:40:07.700876 containerd[1533]: 2025-07-06 23:40:07.676 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" HandleID="k8s-pod-network.16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Workload="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.701456 containerd[1533]: 2025-07-06 23:40:07.679 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wchll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b21cb156-a9a7-4515-9788-bb30ff372348", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wchll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32ec3dde013", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:07.701568 containerd[1533]: 2025-07-06 23:40:07.679 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.701568 containerd[1533]: 2025-07-06 23:40:07.679 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32ec3dde013 ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.701568 containerd[1533]: 2025-07-06 23:40:07.685 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.703010 containerd[1533]: 2025-07-06 23:40:07.685 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wchll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b21cb156-a9a7-4515-9788-bb30ff372348", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f", Pod:"coredns-668d6bf9bc-wchll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32ec3dde013", MAC:"d6:c4:8e:d6:00:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:07.703010 containerd[1533]: 2025-07-06 23:40:07.696 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wchll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wchll-eth0" Jul 6 23:40:07.759384 containerd[1533]: time="2025-07-06T23:40:07.759333295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\" id:\"31c0b21fe85f2aef162bdfb8f0d78bbb6a9badd156047d932e5b5dede30b814a\" pid:4798 exited_at:{seconds:1751845207 nanos:759057131}" Jul 6 23:40:07.767622 containerd[1533]: time="2025-07-06T23:40:07.767177732Z" level=info msg="connecting to shim 16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f" address="unix:///run/containerd/s/0433a46be1ad558a11c37cda368e295d5c13132298b400efcc50bb333aa013f0" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:40:07.804142 systemd[1]: Started cri-containerd-16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f.scope - libcontainer container 16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f. 
Jul 6 23:40:07.828680 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:40:07.846344 kubelet[2659]: I0706 23:40:07.846258 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:40:07.867554 containerd[1533]: time="2025-07-06T23:40:07.867515606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wchll,Uid:b21cb156-a9a7-4515-9788-bb30ff372348,Namespace:kube-system,Attempt:0,} returns sandbox id \"16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f\"" Jul 6 23:40:07.874082 containerd[1533]: time="2025-07-06T23:40:07.874016591Z" level=info msg="CreateContainer within sandbox \"16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:40:07.889194 containerd[1533]: time="2025-07-06T23:40:07.886098058Z" level=info msg="Container cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:07.895976 containerd[1533]: time="2025-07-06T23:40:07.895928689Z" level=info msg="CreateContainer within sandbox \"16c9fe374ef415bc2d532b93648883eb659a39712cb06365d81cfb7c4136c79f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d\"" Jul 6 23:40:07.897623 containerd[1533]: time="2025-07-06T23:40:07.897590751Z" level=info msg="StartContainer for \"cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d\"" Jul 6 23:40:07.898537 containerd[1533]: time="2025-07-06T23:40:07.898510497Z" level=info msg="connecting to shim cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d" address="unix:///run/containerd/s/0433a46be1ad558a11c37cda368e295d5c13132298b400efcc50bb333aa013f0" protocol=ttrpc version=3 Jul 6 23:40:07.909449 containerd[1533]: time="2025-07-06T23:40:07.909363849Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\" id:\"a3c58354c13d2e04c3bd56828a572489c9f22941c18967e23c30097951542ed9\" pid:4884 exited_at:{seconds:1751845207 nanos:909051440}" Jul 6 23:40:07.927074 systemd[1]: Started cri-containerd-cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d.scope - libcontainer container cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d. Jul 6 23:40:07.954602 containerd[1533]: time="2025-07-06T23:40:07.954567382Z" level=info msg="StartContainer for \"cf602c77a3b536ffb0272616f0d08657435ad6438ea18b9520b199dfde78650d\" returns successfully" Jul 6 23:40:08.537708 containerd[1533]: time="2025-07-06T23:40:08.537589146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw5kv,Uid:9e7c8d04-4b45-4f3a-8697-5eb725212899,Namespace:kube-system,Attempt:0,}" Jul 6 23:40:08.694425 systemd-networkd[1429]: calie36af3186ae: Link UP Jul 6 23:40:08.694594 systemd-networkd[1429]: calie36af3186ae: Gained carrier Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.597 [INFO][4950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0 coredns-668d6bf9bc- kube-system 9e7c8d04-4b45-4f3a-8697-5eb725212899 784 0 2025-07-06 23:39:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sw5kv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie36af3186ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 
23:40:08.598 [INFO][4950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.629 [INFO][4964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" HandleID="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Workload="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.629 [INFO][4964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" HandleID="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Workload="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sw5kv", "timestamp":"2025-07-06 23:40:08.629625727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.629 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.630 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.630 [INFO][4964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.641 [INFO][4964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.648 [INFO][4964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.657 [INFO][4964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.661 [INFO][4964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.668 [INFO][4964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.668 [INFO][4964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.672 [INFO][4964] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80 Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.677 [INFO][4964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.686 [INFO][4964] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.686 [INFO][4964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" host="localhost" Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.686 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:40:08.720414 containerd[1533]: 2025-07-06 23:40:08.686 [INFO][4964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" HandleID="k8s-pod-network.b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Workload="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.689 [INFO][4950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e7c8d04-4b45-4f3a-8697-5eb725212899", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sw5kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36af3186ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.690 [INFO][4950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.690 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie36af3186ae ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.694 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.694 [INFO][4950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e7c8d04-4b45-4f3a-8697-5eb725212899", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80", Pod:"coredns-668d6bf9bc-sw5kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36af3186ae", MAC:"fe:92:5e:bc:65:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:40:08.721038 containerd[1533]: 2025-07-06 23:40:08.715 [INFO][4950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw5kv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sw5kv-eth0" Jul 6 23:40:08.773975 containerd[1533]: time="2025-07-06T23:40:08.773889035Z" level=info msg="connecting to shim b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80" address="unix:///run/containerd/s/314bb41e2d54be44d2fb645be142af0a28b3f40a1fa80205fb6db5fca29d9835" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:40:08.827097 systemd[1]: Started cri-containerd-b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80.scope - libcontainer container b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80. 
Jul 6 23:40:08.851783 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:40:08.892806 kubelet[2659]: I0706 23:40:08.892695 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wchll" podStartSLOduration=37.892681779 podStartE2EDuration="37.892681779s" podCreationTimestamp="2025-07-06 23:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:40:08.875435881 +0000 UTC m=+44.438590867" watchObservedRunningTime="2025-07-06 23:40:08.892681779 +0000 UTC m=+44.455836765" Jul 6 23:40:08.896867 containerd[1533]: time="2025-07-06T23:40:08.896826057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw5kv,Uid:9e7c8d04-4b45-4f3a-8697-5eb725212899,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80\"" Jul 6 23:40:08.903110 containerd[1533]: time="2025-07-06T23:40:08.903066819Z" level=info msg="CreateContainer within sandbox \"b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:40:08.926429 containerd[1533]: time="2025-07-06T23:40:08.926381451Z" level=info msg="Container 063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:08.932795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515451988.mount: Deactivated successfully. 
Jul 6 23:40:08.937646 containerd[1533]: time="2025-07-06T23:40:08.937594259Z" level=info msg="CreateContainer within sandbox \"b5127fe65862a43731d1a0e711423d09dadcb76dfffe1642c3fff30f62ce5e80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e\"" Jul 6 23:40:08.938399 containerd[1533]: time="2025-07-06T23:40:08.938366178Z" level=info msg="StartContainer for \"063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e\"" Jul 6 23:40:08.939344 containerd[1533]: time="2025-07-06T23:40:08.939310363Z" level=info msg="connecting to shim 063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e" address="unix:///run/containerd/s/314bb41e2d54be44d2fb645be142af0a28b3f40a1fa80205fb6db5fca29d9835" protocol=ttrpc version=3 Jul 6 23:40:08.968052 systemd[1]: Started cri-containerd-063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e.scope - libcontainer container 063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e. 
Jul 6 23:40:09.031719 containerd[1533]: time="2025-07-06T23:40:09.031676088Z" level=info msg="StartContainer for \"063c14418317f88a547ef02db72153dc79346437c28640bc456011fc4f476e7e\" returns successfully" Jul 6 23:40:09.419188 containerd[1533]: time="2025-07-06T23:40:09.419136917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:09.420280 containerd[1533]: time="2025-07-06T23:40:09.420244884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:40:09.421444 containerd[1533]: time="2025-07-06T23:40:09.421412620Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:09.423644 containerd[1533]: time="2025-07-06T23:40:09.423609150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:09.424668 containerd[1533]: time="2025-07-06T23:40:09.424541091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.14166966s" Jul 6 23:40:09.424668 containerd[1533]: time="2025-07-06T23:40:09.424578816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:40:09.425675 containerd[1533]: time="2025-07-06T23:40:09.425651658Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:40:09.427976 containerd[1533]: time="2025-07-06T23:40:09.427938922Z" level=info msg="CreateContainer within sandbox \"efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:40:09.434660 containerd[1533]: time="2025-07-06T23:40:09.434613168Z" level=info msg="Container ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:09.440415 containerd[1533]: time="2025-07-06T23:40:09.440366314Z" level=info msg="CreateContainer within sandbox \"efaff11b00139f84a7e45e992514a1eaf565310ecfca2200f2d0855f7c2d8337\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\"" Jul 6 23:40:09.440929 containerd[1533]: time="2025-07-06T23:40:09.440874070Z" level=info msg="StartContainer for \"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\"" Jul 6 23:40:09.443058 containerd[1533]: time="2025-07-06T23:40:09.443021954Z" level=info msg="connecting to shim ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92" address="unix:///run/containerd/s/60de1c43eb76dcd6563e134b8bb7df064903959c6c04bf69887f4c5811154411" protocol=ttrpc version=3 Jul 6 23:40:09.462090 systemd[1]: Started cri-containerd-ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92.scope - libcontainer container ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92. 
Jul 6 23:40:09.475992 systemd-networkd[1429]: cali32ec3dde013: Gained IPv6LL Jul 6 23:40:09.507020 containerd[1533]: time="2025-07-06T23:40:09.506970504Z" level=info msg="StartContainer for \"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" returns successfully" Jul 6 23:40:09.639669 containerd[1533]: time="2025-07-06T23:40:09.639157851Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:09.640172 containerd[1533]: time="2025-07-06T23:40:09.640144759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:40:09.641894 containerd[1533]: time="2025-07-06T23:40:09.641848136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 216.045896ms" Jul 6 23:40:09.642015 containerd[1533]: time="2025-07-06T23:40:09.642000039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:40:09.643075 containerd[1533]: time="2025-07-06T23:40:09.643050957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:40:09.644330 containerd[1533]: time="2025-07-06T23:40:09.644276502Z" level=info msg="CreateContainer within sandbox \"2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:40:09.650940 containerd[1533]: time="2025-07-06T23:40:09.650897299Z" level=info msg="Container 2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934: CDI devices 
from CRI Config.CDIDevices: []" Jul 6 23:40:09.659725 containerd[1533]: time="2025-07-06T23:40:09.659678941Z" level=info msg="CreateContainer within sandbox \"2cf54531a528751b13136a68920cd482ff7e13a54bb2ab2a974c518eb119e27b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934\"" Jul 6 23:40:09.660203 containerd[1533]: time="2025-07-06T23:40:09.660164894Z" level=info msg="StartContainer for \"2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934\"" Jul 6 23:40:09.662234 containerd[1533]: time="2025-07-06T23:40:09.662167996Z" level=info msg="connecting to shim 2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934" address="unix:///run/containerd/s/ccec581bc01342cbd112fd2bf56f2726b0da3ccc83369dc914711cb1532b0463" protocol=ttrpc version=3 Jul 6 23:40:09.687071 systemd[1]: Started cri-containerd-2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934.scope - libcontainer container 2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934. Jul 6 23:40:09.727815 containerd[1533]: time="2025-07-06T23:40:09.727695784Z" level=info msg="StartContainer for \"2747173f8312a3c7e259cb3011462b7e9f469f38d75e574fee3c502896fda934\" returns successfully" Jul 6 23:40:09.784698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873293239.mount: Deactivated successfully. 
Jul 6 23:40:09.874937 kubelet[2659]: I0706 23:40:09.874405 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-hs8w5" podStartSLOduration=21.2493688 podStartE2EDuration="25.874387995s" podCreationTimestamp="2025-07-06 23:39:44 +0000 UTC" firstStartedPulling="2025-07-06 23:40:04.800433473 +0000 UTC m=+40.363588459" lastFinishedPulling="2025-07-06 23:40:09.425452668 +0000 UTC m=+44.988607654" observedRunningTime="2025-07-06 23:40:09.872526595 +0000 UTC m=+45.435681581" watchObservedRunningTime="2025-07-06 23:40:09.874387995 +0000 UTC m=+45.437542981" Jul 6 23:40:09.889028 kubelet[2659]: I0706 23:40:09.887600 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sw5kv" podStartSLOduration=38.887583822 podStartE2EDuration="38.887583822s" podCreationTimestamp="2025-07-06 23:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:40:09.887295339 +0000 UTC m=+45.450450325" watchObservedRunningTime="2025-07-06 23:40:09.887583822 +0000 UTC m=+45.450738808" Jul 6 23:40:09.917287 kubelet[2659]: I0706 23:40:09.917213 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8894d8485-shntm" podStartSLOduration=27.231097319 podStartE2EDuration="30.917193601s" podCreationTimestamp="2025-07-06 23:39:39 +0000 UTC" firstStartedPulling="2025-07-06 23:40:05.956756885 +0000 UTC m=+41.519911871" lastFinishedPulling="2025-07-06 23:40:09.642853167 +0000 UTC m=+45.206008153" observedRunningTime="2025-07-06 23:40:09.913948993 +0000 UTC m=+45.477103979" watchObservedRunningTime="2025-07-06 23:40:09.917193601 +0000 UTC m=+45.480348587" Jul 6 23:40:09.994923 containerd[1533]: time="2025-07-06T23:40:09.994438994Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" id:\"0c3c8afb345f266e725a1799632a80c26c7fc60af4d51fe13c5326934ed9123c\" pid:5162 exit_status:1 exited_at:{seconds:1751845209 nanos:993896232}" Jul 6 23:40:10.564436 systemd-networkd[1429]: calie36af3186ae: Gained IPv6LL Jul 6 23:40:10.879332 kubelet[2659]: I0706 23:40:10.878986 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:40:10.976958 containerd[1533]: time="2025-07-06T23:40:10.976752856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" id:\"a331693466680380dc3c7483115d200c8f77f6288757465e63b56b6ead035bf9\" pid:5188 exit_status:1 exited_at:{seconds:1751845210 nanos:975802236}" Jul 6 23:40:11.569164 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:40154.service - OpenSSH per-connection server daemon (10.0.0.1:40154). Jul 6 23:40:11.571926 containerd[1533]: time="2025-07-06T23:40:11.571865705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:11.572892 containerd[1533]: time="2025-07-06T23:40:11.572844246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:40:11.574436 containerd[1533]: time="2025-07-06T23:40:11.573668885Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:11.577312 containerd[1533]: time="2025-07-06T23:40:11.577267724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:11.577943 containerd[1533]: time="2025-07-06T23:40:11.577910057Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.934684434s" Jul 6 23:40:11.577987 containerd[1533]: time="2025-07-06T23:40:11.577945382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:40:11.581639 containerd[1533]: time="2025-07-06T23:40:11.581415443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:40:11.592631 containerd[1533]: time="2025-07-06T23:40:11.592219561Z" level=info msg="CreateContainer within sandbox \"12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:40:11.604125 containerd[1533]: time="2025-07-06T23:40:11.604074912Z" level=info msg="Container 371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:11.629828 containerd[1533]: time="2025-07-06T23:40:11.629776820Z" level=info msg="CreateContainer within sandbox \"12b33799ddff674929bd87b32ee3a9cfa8a4e56a802f25f973d05c256908d7ef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\"" Jul 6 23:40:11.633055 containerd[1533]: time="2025-07-06T23:40:11.632810137Z" level=info msg="StartContainer for \"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\"" Jul 6 23:40:11.635330 containerd[1533]: time="2025-07-06T23:40:11.635298336Z" level=info msg="connecting to shim 
371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03" address="unix:///run/containerd/s/f49f96d18c41e1cd0c9a92f30632fcd982e4c9094dfab05c6f734f43dc1d94fe" protocol=ttrpc version=3 Jul 6 23:40:11.653811 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 40154 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:11.658353 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:11.668064 systemd[1]: Started cri-containerd-371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03.scope - libcontainer container 371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03. Jul 6 23:40:11.671689 systemd-logind[1504]: New session 9 of user core. Jul 6 23:40:11.679058 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:40:11.723245 containerd[1533]: time="2025-07-06T23:40:11.722960782Z" level=info msg="StartContainer for \"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\" returns successfully" Jul 6 23:40:11.904278 kubelet[2659]: I0706 23:40:11.902247 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-667774fdfc-fmclb" podStartSLOduration=22.354759271 podStartE2EDuration="27.902228764s" podCreationTimestamp="2025-07-06 23:39:44 +0000 UTC" firstStartedPulling="2025-07-06 23:40:06.03117803 +0000 UTC m=+41.594333016" lastFinishedPulling="2025-07-06 23:40:11.578647523 +0000 UTC m=+47.141802509" observedRunningTime="2025-07-06 23:40:11.901255224 +0000 UTC m=+47.464410210" watchObservedRunningTime="2025-07-06 23:40:11.902228764 +0000 UTC m=+47.465383750" Jul 6 23:40:11.966828 sshd[5230]: Connection closed by 10.0.0.1 port 40154 Jul 6 23:40:11.967692 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:11.974250 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:40154.service: Deactivated successfully. 
Jul 6 23:40:11.980435 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:40:11.985704 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:40:11.988128 systemd-logind[1504]: Removed session 9. Jul 6 23:40:12.038036 containerd[1533]: time="2025-07-06T23:40:12.037949517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" id:\"304165ec53465a28ca82902147c8af8bc4326c7eb6b14300f49448c1a6076ac2\" pid:5276 exit_status:1 exited_at:{seconds:1751845212 nanos:37622791}" Jul 6 23:40:12.748000 containerd[1533]: time="2025-07-06T23:40:12.747943503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:12.748560 containerd[1533]: time="2025-07-06T23:40:12.748532587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:40:12.749469 containerd[1533]: time="2025-07-06T23:40:12.749431234Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:12.751900 containerd[1533]: time="2025-07-06T23:40:12.751751802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:40:12.752609 containerd[1533]: time="2025-07-06T23:40:12.752576439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.171110868s" Jul 6 23:40:12.752609 containerd[1533]: time="2025-07-06T23:40:12.752609843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:40:12.755686 containerd[1533]: time="2025-07-06T23:40:12.755589145Z" level=info msg="CreateContainer within sandbox \"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:40:12.766785 containerd[1533]: time="2025-07-06T23:40:12.766730840Z" level=info msg="Container 110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:40:12.775549 containerd[1533]: time="2025-07-06T23:40:12.775506601Z" level=info msg="CreateContainer within sandbox \"4252c50092a2ac4bfe1ed5315078d160826722b337b56605345297d672346c9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa\"" Jul 6 23:40:12.776367 containerd[1533]: time="2025-07-06T23:40:12.776237664Z" level=info msg="StartContainer for \"110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa\"" Jul 6 23:40:12.778404 containerd[1533]: time="2025-07-06T23:40:12.778365445Z" level=info msg="connecting to shim 110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa" address="unix:///run/containerd/s/0b515225706e8c7da2a7c0b644275f04ea06690a951f9bad8e36a69587f889d0" protocol=ttrpc version=3 Jul 6 23:40:12.799085 systemd[1]: Started cri-containerd-110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa.scope - libcontainer container 110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa. 
Jul 6 23:40:12.924654 containerd[1533]: time="2025-07-06T23:40:12.924599041Z" level=info msg="StartContainer for \"110a73031643bb3e3e4089c0f806ad43c43ac54ca2a471060d62d9c47bb2c1fa\" returns successfully" Jul 6 23:40:12.976642 containerd[1533]: time="2025-07-06T23:40:12.976588592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\" id:\"590414edd6206469eb6c6e74598d9e57794bed3c1885a2d4f7a425c113cd2901\" pid:5351 exited_at:{seconds:1751845212 nanos:964382706}" Jul 6 23:40:13.618224 kubelet[2659]: I0706 23:40:13.618180 2659 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:40:13.620616 kubelet[2659]: I0706 23:40:13.620589 2659 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:40:16.981922 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:47356.service - OpenSSH per-connection server daemon (10.0.0.1:47356). Jul 6 23:40:17.064399 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 47356 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:17.066118 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:17.071567 systemd-logind[1504]: New session 10 of user core. Jul 6 23:40:17.081082 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:40:17.312333 sshd[5369]: Connection closed by 10.0.0.1 port 47356 Jul 6 23:40:17.312706 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:17.324349 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:47356.service: Deactivated successfully. Jul 6 23:40:17.326823 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:40:17.327959 systemd-logind[1504]: Session 10 logged out. 
Waiting for processes to exit. Jul 6 23:40:17.330871 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:47372.service - OpenSSH per-connection server daemon (10.0.0.1:47372). Jul 6 23:40:17.332174 systemd-logind[1504]: Removed session 10. Jul 6 23:40:17.401478 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 47372 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:17.403016 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:17.407790 systemd-logind[1504]: New session 11 of user core. Jul 6 23:40:17.412072 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:40:17.637036 sshd[5386]: Connection closed by 10.0.0.1 port 47372 Jul 6 23:40:17.636358 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:17.644467 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:47372.service: Deactivated successfully. Jul 6 23:40:17.646525 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:40:17.648721 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:40:17.654967 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:47374.service - OpenSSH per-connection server daemon (10.0.0.1:47374). Jul 6 23:40:17.662227 systemd-logind[1504]: Removed session 11. Jul 6 23:40:17.705037 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 47374 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:17.706384 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:17.710952 systemd-logind[1504]: New session 12 of user core. Jul 6 23:40:17.718050 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:40:17.968762 sshd[5400]: Connection closed by 10.0.0.1 port 47374 Jul 6 23:40:17.969111 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:17.974228 systemd-logind[1504]: Session 12 logged out. 
Waiting for processes to exit. Jul 6 23:40:17.974420 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:47374.service: Deactivated successfully. Jul 6 23:40:17.978327 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:40:17.980597 systemd-logind[1504]: Removed session 12. Jul 6 23:40:22.894977 containerd[1533]: time="2025-07-06T23:40:22.894771108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\" id:\"2cd0a0f609a173fd5a4352947954b2e51fb134eb8fa11a7083833c9b5cf1b18f\" pid:5438 exited_at:{seconds:1751845222 nanos:894541121}" Jul 6 23:40:22.988105 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:55918.service - OpenSSH per-connection server daemon (10.0.0.1:55918). Jul 6 23:40:23.048742 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 55918 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:23.050260 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:23.055224 systemd-logind[1504]: New session 13 of user core. Jul 6 23:40:23.065097 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:40:23.204271 sshd[5451]: Connection closed by 10.0.0.1 port 55918 Jul 6 23:40:23.204769 sshd-session[5449]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:23.212998 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:55918.service: Deactivated successfully. Jul 6 23:40:23.214862 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:40:23.219405 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:40:23.222732 systemd-logind[1504]: Removed session 13. 
Jul 6 23:40:26.147445 containerd[1533]: time="2025-07-06T23:40:26.147345675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" id:\"961338ea6cd0263267fde5d83008c1b8d610f7235e51b700e7077391f9c09687\" pid:5477 exited_at:{seconds:1751845226 nanos:147091365}" Jul 6 23:40:28.222652 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:55922.service - OpenSSH per-connection server daemon (10.0.0.1:55922). Jul 6 23:40:28.282671 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 55922 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:28.284178 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:28.288325 systemd-logind[1504]: New session 14 of user core. Jul 6 23:40:28.296068 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:40:28.422794 sshd[5491]: Connection closed by 10.0.0.1 port 55922 Jul 6 23:40:28.424037 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Jul 6 23:40:28.428850 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:55922.service: Deactivated successfully. Jul 6 23:40:28.431113 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:40:28.432763 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:40:28.434325 systemd-logind[1504]: Removed session 14. Jul 6 23:40:33.438599 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:47546.service - OpenSSH per-connection server daemon (10.0.0.1:47546). Jul 6 23:40:33.495513 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo Jul 6 23:40:33.496979 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:40:33.501193 systemd-logind[1504]: New session 15 of user core. Jul 6 23:40:33.514094 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 6 23:40:33.669040 sshd[5511]: Connection closed by 10.0.0.1 port 47546
Jul 6 23:40:33.668938 sshd-session[5509]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:33.673357 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:47546.service: Deactivated successfully.
Jul 6 23:40:33.679196 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:40:33.681056 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:40:33.683803 systemd-logind[1504]: Removed session 15.
Jul 6 23:40:37.595761 kubelet[2659]: I0706 23:40:37.595519 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:40:37.630896 kubelet[2659]: I0706 23:40:37.630723 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m4xsf" podStartSLOduration=44.853714328 podStartE2EDuration="53.630528739s" podCreationTimestamp="2025-07-06 23:39:44 +0000 UTC" firstStartedPulling="2025-07-06 23:40:03.977143223 +0000 UTC m=+39.540298169" lastFinishedPulling="2025-07-06 23:40:12.753957634 +0000 UTC m=+48.317112580" observedRunningTime="2025-07-06 23:40:13.940396095 +0000 UTC m=+49.503551081" watchObservedRunningTime="2025-07-06 23:40:37.630528739 +0000 UTC m=+73.193683725"
Jul 6 23:40:37.863752 containerd[1533]: time="2025-07-06T23:40:37.863638320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad8f3802fa1f66d11433774a82a4d61cf478442561c7ad63fcda422723d59ff1\" id:\"99174ff5e2475baca194593439a2df8e1e827dd4fe7ccd703c06117a4da074cf\" pid:5538 exited_at:{seconds:1751845237 nanos:863329549}"
Jul 6 23:40:38.693100 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:47554.service - OpenSSH per-connection server daemon (10.0.0.1:47554).
Jul 6 23:40:38.744755 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 47554 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:38.748595 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:38.756249 systemd-logind[1504]: New session 16 of user core.
Jul 6 23:40:38.765039 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:40:38.924202 sshd[5553]: Connection closed by 10.0.0.1 port 47554
Jul 6 23:40:38.923715 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:38.932472 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:47554.service: Deactivated successfully.
Jul 6 23:40:38.934360 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:40:38.935753 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:40:38.939918 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:47562.service - OpenSSH per-connection server daemon (10.0.0.1:47562).
Jul 6 23:40:38.940862 systemd-logind[1504]: Removed session 16.
Jul 6 23:40:39.019413 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 47562 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:39.020850 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:39.025367 systemd-logind[1504]: New session 17 of user core.
Jul 6 23:40:39.036070 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:40:39.331475 sshd[5568]: Connection closed by 10.0.0.1 port 47562
Jul 6 23:40:39.332248 sshd-session[5566]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:39.344742 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:47562.service: Deactivated successfully.
Jul 6 23:40:39.346933 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:40:39.347757 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:40:39.350958 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:47566.service - OpenSSH per-connection server daemon (10.0.0.1:47566).
Jul 6 23:40:39.352332 systemd-logind[1504]: Removed session 17.
Jul 6 23:40:39.417668 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 47566 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:39.419268 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:39.424115 systemd-logind[1504]: New session 18 of user core.
Jul 6 23:40:39.436178 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:40:40.202362 sshd[5582]: Connection closed by 10.0.0.1 port 47566
Jul 6 23:40:40.202741 sshd-session[5580]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:40.214667 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:47566.service: Deactivated successfully.
Jul 6 23:40:40.219205 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:40:40.221708 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:40:40.226979 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:47574.service - OpenSSH per-connection server daemon (10.0.0.1:47574).
Jul 6 23:40:40.229978 systemd-logind[1504]: Removed session 18.
Jul 6 23:40:40.278407 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 47574 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:40.279840 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:40.283898 systemd-logind[1504]: New session 19 of user core.
Jul 6 23:40:40.290058 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:40:40.680623 sshd[5606]: Connection closed by 10.0.0.1 port 47574
Jul 6 23:40:40.680934 sshd-session[5603]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:40.690495 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:47574.service: Deactivated successfully.
Jul 6 23:40:40.692392 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:40:40.695443 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:40:40.700491 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:47580.service - OpenSSH per-connection server daemon (10.0.0.1:47580).
Jul 6 23:40:40.703763 systemd-logind[1504]: Removed session 19.
Jul 6 23:40:40.764037 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:40.764809 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:40.769228 systemd-logind[1504]: New session 20 of user core.
Jul 6 23:40:40.779611 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:40:40.943126 sshd[5619]: Connection closed by 10.0.0.1 port 47580
Jul 6 23:40:40.943559 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:40.947663 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:47580.service: Deactivated successfully.
Jul 6 23:40:40.950715 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:40:40.952341 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:40:40.954293 systemd-logind[1504]: Removed session 20.
Jul 6 23:40:41.965812 containerd[1533]: time="2025-07-06T23:40:41.965766108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec3fe69861464a3a5c6ae95e6bbd795bef92926ad525d5918a36cadd7299df92\" id:\"2f7311646b0413ffe7ae53cb4be7783d21ec6d96481929b999ea5d64f84c35ab\" pid:5643 exited_at:{seconds:1751845241 nanos:965479089}"
Jul 6 23:40:42.858941 kubelet[2659]: I0706 23:40:42.858898 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:40:42.964853 containerd[1533]: time="2025-07-06T23:40:42.964772127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"371b1e4596ec79c38e49d59780ca3c2e3beccd27ac1ab4a34db9d1b0d1bb4d03\" id:\"984f2f828655517b7b2ef61aa3be387e9062ec9f6c1ed82a89d84b23613d99e7\" pid:5676 exited_at:{seconds:1751845242 nanos:961023182}"
Jul 6 23:40:45.960613 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:58952.service - OpenSSH per-connection server daemon (10.0.0.1:58952).
Jul 6 23:40:46.037202 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 58952 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:46.039048 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:46.043966 systemd-logind[1504]: New session 21 of user core.
Jul 6 23:40:46.054121 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:40:46.276803 sshd[5691]: Connection closed by 10.0.0.1 port 58952
Jul 6 23:40:46.277057 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:46.281444 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:58952.service: Deactivated successfully.
Jul 6 23:40:46.284526 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:40:46.285468 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:40:46.286694 systemd-logind[1504]: Removed session 21.
Jul 6 23:40:51.290859 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:58958.service - OpenSSH per-connection server daemon (10.0.0.1:58958).
Jul 6 23:40:51.334977 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 58958 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:51.336198 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:51.340248 systemd-logind[1504]: New session 22 of user core.
Jul 6 23:40:51.351056 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:40:51.551613 sshd[5709]: Connection closed by 10.0.0.1 port 58958
Jul 6 23:40:51.551870 sshd-session[5707]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:51.557036 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:58958.service: Deactivated successfully.
Jul 6 23:40:51.559039 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:40:51.559717 systemd-logind[1504]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:40:51.560962 systemd-logind[1504]: Removed session 22.
Jul 6 23:40:56.568787 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:49068.service - OpenSSH per-connection server daemon (10.0.0.1:49068).
Jul 6 23:40:56.650724 sshd[5722]: Accepted publickey for core from 10.0.0.1 port 49068 ssh2: RSA SHA256:jyTvj9WiqpnTWeC15mq15pBzt3VkG8C4RFcxi7WEalo
Jul 6 23:40:56.652352 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:40:56.657869 systemd-logind[1504]: New session 23 of user core.
Jul 6 23:40:56.665123 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:40:56.881610 sshd[5724]: Connection closed by 10.0.0.1 port 49068
Jul 6 23:40:56.881872 sshd-session[5722]: pam_unix(sshd:session): session closed for user core
Jul 6 23:40:56.885817 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:49068.service: Deactivated successfully.
Jul 6 23:40:56.888319 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:40:56.889997 systemd-logind[1504]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:40:56.891960 systemd-logind[1504]: Removed session 23.