Jul 6 23:37:35.826096 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:37:35.826116 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:37:35.826126 kernel: KASLR enabled
Jul 6 23:37:35.826132 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:37:35.826137 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Jul 6 23:37:35.826153 kernel: random: crng init done
Jul 6 23:37:35.826161 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 6 23:37:35.826167 kernel: secureboot: Secure boot enabled
Jul 6 23:37:35.826172 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:37:35.826180 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 6 23:37:35.826186 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:37:35.826192 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826198 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826204 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826212 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826219 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826225 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826231 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826237 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826243 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:37:35.826249 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:37:35.826255 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:37:35.826261 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:37:35.826267 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 6 23:37:35.826273 kernel: Zone ranges:
Jul 6 23:37:35.826280 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:37:35.826286 kernel: DMA32 empty
Jul 6 23:37:35.826292 kernel: Normal empty
Jul 6 23:37:35.826298 kernel: Device empty
Jul 6 23:37:35.826304 kernel: Movable zone start for each node
Jul 6 23:37:35.826310 kernel: Early memory node ranges
Jul 6 23:37:35.826322 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 6 23:37:35.826328 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 6 23:37:35.826334 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 6 23:37:35.826340 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 6 23:37:35.826346 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 6 23:37:35.826352 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 6 23:37:35.826359 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 6 23:37:35.826365 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 6 23:37:35.826372 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:37:35.826380 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:37:35.826387 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:37:35.826394 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 6 23:37:35.826401 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:37:35.826408 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:37:35.826415 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:37:35.826421 kernel: psci: Trusted OS migration not required
Jul 6 23:37:35.826428 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:37:35.826435 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:37:35.826441 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:37:35.826448 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:37:35.826454 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:37:35.826461 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:37:35.826469 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:37:35.826475 kernel: CPU features: detected: Spectre-v4
Jul 6 23:37:35.826482 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:37:35.826488 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:37:35.826495 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:37:35.826501 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:37:35.826508 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:37:35.826514 kernel: alternatives: applying boot alternatives
Jul 6 23:37:35.826522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:37:35.826529 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:37:35.826535 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:37:35.826543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:37:35.826550 kernel: Fallback order for Node 0: 0
Jul 6 23:37:35.826557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:37:35.826563 kernel: Policy zone: DMA
Jul 6 23:37:35.826570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:37:35.826576 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:37:35.826583 kernel: software IO TLB: area num 4.
Jul 6 23:37:35.826589 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:37:35.826596 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 6 23:37:35.826602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:37:35.826609 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:37:35.826616 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:37:35.826624 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:37:35.826631 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:37:35.826638 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:37:35.826644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:37:35.826651 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:37:35.826658 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:37:35.826664 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:37:35.826671 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:37:35.826677 kernel: GICv3: 256 SPIs implemented
Jul 6 23:37:35.826684 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:37:35.826690 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:37:35.826698 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:37:35.826704 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:37:35.826711 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:37:35.826717 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:37:35.826724 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:37:35.826731 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:37:35.826737 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:37:35.826744 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:37:35.826750 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:37:35.826757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:37:35.826763 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:37:35.826770 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:37:35.826778 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:37:35.826784 kernel: arm-pv: using stolen time PV
Jul 6 23:37:35.826791 kernel: Console: colour dummy device 80x25
Jul 6 23:37:35.826797 kernel: ACPI: Core revision 20240827
Jul 6 23:37:35.826804 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:37:35.826811 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:37:35.826818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:37:35.826824 kernel: landlock: Up and running.
Jul 6 23:37:35.826831 kernel: SELinux: Initializing.
Jul 6 23:37:35.826838 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:37:35.826845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:37:35.826852 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:37:35.826859 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:37:35.826865 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:37:35.826872 kernel: Remapping and enabling EFI services.
Jul 6 23:37:35.826879 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:37:35.826885 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:37:35.826892 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:37:35.826899 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:37:35.826910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:37:35.826917 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:37:35.826946 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:37:35.826954 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:37:35.826961 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:37:35.826968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:37:35.826975 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:37:35.826983 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:37:35.826992 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:37:35.826999 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:37:35.827006 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:37:35.827013 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:37:35.827021 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:37:35.827028 kernel: SMP: Total of 4 processors activated.
Jul 6 23:37:35.827035 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:37:35.827042 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:37:35.827049 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:37:35.827058 kernel: CPU features: detected: Common not Private translations
Jul 6 23:37:35.827065 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:37:35.827072 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:37:35.827079 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:37:35.827086 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:37:35.827093 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:37:35.827100 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:37:35.827107 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:37:35.827114 kernel: alternatives: applying system-wide alternatives
Jul 6 23:37:35.827123 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:37:35.827130 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved)
Jul 6 23:37:35.827137 kernel: devtmpfs: initialized
Jul 6 23:37:35.827150 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:37:35.827157 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:37:35.827164 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:37:35.827172 kernel: 0 pages in range for non-PLT usage
Jul 6 23:37:35.827179 kernel: 508432 pages in range for PLT usage
Jul 6 23:37:35.827186 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:37:35.827194 kernel: SMBIOS 3.0.0 present.
Jul 6 23:37:35.827201 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:37:35.827208 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:37:35.827215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:37:35.827222 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:37:35.827229 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:37:35.827236 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:37:35.827243 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:37:35.827250 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 6 23:37:35.827259 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:37:35.827265 kernel: cpuidle: using governor menu
Jul 6 23:37:35.827272 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:37:35.827279 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:37:35.827286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:37:35.827294 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:37:35.827301 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:37:35.827307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:37:35.827315 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:37:35.827323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:37:35.827330 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:37:35.827338 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:37:35.827345 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:37:35.827352 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:37:35.827359 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:37:35.827366 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:37:35.827372 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:37:35.827379 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:37:35.827387 kernel: ACPI: Interpreter enabled
Jul 6 23:37:35.827394 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:37:35.827401 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:37:35.827408 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:37:35.827416 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:37:35.827423 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:37:35.827429 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:37:35.827437 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:37:35.827444 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:37:35.827452 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:37:35.827584 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:37:35.827653 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:37:35.827713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:37:35.827771 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:37:35.828421 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:37:35.828440 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:37:35.828452 kernel: PCI host bridge to bus 0000:00
Jul 6 23:37:35.828529 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:37:35.828586 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:37:35.828639 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:37:35.828691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:37:35.828772 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:37:35.828845 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:37:35.828908 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:37:35.829079 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:37:35.829160 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:37:35.829228 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:37:35.829288 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:37:35.829347 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:37:35.829408 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:37:35.829461 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:37:35.829515 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:37:35.829524 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:37:35.829531 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:37:35.829538 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:37:35.829545 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:37:35.829552 kernel: iommu: Default domain type: Translated
Jul 6 23:37:35.829561 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:37:35.829568 kernel: efivars: Registered efivars operations
Jul 6 23:37:35.829575 kernel: vgaarb: loaded
Jul 6 23:37:35.829582 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:37:35.829589 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:37:35.829596 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:37:35.829603 kernel: pnp: PnP ACPI init
Jul 6 23:37:35.829671 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:37:35.829681 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:37:35.829690 kernel: NET: Registered PF_INET protocol family
Jul 6 23:37:35.829697 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:37:35.829704 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:37:35.829711 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:37:35.829719 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:37:35.829726 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:37:35.829733 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:37:35.829740 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:37:35.829747 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:37:35.829756 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:37:35.829763 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:37:35.829770 kernel: kvm [1]: HYP mode not available
Jul 6 23:37:35.829777 kernel: Initialise system trusted keyrings
Jul 6 23:37:35.829784 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:37:35.829791 kernel: Key type asymmetric registered
Jul 6 23:37:35.829798 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:37:35.829805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:37:35.829812 kernel: io scheduler mq-deadline registered
Jul 6 23:37:35.829820 kernel: io scheduler kyber registered
Jul 6 23:37:35.829828 kernel: io scheduler bfq registered
Jul 6 23:37:35.829835 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:37:35.829842 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:37:35.829850 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:37:35.829911 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:37:35.829921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:37:35.829939 kernel: thunder_xcv, ver 1.0
Jul 6 23:37:35.829947 kernel: thunder_bgx, ver 1.0
Jul 6 23:37:35.829956 kernel: nicpf, ver 1.0
Jul 6 23:37:35.829963 kernel: nicvf, ver 1.0
Jul 6 23:37:35.830038 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:37:35.830097 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:37:35 UTC (1751845055)
Jul 6 23:37:35.830106 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:37:35.830113 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:37:35.830120 kernel: watchdog: NMI not fully supported
Jul 6 23:37:35.830127 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:37:35.830136 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:37:35.830151 kernel: Segment Routing with IPv6
Jul 6 23:37:35.830158 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:37:35.830165 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:37:35.830172 kernel: Key type dns_resolver registered
Jul 6 23:37:35.830179 kernel: registered taskstats version 1
Jul 6 23:37:35.830186 kernel: Loading compiled-in X.509 certificates
Jul 6 23:37:35.830193 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:37:35.830200 kernel: Demotion targets for Node 0: null
Jul 6 23:37:35.830209 kernel: Key type .fscrypt registered
Jul 6 23:37:35.830216 kernel: Key type fscrypt-provisioning registered
Jul 6 23:37:35.830223 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:37:35.830230 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:37:35.830237 kernel: ima: No architecture policies found
Jul 6 23:37:35.830244 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:37:35.830251 kernel: clk: Disabling unused clocks
Jul 6 23:37:35.830258 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:37:35.830265 kernel: Warning: unable to open an initial console.
Jul 6 23:37:35.830273 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:37:35.830280 kernel: Run /init as init process
Jul 6 23:37:35.830287 kernel: with arguments:
Jul 6 23:37:35.830293 kernel: /init
Jul 6 23:37:35.830300 kernel: with environment:
Jul 6 23:37:35.830307 kernel: HOME=/
Jul 6 23:37:35.830314 kernel: TERM=linux
Jul 6 23:37:35.830320 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:37:35.830328 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:37:35.830339 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:37:35.830347 systemd[1]: Detected virtualization kvm.
Jul 6 23:37:35.830355 systemd[1]: Detected architecture arm64.
Jul 6 23:37:35.830362 systemd[1]: Running in initrd.
Jul 6 23:37:35.830369 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:37:35.830377 systemd[1]: Hostname set to .
Jul 6 23:37:35.830384 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:37:35.830394 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:37:35.830402 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:37:35.830409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:37:35.830417 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:37:35.830425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:37:35.830433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:37:35.830441 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:37:35.830451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:37:35.830459 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:37:35.830466 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:37:35.830474 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:37:35.830482 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:37:35.830489 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:37:35.830497 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:37:35.830504 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:37:35.830513 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:37:35.830521 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:37:35.830528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:37:35.830536 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:37:35.830544 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:37:35.830552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:37:35.830559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:37:35.830567 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:37:35.830574 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:37:35.830583 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:37:35.830591 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:37:35.830599 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:37:35.830606 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:37:35.830614 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:37:35.830621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:37:35.830629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:37:35.830636 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:37:35.830646 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:37:35.830654 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:37:35.830662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:37:35.830689 systemd-journald[240]: Collecting audit messages is disabled.
Jul 6 23:37:35.830709 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:37:35.830717 systemd-journald[240]: Journal started
Jul 6 23:37:35.830737 systemd-journald[240]: Runtime Journal (/run/log/journal/19cc866e948b445db4917b4255fe2063) is 6M, max 48.5M, 42.4M free.
Jul 6 23:37:35.817463 systemd-modules-load[245]: Inserted module 'overlay'
Jul 6 23:37:35.833403 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:37:35.833870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:37:35.838129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:37:35.841950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:37:35.842848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:37:35.845664 kernel: Bridge firewalling registered
Jul 6 23:37:35.843299 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 6 23:37:35.847498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:37:35.851514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:37:35.856438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:37:35.856689 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:37:35.857698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:37:35.861267 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:37:35.871110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:37:35.873228 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:37:35.879168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:37:35.881822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:37:35.889651 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:37:35.919105 systemd-resolved[293]: Positive Trust Anchors:
Jul 6 23:37:35.919123 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:37:35.919172 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:37:35.924158 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jul 6 23:37:35.925130 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:37:35.928855 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:37:35.966962 kernel: SCSI subsystem initialized
Jul 6 23:37:35.971951 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:37:35.978958 kernel: iscsi: registered transport (tcp)
Jul 6 23:37:35.992040 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:37:35.992099 kernel: QLogic iSCSI HBA Driver
Jul 6 23:37:36.010873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:37:36.026441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:37:36.028652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:37:36.074290 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:37:36.076586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:37:36.142969 kernel: raid6: neonx8 gen() 15638 MB/s
Jul 6 23:37:36.159971 kernel: raid6: neonx4 gen() 15786 MB/s
Jul 6 23:37:36.176949 kernel: raid6: neonx2 gen() 13231 MB/s
Jul 6 23:37:36.193969 kernel: raid6: neonx1 gen() 10475 MB/s
Jul 6 23:37:36.210971 kernel: raid6: int64x8 gen() 6877 MB/s
Jul 6 23:37:36.227967 kernel: raid6: int64x4 gen() 7335 MB/s
Jul 6 23:37:36.244971 kernel: raid6: int64x2 gen() 6092 MB/s
Jul 6 23:37:36.262248 kernel: raid6: int64x1 gen() 5044 MB/s
Jul 6 23:37:36.262306 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Jul 6 23:37:36.280132 kernel: raid6: .... xor() 12333 MB/s, rmw enabled
Jul 6 23:37:36.280204 kernel: raid6: using neon recovery algorithm
Jul 6 23:37:36.286065 kernel: xor: measuring software checksum speed
Jul 6 23:37:36.286149 kernel: 8regs : 21601 MB/sec
Jul 6 23:37:36.286162 kernel: 32regs : 21664 MB/sec
Jul 6 23:37:36.287336 kernel: arm64_neon : 28032 MB/sec
Jul 6 23:37:36.287375 kernel: xor: using function: arm64_neon (28032 MB/sec)
Jul 6 23:37:36.356108 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:37:36.363261 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:37:36.368078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:37:36.397874 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Jul 6 23:37:36.402161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:37:36.404305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:37:36.430264 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
Jul 6 23:37:36.453997 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:37:36.458052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:37:36.508729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:37:36.511991 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:37:36.564949 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 6 23:37:36.565301 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:37:36.569308 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:37:36.569353 kernel: GPT:9289727 != 19775487
Jul 6 23:37:36.570109 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:37:36.569817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:37:36.573995 kernel: GPT:9289727 != 19775487
Jul 6 23:37:36.574015 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:37:36.574024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:37:36.569886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:37:36.573982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:37:36.576679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:37:36.604474 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:37:36.605906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:37:36.614998 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:37:36.623243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:37:36.635257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:37:36.641542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:37:36.642760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:37:36.645056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:37:36.647958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:37:36.650091 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:37:36.652820 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:37:36.654851 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:37:36.671216 disk-uuid[591]: Primary Header is updated.
Jul 6 23:37:36.671216 disk-uuid[591]: Secondary Entries is updated.
Jul 6 23:37:36.671216 disk-uuid[591]: Secondary Header is updated.
Jul 6 23:37:36.674213 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:37:36.677956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:37:36.686947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:37:37.685952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:37:37.688235 disk-uuid[596]: The operation has completed successfully.
Jul 6 23:37:37.714480 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:37:37.715585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:37:37.742350 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:37:37.773037 sh[613]: Success
Jul 6 23:37:37.790384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:37:37.790450 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:37:37.792206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:37:37.804969 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:37:37.838015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:37:37.841494 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:37:37.858326 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:37:37.867363 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:37:37.867409 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625)
Jul 6 23:37:37.868920 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 6 23:37:37.868969 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:37:37.870940 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:37:37.875331 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:37:37.876689 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:37:37.878071 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:37:37.882349 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:37:37.884189 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:37:37.918960 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653)
Jul 6 23:37:37.921519 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:37:37.921584 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:37:37.922350 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:37:37.932001 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:37:37.932078 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:37:37.935336 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:37:38.016153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:37:38.021429 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:37:38.064713 systemd-networkd[798]: lo: Link UP
Jul 6 23:37:38.064727 systemd-networkd[798]: lo: Gained carrier
Jul 6 23:37:38.065610 systemd-networkd[798]: Enumeration completed
Jul 6 23:37:38.066126 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:37:38.066129 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:37:38.067011 systemd-networkd[798]: eth0: Link UP
Jul 6 23:37:38.067014 systemd-networkd[798]: eth0: Gained carrier
Jul 6 23:37:38.067023 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:37:38.067443 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:37:38.068613 systemd[1]: Reached target network.target - Network.
Jul 6 23:37:38.100005 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:37:38.143040 ignition[703]: Ignition 2.21.0
Jul 6 23:37:38.143054 ignition[703]: Stage: fetch-offline
Jul 6 23:37:38.143107 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:38.143115 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:38.143371 ignition[703]: parsed url from cmdline: ""
Jul 6 23:37:38.143374 ignition[703]: no config URL provided
Jul 6 23:37:38.143379 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:37:38.143386 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:37:38.143408 ignition[703]: op(1): [started] loading QEMU firmware config module
Jul 6 23:37:38.143412 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:37:38.151855 ignition[703]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:37:38.193157 ignition[703]: parsing config with SHA512: c5444ac887e3828f5a2200ac0ee2cc10d1fd97bc146d8feb1bc795de9de661e8a5417b11edd12569f9380f0b43c2a023f6cbe003a229e6962d464f2122226449
Jul 6 23:37:38.197430 unknown[703]: fetched base config from "system"
Jul 6 23:37:38.198191 unknown[703]: fetched user config from "qemu"
Jul 6 23:37:38.198319 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.97
Jul 6 23:37:38.198613 ignition[703]: fetch-offline: fetch-offline passed
Jul 6 23:37:38.198329 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 6 23:37:38.198678 ignition[703]: Ignition finished successfully
Jul 6 23:37:38.201814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:37:38.207478 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:37:38.208419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:37:38.249251 ignition[813]: Ignition 2.21.0
Jul 6 23:37:38.249265 ignition[813]: Stage: kargs
Jul 6 23:37:38.249429 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:38.249439 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:38.250284 ignition[813]: kargs: kargs passed
Jul 6 23:37:38.250341 ignition[813]: Ignition finished successfully
Jul 6 23:37:38.255214 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:37:38.258370 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:37:38.283183 ignition[821]: Ignition 2.21.0
Jul 6 23:37:38.283197 ignition[821]: Stage: disks
Jul 6 23:37:38.283347 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:38.283355 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:38.287374 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:37:38.285850 ignition[821]: disks: disks passed
Jul 6 23:37:38.289537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:37:38.285911 ignition[821]: Ignition finished successfully
Jul 6 23:37:38.290924 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:37:38.292583 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:37:38.294556 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:37:38.296221 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:37:38.298981 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:37:38.340574 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 6 23:37:38.345442 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:37:38.347922 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:37:38.435963 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 6 23:37:38.436751 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:37:38.438106 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:37:38.441566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:37:38.443974 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:37:38.445017 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:37:38.445083 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:37:38.445113 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:37:38.458727 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:37:38.461233 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:37:38.466968 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839)
Jul 6 23:37:38.469969 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:37:38.470017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:37:38.470028 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:37:38.476584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:37:38.535519 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:37:38.539839 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:37:38.543598 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:37:38.547838 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:37:38.643565 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:37:38.648277 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:37:38.650119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:37:38.670955 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:37:38.687104 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:37:38.700854 ignition[953]: INFO : Ignition 2.21.0
Jul 6 23:37:38.700854 ignition[953]: INFO : Stage: mount
Jul 6 23:37:38.702534 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:38.702534 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:38.702534 ignition[953]: INFO : mount: mount passed
Jul 6 23:37:38.702534 ignition[953]: INFO : Ignition finished successfully
Jul 6 23:37:38.704624 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:37:38.707860 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:37:38.865888 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:37:38.868582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:37:38.892967 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965)
Jul 6 23:37:38.895668 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 6 23:37:38.895724 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:37:38.896486 kernel: BTRFS info (device vda6): using free-space-tree
Jul 6 23:37:38.899335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:37:38.926996 ignition[982]: INFO : Ignition 2.21.0
Jul 6 23:37:38.926996 ignition[982]: INFO : Stage: files
Jul 6 23:37:38.928631 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:38.928631 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:38.931005 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:37:38.931005 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:37:38.931005 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:37:38.934985 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:37:38.934985 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:37:38.934985 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:37:38.934985 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:37:38.934985 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 6 23:37:38.932372 unknown[982]: wrote ssh authorized keys file for user: core
Jul 6 23:37:39.817259 systemd-networkd[798]: eth0: Gained IPv6LL
Jul 6 23:37:39.927008 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:37:41.961001 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:37:41.961001 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:37:41.964784 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:37:41.975970 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 6 23:37:42.440944 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:37:42.994071 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:37:42.994071 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:37:42.997615 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:37:43.104721 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:37:43.104721 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:37:43.104721 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:37:43.109441 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:37:43.109441 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:37:43.109441 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:37:43.109441 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:37:43.140584 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:37:43.143953 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:37:43.146784 ignition[982]: INFO : files: files passed
Jul 6 23:37:43.146784 ignition[982]: INFO : Ignition finished successfully
Jul 6 23:37:43.147568 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:37:43.153073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:37:43.155635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:37:43.179291 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:37:43.179398 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:37:43.184560 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:37:43.187662 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:37:43.187662 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:37:43.191030 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:37:43.194041 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:37:43.195506 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:37:43.198346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:37:43.253823 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:37:43.253950 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:37:43.256197 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:37:43.257906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:37:43.259636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:37:43.260531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:37:43.304721 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:37:43.316485 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:37:43.354698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:37:43.355922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:37:43.357876 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:37:43.359573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:37:43.359744 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:37:43.361914 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:37:43.363862 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:37:43.365416 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:37:43.367178 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:37:43.369137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:37:43.370972 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:37:43.373032 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:37:43.374908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:37:43.376910 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:37:43.378890 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:37:43.380581 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:37:43.382072 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:37:43.382233 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:37:43.384651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:37:43.386485 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:37:43.388478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:37:43.392001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:37:43.393203 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:37:43.393343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:37:43.396425 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:37:43.396555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:37:43.398490 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:37:43.400036 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:37:43.401171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:37:43.402446 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:37:43.406862 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:37:43.408607 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:37:43.408711 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:37:43.411264 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:37:43.411342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:37:43.412973 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:37:43.413097 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:37:43.415001 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:37:43.415111 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:37:43.417371 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:37:43.419498 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:37:43.420374 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:37:43.420507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:37:43.422460 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:37:43.422565 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:37:43.428187 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:37:43.429089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:37:43.441632 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:37:43.450968 ignition[1038]: INFO : Ignition 2.21.0
Jul 6 23:37:43.450968 ignition[1038]: INFO : Stage: umount
Jul 6 23:37:43.453102 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:37:43.453102 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:37:43.458069 ignition[1038]: INFO : umount: umount passed
Jul 6 23:37:43.458069 ignition[1038]: INFO : Ignition finished successfully
Jul 6 23:37:43.459829 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:37:43.459956 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:37:43.461984 systemd[1]: Stopped target network.target - Network.
Jul 6 23:37:43.463241 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:37:43.463304 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:37:43.464756 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:37:43.464799 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:37:43.466413 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:37:43.466458 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:37:43.468000 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:37:43.468041 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:37:43.469745 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:37:43.471398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:37:43.479088 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:37:43.479220 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:37:43.483017 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:37:43.483259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:37:43.483302 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:37:43.487185 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:37:43.489827 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:37:43.490866 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:37:43.492846 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:37:43.493050 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 6 23:37:43.494189 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:37:43.494228 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:37:43.498217 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:37:43.499055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:37:43.499112 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:37:43.501092 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:37:43.501158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:37:43.504050 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:37:43.504098 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:37:43.506898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:37:43.509509 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:37:43.521656 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:37:43.521801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:37:43.524370 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:37:43.524477 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:37:43.526668 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:37:43.526737 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:37:43.527872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:37:43.527907 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:37:43.529994 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:37:43.530051 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:37:43.532879 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:37:43.532951 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:37:43.537021 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:37:43.537085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:37:43.540663 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:37:43.542427 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:37:43.542496 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:37:43.545981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:37:43.546029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:37:43.549004 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:37:43.549048 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:37:43.552011 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:37:43.552068 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:37:43.554237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:37:43.554286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:37:43.558192 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:37:43.559067 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:37:43.560687 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:37:43.560774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:37:43.563943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:37:43.564046 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:37:43.566816 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:37:43.568970 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:37:43.603355 systemd[1]: Switching root.
Jul 6 23:37:43.641601 systemd-journald[240]: Journal stopped
Jul 6 23:37:44.489507 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:37:44.489556 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:37:44.489568 kernel: SELinux: policy capability open_perms=1
Jul 6 23:37:44.489581 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:37:44.489590 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:37:44.489599 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:37:44.489612 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:37:44.489627 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:37:44.489637 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:37:44.489649 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:37:44.489658 kernel: audit: type=1403 audit(1751845063.819:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:37:44.489673 systemd[1]: Successfully loaded SELinux policy in 48.661ms.
Jul 6 23:37:44.489691 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.945ms.
Jul 6 23:37:44.489702 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:37:44.489713 systemd[1]: Detected virtualization kvm.
Jul 6 23:37:44.489722 systemd[1]: Detected architecture arm64.
Jul 6 23:37:44.489732 systemd[1]: Detected first boot.
Jul 6 23:37:44.489742 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:37:44.489753 zram_generator::config[1084]: No configuration found.
Jul 6 23:37:44.489763 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:37:44.489773 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:37:44.489783 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:37:44.489797 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:37:44.489807 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:37:44.489817 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:37:44.489828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:37:44.489840 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:37:44.489849 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:37:44.489860 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:37:44.489870 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:37:44.489880 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:37:44.489890 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:37:44.489899 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:37:44.489909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:37:44.489919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:37:44.489943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:37:44.489954 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:37:44.489965 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:37:44.489975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:37:44.489985 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:37:44.489995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:37:44.490005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:37:44.490014 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:37:44.490026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:37:44.490036 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:37:44.490046 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:37:44.490055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:37:44.490065 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:37:44.490075 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:37:44.490085 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:37:44.490094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:37:44.490104 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:37:44.490121 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:37:44.490132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:37:44.490143 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:37:44.490153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:37:44.490162 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:37:44.490172 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:37:44.490182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:37:44.490192 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:37:44.490202 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:37:44.490214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:37:44.490223 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:37:44.490233 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:37:44.490243 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:37:44.490253 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:37:44.490263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:37:44.490273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:37:44.490283 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:37:44.490294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:37:44.490304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:37:44.490314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:37:44.490324 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:37:44.490334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:37:44.490344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:37:44.490353 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:37:44.490363 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:37:44.490372 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:37:44.490384 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:37:44.490394 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:37:44.490404 kernel: fuse: init (API version 7.41)
Jul 6 23:37:44.490413 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:37:44.490423 kernel: loop: module loaded
Jul 6 23:37:44.490432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:37:44.490441 kernel: ACPI: bus type drm_connector registered
Jul 6 23:37:44.490451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:37:44.490461 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:37:44.490472 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:37:44.490482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:37:44.490492 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:37:44.490502 systemd[1]: Stopped verity-setup.service.
Jul 6 23:37:44.490534 systemd-journald[1152]: Collecting audit messages is disabled.
Jul 6 23:37:44.490555 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:37:44.490566 systemd-journald[1152]: Journal started
Jul 6 23:37:44.490586 systemd-journald[1152]: Runtime Journal (/run/log/journal/19cc866e948b445db4917b4255fe2063) is 6M, max 48.5M, 42.4M free.
Jul 6 23:37:44.490625 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:37:44.263568 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:37:44.288091 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:37:44.288487 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:37:44.494724 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:37:44.496191 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:37:44.497333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:37:44.498531 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:37:44.499822 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:37:44.501250 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:37:44.503980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:37:44.505507 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:37:44.505690 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:37:44.507342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:37:44.508955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:37:44.510352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:37:44.510540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:37:44.511887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:37:44.513975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:37:44.515570 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:37:44.515758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:37:44.517408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:37:44.517604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:37:44.519266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:37:44.520815 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:37:44.522404 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:37:44.525959 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:37:44.538349 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:37:44.540815 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:37:44.543165 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:37:44.544295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:37:44.544338 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:37:44.546435 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:37:44.555886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:37:44.557106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:37:44.558596 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:37:44.560689 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:37:44.561947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:37:44.562869 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:37:44.564095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:37:44.565153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:37:44.569067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:37:44.571985 systemd-journald[1152]: Time spent on flushing to /var/log/journal/19cc866e948b445db4917b4255fe2063 is 19.636ms for 883 entries.
Jul 6 23:37:44.571985 systemd-journald[1152]: System Journal (/var/log/journal/19cc866e948b445db4917b4255fe2063) is 8M, max 195.6M, 187.6M free.
Jul 6 23:37:44.619383 systemd-journald[1152]: Received client request to flush runtime journal.
Jul 6 23:37:44.619444 kernel: loop0: detected capacity change from 0 to 211168
Jul 6 23:37:44.572196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:37:44.579120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:37:44.580722 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:37:44.582710 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:37:44.588945 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:37:44.590667 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:37:44.593361 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:37:44.615683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:37:44.622079 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:37:44.624389 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 6 23:37:44.624399 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 6 23:37:44.632000 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:37:44.632205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:37:44.633963 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:37:44.637395 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:37:44.653971 kernel: loop1: detected capacity change from 0 to 138376
Jul 6 23:37:44.680106 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:37:44.683093 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:37:44.687955 kernel: loop2: detected capacity change from 0 to 107312
Jul 6 23:37:44.711404 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jul 6 23:37:44.711419 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jul 6 23:37:44.711939 kernel: loop3: detected capacity change from 0 to 211168
Jul 6 23:37:44.717132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:37:44.721984 kernel: loop4: detected capacity change from 0 to 138376
Jul 6 23:37:44.730961 kernel: loop5: detected capacity change from 0 to 107312
Jul 6 23:37:44.739770 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:37:44.740178 (sd-merge)[1227]: Merged extensions into '/usr'.
Jul 6 23:37:44.743942 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:37:44.743957 systemd[1]: Reloading...
Jul 6 23:37:44.797987 zram_generator::config[1256]: No configuration found.
Jul 6 23:37:44.886853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:37:44.894849 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:37:44.965511 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:37:44.965715 systemd[1]: Reloading finished in 221 ms.
Jul 6 23:37:44.992724 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:37:44.994263 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:37:45.005306 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:37:45.007116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:37:45.024346 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:37:45.024361 systemd[1]: Reloading...
Jul 6 23:37:45.029961 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:37:45.030315 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:37:45.030605 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:37:45.030857 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:37:45.031650 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:37:45.031983 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 6 23:37:45.032101 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 6 23:37:45.034634 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:37:45.034727 systemd-tmpfiles[1289]: Skipping /boot
Jul 6 23:37:45.044163 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:37:45.044269 systemd-tmpfiles[1289]: Skipping /boot
Jul 6 23:37:45.078118 zram_generator::config[1316]: No configuration found.
Jul 6 23:37:45.154603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:37:45.232071 systemd[1]: Reloading finished in 207 ms.
Jul 6 23:37:45.253731 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:37:45.260985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:37:45.272284 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:37:45.274891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:37:45.277269 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:37:45.280912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:37:45.286092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:37:45.288623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:37:45.301747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:37:45.307874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:37:45.309475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:37:45.312125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:37:45.314878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:37:45.316104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:37:45.316291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:37:45.319922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:37:45.322152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:37:45.322308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:37:45.324489 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:37:45.324655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:37:45.327486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:37:45.328080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:37:45.331615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:37:45.337755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:37:45.339464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:37:45.341954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:37:45.348653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:37:45.349874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:37:45.350010 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:37:45.351518 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:37:45.354123 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:37:45.354980 systemd-udevd[1362]: Using default interface naming scheme 'v255'.
Jul 6 23:37:45.356338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:37:45.356502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:37:45.358450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:37:45.358608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:37:45.363513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:37:45.364963 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:37:45.366818 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:37:45.375369 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:37:45.375782 augenrules[1395]: No rules
Jul 6 23:37:45.376645 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:37:45.378148 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:37:45.379039 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:37:45.381195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:37:45.393295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:37:45.395501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:37:45.398602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:37:45.400021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:37:45.400069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:37:45.403392 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:37:45.404844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:37:45.406519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:37:45.417750 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:37:45.419661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:37:45.419981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:37:45.422379 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:37:45.422766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:37:45.426052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:37:45.426245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:37:45.439296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:37:45.439363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:37:45.451796 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:37:45.519841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:37:45.523699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:37:45.560396 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:37:45.593340 systemd-networkd[1436]: lo: Link UP
Jul 6 23:37:45.593612 systemd-networkd[1436]: lo: Gained carrier
Jul 6 23:37:45.594536 systemd-networkd[1436]: Enumeration completed
Jul 6 23:37:45.595046 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:37:45.595317 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:37:45.595401 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:37:45.596285 systemd-resolved[1355]: Positive Trust Anchors:
Jul 6 23:37:45.596303 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:37:45.596337 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:37:45.597850 systemd-networkd[1436]: eth0: Link UP
Jul 6 23:37:45.598053 systemd-networkd[1436]: eth0: Gained carrier
Jul 6 23:37:45.598154 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:37:45.600274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:37:45.604675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:37:45.605877 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:37:45.608310 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:37:45.613798 systemd-resolved[1355]: Defaulting to hostname 'linux'.
Jul 6 23:37:45.614059 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:37:45.614942 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 6 23:37:45.617282 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:37:45.617394 systemd-timesyncd[1425]: Initial clock synchronization to Sun 2025-07-06 23:37:45.266676 UTC.
Jul 6 23:37:45.619546 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:37:45.621435 systemd[1]: Reached target network.target - Network.
Jul 6 23:37:45.622428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:37:45.623600 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:37:45.626176 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:37:45.627395 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:37:45.628775 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:37:45.629991 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:37:45.631292 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:37:45.633039 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:37:45.633080 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:37:45.633973 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:37:45.636697 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:37:45.640340 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:37:45.645101 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:37:45.646608 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:37:45.649009 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:37:45.658750 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:37:45.660283 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:37:45.663987 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:37:45.665357 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:37:45.674142 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:37:45.675150 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:37:45.676155 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:37:45.676188 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:37:45.677451 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:37:45.679715 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:37:45.693327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:37:45.695711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:37:45.697769 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:37:45.698876 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:37:45.699882 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:37:45.702028 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:37:45.705074 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:37:45.707128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:37:45.710619 jq[1477]: false
Jul 6 23:37:45.711272 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:37:45.714249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:37:45.717862 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:37:45.718460 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:37:45.720398 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:37:45.725659 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:37:45.725779 extend-filesystems[1478]: Found /dev/vda6
Jul 6 23:37:45.730009 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:37:45.732698 extend-filesystems[1478]: Found /dev/vda9
Jul 6 23:37:45.733060 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:37:45.733272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:37:45.733525 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:37:45.733679 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:37:45.736803 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:37:45.737405 jq[1494]: true
Jul 6 23:37:45.739270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:37:45.740390 extend-filesystems[1478]: Checking size of /dev/vda9
Jul 6 23:37:45.758004 extend-filesystems[1478]: Resized partition /dev/vda9
Jul 6 23:37:45.773653 jq[1503]: true
Jul 6 23:37:45.781238 update_engine[1493]: I20250706 23:37:45.781052 1493 main.cc:92] Flatcar Update Engine starting
Jul 6 23:37:45.785507 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:37:45.787712 extend-filesystems[1518]: resize2fs 1.47.2 (1-Jan-2025)
Jul 6 23:37:45.795954 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:37:45.800434 tar[1500]: linux-arm64/LICENSE
Jul 6 23:37:45.800434 tar[1500]: linux-arm64/helm
Jul 6 23:37:45.807981 dbus-daemon[1475]: [system] SELinux support is enabled
Jul 6 23:37:45.809957 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:37:45.814417 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:37:45.814446 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:37:45.817710 update_engine[1493]: I20250706 23:37:45.816120 1493 update_check_scheduler.cc:74] Next update check in 3m46s
Jul 6 23:37:45.816351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:37:45.816367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:37:45.818707 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:37:45.821953 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:37:45.822903 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:37:45.837922 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:37:45.837922 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:37:45.837922 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:37:45.841462 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:37:45.845626 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Jul 6 23:37:45.841739 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:37:45.847660 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:37:45.847900 systemd-logind[1486]: New seat seat0.
Jul 6 23:37:45.867671 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:37:45.871760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:37:45.884523 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:37:45.887972 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:37:45.890403 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:37:45.915838 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:37:46.021769 containerd[1513]: time="2025-07-06T23:37:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:37:46.025997 containerd[1513]: time="2025-07-06T23:37:46.025949578Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:37:46.035214 containerd[1513]: time="2025-07-06T23:37:46.035164468Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.944µs"
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035308775Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035332373Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035503033Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035518829Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035557803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035610890Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035622441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035840833Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035855291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035877971Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035898013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036302 containerd[1513]: time="2025-07-06T23:37:46.035980895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036557 containerd[1513]: time="2025-07-06T23:37:46.036148342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036557 containerd[1513]: time="2025-07-06T23:37:46.036172744Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:37:46.036557 containerd[1513]: time="2025-07-06T23:37:46.036182305Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:37:46.036557 containerd[1513]: time="2025-07-06T23:37:46.036220782Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:37:46.036847 containerd[1513]: time="2025-07-06T23:37:46.036818626Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:37:46.037005 containerd[1513]: time="2025-07-06T23:37:46.036977429Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:37:46.040500 containerd[1513]: time="2025-07-06T23:37:46.040466388Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:37:46.040623 containerd[1513]: time="2025-07-06T23:37:46.040608668Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:37:46.040693 containerd[1513]: time="2025-07-06T23:37:46.040680497Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:37:46.040774 containerd[1513]: time="2025-07-06T23:37:46.040760701Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:37:46.040823 containerd[1513]: time="2025-07-06T23:37:46.040812335Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:37:46.040870 containerd[1513]: time="2025-07-06T23:37:46.040859303Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:37:46.040935 containerd[1513]: time="2025-07-06T23:37:46.040915373Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:37:46.040986 containerd[1513]: time="2025-07-06T23:37:46.040973892Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:37:46.041032 containerd[1513]: time="2025-07-06T23:37:46.041020668Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:37:46.041098 containerd[1513]: time="2025-07-06T23:37:46.041085230Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:37:46.041151 containerd[1513]: time="2025-07-06T23:37:46.041139044Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:37:46.041200 containerd[1513]: time="2025-07-06T23:37:46.041189301Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:37:46.041375 containerd[1513]: time="2025-07-06T23:37:46.041354567Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:37:46.041440 containerd[1513]: time="2025-07-06T23:37:46.041427543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:37:46.041494 containerd[1513]: time="2025-07-06T23:37:46.041483423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:37:46.041553 containerd[1513]: time="2025-07-06T23:37:46.041541023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:37:46.041607 containerd[1513]: time="2025-07-06T23:37:46.041595870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:37:46.041652 containerd[1513]: time="2025-07-06T23:37:46.041642378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:37:46.041701 containerd[1513]: time="2025-07-06T23:37:46.041690532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:37:46.041759 containerd[1513]: time="2025-07-06T23:37:46.041747367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:37:46.041815 containerd[1513]: time="2025-07-06T23:37:46.041803285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:37:46.041864 containerd[1513]: time="2025-07-06T23:37:46.041852165Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:37:46.041948 containerd[1513]: time="2025-07-06T23:37:46.041913743Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:37:46.042180 containerd[1513]: time="2025-07-06T23:37:46.042164531Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:37:46.042245 containerd[1513]: time="2025-07-06T23:37:46.042233414Z" level=info msg="Start snapshots syncer"
Jul 6 23:37:46.042314 containerd[1513]: time="2025-07-06T23:37:46.042301992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:37:46.042604 containerd[1513]: time="2025-07-06T23:37:46.042564674Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:37:46.042744 containerd[1513]: time="2025-07-06T23:37:46.042728602Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:37:46.042874 containerd[1513]: time="2025-07-06T23:37:46.042857764Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:37:46.043065 containerd[1513]: time="2025-07-06T23:37:46.043043569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:37:46.043136 containerd[1513]: time="2025-07-06T23:37:46.043123736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:37:46.043211 containerd[1513]: time="2025-07-06T23:37:46.043197553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:37:46.043265 containerd[1513]: time="2025-07-06T23:37:46.043253164Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:37:46.043313 containerd[1513]: time="2025-07-06T23:37:46.043302771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:37:46.043370 containerd[1513]: time="2025-07-06T23:37:46.043358574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:37:46.043419 containerd[1513]: time="2025-07-06T23:37:46.043407263Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:37:46.043494 containerd[1513]: time="2025-07-06T23:37:46.043480545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:37:46.043546 containerd[1513]: time="2025-07-06T23:37:46.043533517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:37:46.043613 containerd[1513]: time="2025-07-06T23:37:46.043601598Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:37:46.043919 containerd[1513]: time="2025-07-06T23:37:46.043895184Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:37:46.044005 containerd[1513]: time="2025-07-06T23:37:46.043989425Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:37:46.044050 containerd[1513]: time="2025-07-06T23:37:46.044037847Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:37:46.044112 containerd[1513]: time="2025-07-06T23:37:46.044099539Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:37:46.044166 containerd[1513]: time="2025-07-06T23:37:46.044153430Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:37:46.044223 containerd[1513]: time="2025-07-06T23:37:46.044210954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:37:46.044271 containerd[1513]: time="2025-07-06T23:37:46.044259834Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:37:46.044382 containerd[1513]: time="2025-07-06T23:37:46.044372166Z" level=info msg="runtime interface created"
Jul 6 23:37:46.044420 containerd[1513]: time="2025-07-06T23:37:46.044410911Z" level=info msg="created NRI interface"
Jul 6 23:37:46.044468 containerd[1513]: time="2025-07-06T23:37:46.044457305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:37:46.044517 containerd[1513]: time="2025-07-06T23:37:46.044506300Z" level=info msg="Connect containerd service"
Jul 6 23:37:46.044645 containerd[1513]: time="2025-07-06T23:37:46.044617179Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:37:46.047127 containerd[1513]: time="2025-07-06T23:37:46.047069712Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.150879143Z" level=info msg="Start subscribing containerd event"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.150950933Z" level=info msg="Start recovering state"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151036454Z" level=info msg="Start event monitor"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151057413Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151067396Z" level=info msg="Start streaming server"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151076958Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151084378Z" level=info msg="runtime interface starting up..."
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151089847Z" level=info msg="starting plugins..."
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151102890Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151380489Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151432849Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:37:46.151507 containerd[1513]: time="2025-07-06T23:37:46.151483642Z" level=info msg="containerd successfully booted in 0.130062s"
Jul 6 23:37:46.151595 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:37:46.206168 tar[1500]: linux-arm64/README.md
Jul 6 23:37:46.231998 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:37:46.686274 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:37:46.704964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:37:46.709052 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:37:46.727353 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:37:46.727574 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:37:46.731168 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:37:46.749639 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:37:46.753312 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:37:46.755896 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:37:46.757330 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:37:46.857071 systemd-networkd[1436]: eth0: Gained IPv6LL
Jul 6 23:37:46.859560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:37:46.861247 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:37:46.863794 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:37:46.866208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:37:46.877676 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:37:46.898121 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:37:46.900104 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:37:46.900639 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:37:46.903004 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:37:47.432994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:37:47.434558 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:37:47.437883 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:37:47.440315 systemd[1]: Startup finished in 2.185s (kernel) + 8.195s (initrd) + 3.673s (userspace) = 14.054s.
Jul 6 23:37:47.936082 kubelet[1614]: E0706 23:37:47.936036 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:37:47.938621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:37:47.938760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:37:47.939100 systemd[1]: kubelet.service: Consumed 862ms CPU time, 260.1M memory peak.
Jul 6 23:37:48.377834 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:37:48.379212 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488).
Jul 6 23:37:48.471119 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:48.473493 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:48.481230 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:37:48.482181 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:37:48.487956 systemd-logind[1486]: New session 1 of user core.
Jul 6 23:37:48.504954 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:37:48.507409 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:37:48.533906 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:37:48.536320 systemd-logind[1486]: New session c1 of user core.
Jul 6 23:37:48.651067 systemd[1631]: Queued start job for default target default.target.
Jul 6 23:37:48.667962 systemd[1631]: Created slice app.slice - User Application Slice.
Jul 6 23:37:48.667988 systemd[1631]: Reached target paths.target - Paths.
Jul 6 23:37:48.668026 systemd[1631]: Reached target timers.target - Timers.
Jul 6 23:37:48.669259 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:37:48.678570 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:37:48.678639 systemd[1631]: Reached target sockets.target - Sockets.
Jul 6 23:37:48.678677 systemd[1631]: Reached target basic.target - Basic System.
Jul 6 23:37:48.678708 systemd[1631]: Reached target default.target - Main User Target.
Jul 6 23:37:48.678735 systemd[1631]: Startup finished in 136ms.
Jul 6 23:37:48.679071 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:37:48.680632 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:37:48.737352 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:55498.service - OpenSSH per-connection server daemon (10.0.0.1:55498).
Jul 6 23:37:48.812536 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55498 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:48.813823 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:48.817986 systemd-logind[1486]: New session 2 of user core.
Jul 6 23:37:48.830091 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:37:48.879269 sshd[1644]: Connection closed by 10.0.0.1 port 55498
Jul 6 23:37:48.879558 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:48.890120 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:55498.service: Deactivated successfully.
Jul 6 23:37:48.892518 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:37:48.893911 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:37:48.896205 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:55500.service - OpenSSH per-connection server daemon (10.0.0.1:55500).
Jul 6 23:37:48.897769 systemd-logind[1486]: Removed session 2.
Jul 6 23:37:48.949210 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 55500 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:48.950428 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:48.954991 systemd-logind[1486]: New session 3 of user core.
Jul 6 23:37:48.964109 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:37:49.010975 sshd[1652]: Connection closed by 10.0.0.1 port 55500
Jul 6 23:37:49.011307 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:49.025407 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:55500.service: Deactivated successfully.
Jul 6 23:37:49.027094 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:37:49.027745 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:37:49.030400 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514).
Jul 6 23:37:49.031111 systemd-logind[1486]: Removed session 3.
Jul 6 23:37:49.085398 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:49.086634 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:49.090349 systemd-logind[1486]: New session 4 of user core.
Jul 6 23:37:49.099096 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:37:49.148706 sshd[1660]: Connection closed by 10.0.0.1 port 55514
Jul 6 23:37:49.148579 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:49.162322 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:55514.service: Deactivated successfully.
Jul 6 23:37:49.165302 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:37:49.165961 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:37:49.168436 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:55530.service - OpenSSH per-connection server daemon (10.0.0.1:55530).
Jul 6 23:37:49.168948 systemd-logind[1486]: Removed session 4.
Jul 6 23:37:49.221028 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 55530 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:49.222288 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:49.226724 systemd-logind[1486]: New session 5 of user core.
Jul 6 23:37:49.240105 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:37:49.306372 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:37:49.306643 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:37:49.319644 sudo[1669]: pam_unix(sudo:session): session closed for user root
Jul 6 23:37:49.322918 sshd[1668]: Connection closed by 10.0.0.1 port 55530
Jul 6 23:37:49.323335 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:49.347601 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:55530.service: Deactivated successfully.
Jul 6 23:37:49.349987 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:37:49.350913 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:37:49.353875 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:55532.service - OpenSSH per-connection server daemon (10.0.0.1:55532).
Jul 6 23:37:49.354666 systemd-logind[1486]: Removed session 5.
Jul 6 23:37:49.411775 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 55532 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:49.413167 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:49.417734 systemd-logind[1486]: New session 6 of user core.
Jul 6 23:37:49.428111 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:37:49.482819 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:37:49.484571 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:37:49.496774 sudo[1679]: pam_unix(sudo:session): session closed for user root
Jul 6 23:37:49.501992 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:37:49.502278 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:37:49.511887 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:37:49.554338 augenrules[1701]: No rules
Jul 6 23:37:49.555573 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:37:49.555781 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:37:49.556728 sudo[1678]: pam_unix(sudo:session): session closed for user root
Jul 6 23:37:49.558016 sshd[1677]: Connection closed by 10.0.0.1 port 55532
Jul 6 23:37:49.558382 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Jul 6 23:37:49.570114 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:55532.service: Deactivated successfully.
Jul 6 23:37:49.571814 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:37:49.572648 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:37:49.574982 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:55540.service - OpenSSH per-connection server daemon (10.0.0.1:55540).
Jul 6 23:37:49.575815 systemd-logind[1486]: Removed session 6.
Jul 6 23:37:49.631160 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 55540 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:37:49.632443 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:37:49.638012 systemd-logind[1486]: New session 7 of user core.
Jul 6 23:37:49.655294 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:37:49.706452 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:37:49.706737 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:37:50.104021 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:37:50.123334 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:37:50.503178 dockerd[1734]: time="2025-07-06T23:37:50.503053529Z" level=info msg="Starting up"
Jul 6 23:37:50.503834 dockerd[1734]: time="2025-07-06T23:37:50.503803440Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:37:50.954072 dockerd[1734]: time="2025-07-06T23:37:50.953957658Z" level=info msg="Loading containers: start."
Jul 6 23:37:50.961950 kernel: Initializing XFRM netlink socket
Jul 6 23:37:51.175782 systemd-networkd[1436]: docker0: Link UP
Jul 6 23:37:51.180620 dockerd[1734]: time="2025-07-06T23:37:51.180575367Z" level=info msg="Loading containers: done."
Jul 6 23:37:51.198705 dockerd[1734]: time="2025-07-06T23:37:51.198652636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:37:51.198846 dockerd[1734]: time="2025-07-06T23:37:51.198745736Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:37:51.198871 dockerd[1734]: time="2025-07-06T23:37:51.198852286Z" level=info msg="Initializing buildkit"
Jul 6 23:37:51.247744 dockerd[1734]: time="2025-07-06T23:37:51.247637672Z" level=info msg="Completed buildkit initialization"
Jul 6 23:37:51.252295 dockerd[1734]: time="2025-07-06T23:37:51.252259062Z" level=info msg="Daemon has completed initialization"
Jul 6 23:37:51.252378 dockerd[1734]: time="2025-07-06T23:37:51.252327919Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:37:51.252534 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:37:51.749602 containerd[1513]: time="2025-07-06T23:37:51.749568721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 6 23:37:52.345265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809214026.mount: Deactivated successfully.
Jul 6 23:37:53.262820 containerd[1513]: time="2025-07-06T23:37:53.262775339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:53.263850 containerd[1513]: time="2025-07-06T23:37:53.263814545Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 6 23:37:53.264987 containerd[1513]: time="2025-07-06T23:37:53.264943185Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:53.267631 containerd[1513]: time="2025-07-06T23:37:53.267592284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:53.268649 containerd[1513]: time="2025-07-06T23:37:53.268618989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.519010108s"
Jul 6 23:37:53.268704 containerd[1513]: time="2025-07-06T23:37:53.268654330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 6 23:37:53.271946 containerd[1513]: time="2025-07-06T23:37:53.271910757Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 6 23:37:54.366140 containerd[1513]: time="2025-07-06T23:37:54.365957204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:54.367031 containerd[1513]: time="2025-07-06T23:37:54.366999432Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 6 23:37:54.368839 containerd[1513]: time="2025-07-06T23:37:54.368784505Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:54.372168 containerd[1513]: time="2025-07-06T23:37:54.372101915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:54.373064 containerd[1513]: time="2025-07-06T23:37:54.373018977Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.101067867s"
Jul 6 23:37:54.373064 containerd[1513]: time="2025-07-06T23:37:54.373051165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 6 23:37:54.373508 containerd[1513]: time="2025-07-06T23:37:54.373465551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 6 23:37:55.475911 containerd[1513]: time="2025-07-06T23:37:55.475833740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:55.476844 containerd[1513]: time="2025-07-06T23:37:55.476795027Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 6 23:37:55.477417 containerd[1513]: time="2025-07-06T23:37:55.477377807Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:55.479855 containerd[1513]: time="2025-07-06T23:37:55.479821307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:55.481113 containerd[1513]: time="2025-07-06T23:37:55.481074300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.107574965s"
Jul 6 23:37:55.481142 containerd[1513]: time="2025-07-06T23:37:55.481112431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 6 23:37:55.481570 containerd[1513]: time="2025-07-06T23:37:55.481534437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 6 23:37:57.107242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785364611.mount: Deactivated successfully.
Jul 6 23:37:57.492508 containerd[1513]: time="2025-07-06T23:37:57.492266047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:57.493283 containerd[1513]: time="2025-07-06T23:37:57.493254654Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 6 23:37:57.494277 containerd[1513]: time="2025-07-06T23:37:57.494231461Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:57.496667 containerd[1513]: time="2025-07-06T23:37:57.496597038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:57.497364 containerd[1513]: time="2025-07-06T23:37:57.497216724Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.01565163s"
Jul 6 23:37:57.497364 containerd[1513]: time="2025-07-06T23:37:57.497252638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 6 23:37:57.497855 containerd[1513]: time="2025-07-06T23:37:57.497800021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 6 23:37:58.173813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:37:58.175532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:37:58.181595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930786452.mount: Deactivated successfully.
Jul 6 23:37:58.346000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:37:58.350938 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:37:58.396502 kubelet[2035]: E0706 23:37:58.396454 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:37:58.401128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:37:58.401274 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:37:58.401833 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107M memory peak.
Jul 6 23:37:59.029089 containerd[1513]: time="2025-07-06T23:37:59.029032204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:59.030654 containerd[1513]: time="2025-07-06T23:37:59.030618856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 6 23:37:59.031939 containerd[1513]: time="2025-07-06T23:37:59.031901752Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:59.035408 containerd[1513]: time="2025-07-06T23:37:59.035371787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:37:59.036356 containerd[1513]: time="2025-07-06T23:37:59.036325365Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.538371215s"
Jul 6 23:37:59.036395 containerd[1513]: time="2025-07-06T23:37:59.036356364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 6 23:37:59.036804 containerd[1513]: time="2025-07-06T23:37:59.036765540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:37:59.504546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681189869.mount: Deactivated successfully.
Jul 6 23:37:59.512888 containerd[1513]: time="2025-07-06T23:37:59.512807751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:37:59.513565 containerd[1513]: time="2025-07-06T23:37:59.513503337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:37:59.514335 containerd[1513]: time="2025-07-06T23:37:59.514311012Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:37:59.516193 containerd[1513]: time="2025-07-06T23:37:59.516139462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:37:59.516822 containerd[1513]: time="2025-07-06T23:37:59.516769598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.970321ms"
Jul 6 23:37:59.516822 containerd[1513]: time="2025-07-06T23:37:59.516807305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:37:59.517448 containerd[1513]: time="2025-07-06T23:37:59.517248670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 6 23:37:59.944514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872027552.mount: Deactivated successfully.
Jul 6 23:38:01.425178 containerd[1513]: time="2025-07-06T23:38:01.425095752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:38:01.425656 containerd[1513]: time="2025-07-06T23:38:01.425613950Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 6 23:38:01.426659 containerd[1513]: time="2025-07-06T23:38:01.426623188Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:38:01.430094 containerd[1513]: time="2025-07-06T23:38:01.430058516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:38:01.431065 containerd[1513]: time="2025-07-06T23:38:01.431030495Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.913750056s"
Jul 6 23:38:01.431092 containerd[1513]: time="2025-07-06T23:38:01.431066282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 6 23:38:05.373264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:05.373424 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107M memory peak.
Jul 6 23:38:05.375423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:38:05.397986 systemd[1]: Reload requested from client PID 2173 ('systemctl') (unit session-7.scope)...
Jul 6 23:38:05.398005 systemd[1]: Reloading...
Jul 6 23:38:05.474951 zram_generator::config[2220]: No configuration found.
Jul 6 23:38:05.609628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:38:05.712608 systemd[1]: Reloading finished in 314 ms.
Jul 6 23:38:05.773538 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:38:05.773629 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:38:05.773881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:05.773954 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Jul 6 23:38:05.775650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:38:05.888538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:05.892490 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:38:05.926414 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:38:05.926414 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:38:05.926414 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:38:05.927523 kubelet[2261]: I0706 23:38:05.927452 2261 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:38:06.748018 kubelet[2261]: I0706 23:38:06.747659 2261 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:38:06.748018 kubelet[2261]: I0706 23:38:06.747692 2261 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:38:06.748018 kubelet[2261]: I0706 23:38:06.747909 2261 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:38:06.793093 kubelet[2261]: E0706 23:38:06.793044 2261 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:38:06.795916 kubelet[2261]: I0706 23:38:06.795887 2261 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:38:06.805401 kubelet[2261]: I0706 23:38:06.805369 2261 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:38:06.808716 kubelet[2261]: I0706 23:38:06.808693 2261 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:38:06.809904 kubelet[2261]: I0706 23:38:06.809850 2261 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:38:06.810071 kubelet[2261]: I0706 23:38:06.809896 2261 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:38:06.810167 kubelet[2261]: I0706 23:38:06.810132 2261 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:38:06.810167 kubelet[2261]: I0706 23:38:06.810141 2261 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:38:06.810354 kubelet[2261]: I0706 23:38:06.810328 2261 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:38:06.815304 kubelet[2261]: I0706 23:38:06.815276 2261 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:38:06.815304 kubelet[2261]: I0706 23:38:06.815305 2261 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:38:06.815380 kubelet[2261]: I0706 23:38:06.815332 2261 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:38:06.816565 kubelet[2261]: I0706 23:38:06.816447 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:38:06.826495 kubelet[2261]: I0706 23:38:06.819156 2261 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:38:06.826495 kubelet[2261]: I0706 23:38:06.820476 2261 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:38:06.826495 kubelet[2261]: W0706 23:38:06.820909 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:38:06.826495 kubelet[2261]: E0706 23:38:06.822374 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:38:06.826495 kubelet[2261]: I0706 23:38:06.825686 2261 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:38:06.826495 kubelet[2261]: I0706 23:38:06.825733 2261 server.go:1289] "Started kubelet"
Jul 6 23:38:06.827966 kubelet[2261]: E0706 23:38:06.827458 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:38:06.827966 kubelet[2261]: I0706 23:38:06.827556 2261 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:38:06.832269 kubelet[2261]: I0706 23:38:06.832184 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:38:06.832504 kubelet[2261]: I0706 23:38:06.832486 2261 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:38:06.834127 kubelet[2261]: I0706 23:38:06.834101 2261 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:38:06.837102 kubelet[2261]: I0706 23:38:06.837026 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:38:06.837216 kubelet[2261]: I0706 23:38:06.837178 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:38:06.837856 kubelet[2261]: I0706 23:38:06.837493 2261 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:38:06.837856 kubelet[2261]: I0706 23:38:06.837826 2261 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:38:06.837856 kubelet[2261]: I0706 23:38:06.837890 2261 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:38:06.837856 kubelet[2261]: E0706 23:38:06.837609 2261 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:38:06.840347 kubelet[2261]: E0706 23:38:06.840303 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:38:06.840456 kubelet[2261]: E0706 23:38:06.840382 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms"
Jul 6 23:38:06.840655 kubelet[2261]: I0706 23:38:06.840582 2261 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:38:06.840958 kubelet[2261]: I0706 23:38:06.840694 2261 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:38:06.841672 kubelet[2261]: E0706 23:38:06.841494 2261 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:38:06.842428 kubelet[2261]: I0706 23:38:06.842404 2261 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:38:06.842863 kubelet[2261]: E0706 23:38:06.840967 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcdd325c253a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:38:06.825698209 +0000 UTC m=+0.929992566,LastTimestamp:2025-07-06 23:38:06.825698209 +0000 UTC m=+0.929992566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 6 23:38:06.853356 kubelet[2261]: I0706 23:38:06.853330 2261 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:38:06.853356 kubelet[2261]: I0706 23:38:06.853350 2261 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:38:06.854071 kubelet[2261]: I0706 23:38:06.853369 2261 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:38:06.860417 kubelet[2261]: I0706 23:38:06.860369 2261 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:38:06.861552 kubelet[2261]: I0706 23:38:06.861523 2261 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:38:06.861552 kubelet[2261]: I0706 23:38:06.861546 2261 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:38:06.861626 kubelet[2261]: I0706 23:38:06.861571 2261 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:38:06.861626 kubelet[2261]: I0706 23:38:06.861581 2261 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 6 23:38:06.861671 kubelet[2261]: E0706 23:38:06.861641 2261 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:38:06.862179 kubelet[2261]: E0706 23:38:06.862149 2261 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 6 23:38:06.872029 kubelet[2261]: I0706 23:38:06.871963 2261 policy_none.go:49] "None policy: Start"
Jul 6 23:38:06.872029 kubelet[2261]: I0706 23:38:06.871994 2261 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:38:06.872029 kubelet[2261]: I0706 23:38:06.872007 2261 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:38:06.877584 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:38:06.892103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:38:06.895239 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:38:06.906735 kubelet[2261]: E0706 23:38:06.906696 2261 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 6 23:38:06.907064 kubelet[2261]: I0706 23:38:06.906909 2261 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:38:06.907129 kubelet[2261]: I0706 23:38:06.907081 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:38:06.907309 kubelet[2261]: I0706 23:38:06.907273 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:38:06.909450 kubelet[2261]: E0706 23:38:06.909424 2261 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:38:06.909625 kubelet[2261]: E0706 23:38:06.909474 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 6 23:38:06.972041 systemd[1]: Created slice kubepods-burstable-pod16e65bd0e6dbdd3eb1c209a7c3bf2f9f.slice - libcontainer container kubepods-burstable-pod16e65bd0e6dbdd3eb1c209a7c3bf2f9f.slice.
Jul 6 23:38:06.986322 kubelet[2261]: E0706 23:38:06.986263 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:06.989528 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 6 23:38:06.991702 kubelet[2261]: E0706 23:38:06.991677 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:06.993205 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 6 23:38:06.994820 kubelet[2261]: E0706 23:38:06.994798 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:07.010131 kubelet[2261]: I0706 23:38:07.010043 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:38:07.012369 kubelet[2261]: E0706 23:38:07.012335 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 6 23:38:07.040946 kubelet[2261]: E0706 23:38:07.040889 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms"
Jul 6 23:38:07.139317 kubelet[2261]: I0706 23:38:07.139216 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:07.139317 kubelet[2261]: I0706 23:38:07.139273 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:07.139477 kubelet[2261]: I0706 23:38:07.139349 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:07.139477 kubelet[2261]: I0706 23:38:07.139404 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:07.139477 kubelet[2261]: I0706 23:38:07.139424 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:07.139477 kubelet[2261]: I0706 23:38:07.139449 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:07.139477 kubelet[2261]: I0706 23:38:07.139464 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 6 23:38:07.139580 kubelet[2261]: I0706 23:38:07.139479 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:07.139580 kubelet[2261]: I0706 23:38:07.139502 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:07.214314 kubelet[2261]: I0706 23:38:07.214265 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:38:07.214658 kubelet[2261]: E0706 23:38:07.214629 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 6 23:38:07.287169 kubelet[2261]: E0706 23:38:07.287134 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.287990 containerd[1513]: time="2025-07-06T23:38:07.287786615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16e65bd0e6dbdd3eb1c209a7c3bf2f9f,Namespace:kube-system,Attempt:0,}"
Jul 6 23:38:07.292560 kubelet[2261]: E0706 23:38:07.292537 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.293092 containerd[1513]: time="2025-07-06T23:38:07.292862151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 6 23:38:07.295430 kubelet[2261]: E0706 23:38:07.295399 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.296004 containerd[1513]: time="2025-07-06T23:38:07.295978540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 6 23:38:07.316342 containerd[1513]: time="2025-07-06T23:38:07.316164555Z" level=info msg="connecting to shim aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab" address="unix:///run/containerd/s/a478fc5de5e56ab948feee121cb5127e7b2291f4f447bbe60d1d09f08b3dafe6" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:38:07.318669 containerd[1513]: time="2025-07-06T23:38:07.318631711Z" level=info msg="connecting to shim c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57" address="unix:///run/containerd/s/78acea9d180b06511c7deef777f6ff9ff4ef882ccbf72530919c08fc5c88e1c1" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:38:07.334137 containerd[1513]: time="2025-07-06T23:38:07.334044665Z" level=info msg="connecting to shim 08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7" address="unix:///run/containerd/s/02e49c8bc87b78c325f9aeca1cb989c96bbf2c277d38ba58537722d05b174ca6" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:38:07.354159 systemd[1]: Started cri-containerd-aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab.scope - libcontainer container aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab.
Jul 6 23:38:07.355344 systemd[1]: Started cri-containerd-c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57.scope - libcontainer container c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57.
Jul 6 23:38:07.358343 systemd[1]: Started cri-containerd-08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7.scope - libcontainer container 08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7.
Jul 6 23:38:07.370551 kubelet[2261]: E0706 23:38:07.370447 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcdd325c253a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:38:06.825698209 +0000 UTC m=+0.929992566,LastTimestamp:2025-07-06 23:38:06.825698209 +0000 UTC m=+0.929992566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 6 23:38:07.408380 containerd[1513]: time="2025-07-06T23:38:07.408339722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57\""
Jul 6 23:38:07.409538 kubelet[2261]: E0706 23:38:07.409485 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.441661 kubelet[2261]: E0706 23:38:07.441589 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms"
Jul 6 23:38:07.448206 containerd[1513]: time="2025-07-06T23:38:07.448166961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16e65bd0e6dbdd3eb1c209a7c3bf2f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab\""
Jul 6 23:38:07.449047 kubelet[2261]: E0706 23:38:07.449014 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.454873 containerd[1513]: time="2025-07-06T23:38:07.454829795Z" level=info msg="CreateContainer within sandbox \"c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:38:07.459416 containerd[1513]: time="2025-07-06T23:38:07.459349051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7\""
Jul 6 23:38:07.460211 kubelet[2261]: E0706 23:38:07.460189 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.460473 containerd[1513]: time="2025-07-06T23:38:07.460448008Z" level=info msg="CreateContainer within sandbox \"aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:38:07.464613 containerd[1513]: time="2025-07-06T23:38:07.464520533Z" level=info msg="CreateContainer within sandbox \"08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:38:07.471654 containerd[1513]: time="2025-07-06T23:38:07.471576640Z" level=info msg="Container 1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:38:07.475779 containerd[1513]: time="2025-07-06T23:38:07.475349283Z" level=info msg="Container 4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:38:07.477870 containerd[1513]: time="2025-07-06T23:38:07.477824178Z" level=info msg="Container a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:38:07.483581 containerd[1513]: time="2025-07-06T23:38:07.483532431Z" level=info msg="CreateContainer within sandbox \"c5aae2a363fc965ded9180dec635bd0bbc471602454254f0827e839b32315e57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5\""
Jul 6 23:38:07.484370 containerd[1513]: time="2025-07-06T23:38:07.484335215Z" level=info msg="StartContainer for \"1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5\""
Jul 6 23:38:07.485555 containerd[1513]: time="2025-07-06T23:38:07.485529358Z" level=info msg="connecting to shim 1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5" address="unix:///run/containerd/s/78acea9d180b06511c7deef777f6ff9ff4ef882ccbf72530919c08fc5c88e1c1" protocol=ttrpc version=3
Jul 6 23:38:07.488332 containerd[1513]: time="2025-07-06T23:38:07.488282354Z" level=info msg="CreateContainer within sandbox \"08a7cf1f7a3c0e1cec286edaeaa008204c9772b5a672f8e678859b917b3f91a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6\""
Jul 6 23:38:07.488977 containerd[1513]: time="2025-07-06T23:38:07.488765229Z" level=info msg="StartContainer for \"a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6\""
Jul 6 23:38:07.489847 containerd[1513]: time="2025-07-06T23:38:07.489811326Z" level=info msg="connecting to shim a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6" address="unix:///run/containerd/s/02e49c8bc87b78c325f9aeca1cb989c96bbf2c277d38ba58537722d05b174ca6" protocol=ttrpc version=3
Jul 6 23:38:07.490900 containerd[1513]: time="2025-07-06T23:38:07.490854391Z" level=info msg="CreateContainer within sandbox \"aa8e6227e58c90145bdc3c4d0380398daf9ac0894e8723fb528497ed6e7f20ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910\""
Jul 6 23:38:07.491614 containerd[1513]: time="2025-07-06T23:38:07.491578185Z" level=info msg="StartContainer for \"4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910\""
Jul 6 23:38:07.492879 containerd[1513]: time="2025-07-06T23:38:07.492833047Z" level=info msg="connecting to shim 4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910" address="unix:///run/containerd/s/a478fc5de5e56ab948feee121cb5127e7b2291f4f447bbe60d1d09f08b3dafe6" protocol=ttrpc version=3
Jul 6 23:38:07.505254 systemd[1]: Started cri-containerd-1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5.scope - libcontainer container 1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5.
Jul 6 23:38:07.509424 systemd[1]: Started cri-containerd-4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910.scope - libcontainer container 4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910.
Jul 6 23:38:07.526135 systemd[1]: Started cri-containerd-a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6.scope - libcontainer container a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6.
Jul 6 23:38:07.586088 containerd[1513]: time="2025-07-06T23:38:07.585165196Z" level=info msg="StartContainer for \"1107a83caf14ce38bb2b260ce482ae879739e4aa214357d1777a1b5a08caeac5\" returns successfully"
Jul 6 23:38:07.593781 containerd[1513]: time="2025-07-06T23:38:07.593662349Z" level=info msg="StartContainer for \"a0a0fdd0c99ad3841c4e644967ff127a1ff606bd4135ed57a9a00ee46a7d71e6\" returns successfully"
Jul 6 23:38:07.597808 containerd[1513]: time="2025-07-06T23:38:07.597776284Z" level=info msg="StartContainer for \"4715b95e63b2a800a1146cca862702d777476a9c775a0bf0d5e5e59bd3f61910\" returns successfully"
Jul 6 23:38:07.616597 kubelet[2261]: I0706 23:38:07.616554 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:38:07.616891 kubelet[2261]: E0706 23:38:07.616856 2261 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 6 23:38:07.867274 kubelet[2261]: E0706 23:38:07.867152 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:07.867385 kubelet[2261]: E0706 23:38:07.867288 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.869757 kubelet[2261]: E0706 23:38:07.869728 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:07.869888 kubelet[2261]: E0706 23:38:07.869873 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:07.871267 kubelet[2261]: E0706 23:38:07.871232 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:07.871365 kubelet[2261]: E0706 23:38:07.871347 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:08.418417 kubelet[2261]: I0706 23:38:08.418382 2261 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 6 23:38:08.874744 kubelet[2261]: E0706 23:38:08.874539 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:08.874744 kubelet[2261]: E0706 23:38:08.874637 2261 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 6 23:38:08.874744 kubelet[2261]: E0706 23:38:08.874678 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:08.874744 kubelet[2261]: E0706 23:38:08.874742 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:09.487024 kubelet[2261]: E0706 23:38:09.486986 2261 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 6 23:38:09.524021 kubelet[2261]: I0706 23:38:09.523961 2261 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 6 23:38:09.538103 kubelet[2261]: I0706 23:38:09.538039 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:09.546501 kubelet[2261]: E0706 23:38:09.546467 2261 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:09.546686 kubelet[2261]: I0706 23:38:09.546613 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:09.553337 kubelet[2261]: E0706 23:38:09.553279 2261 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:38:09.553337 kubelet[2261]: I0706 23:38:09.553313 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:38:09.557912 kubelet[2261]: E0706 23:38:09.557862 2261 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:38:09.818369 kubelet[2261]: I0706 23:38:09.818330 2261 apiserver.go:52] "Watching apiserver"
Jul 6 23:38:09.838646 kubelet[2261]: I0706 23:38:09.838573 2261 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:38:11.376486 kubelet[2261]: I0706 23:38:11.376445 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 6 23:38:11.382508 kubelet[2261]: E0706 23:38:11.382456 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:11.535559 kubelet[2261]: I0706 23:38:11.535517 2261 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 6 23:38:11.541078 kubelet[2261]: E0706 23:38:11.541045 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:11.881016 kubelet[2261]: E0706 23:38:11.880949 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:11.881212 systemd[1]: Reload requested from client PID 2555 ('systemctl') (unit session-7.scope)...
Jul 6 23:38:11.881230 systemd[1]: Reloading...
Jul 6 23:38:11.882003 kubelet[2261]: E0706 23:38:11.881965 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:38:11.950988 zram_generator::config[2601]: No configuration found.
Jul 6 23:38:12.028026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:38:12.157509 systemd[1]: Reloading finished in 275 ms.
Jul 6 23:38:12.183799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:38:12.201168 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:38:12.201665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:12.201814 systemd[1]: kubelet.service: Consumed 1.270s CPU time, 129.1M memory peak.
Jul 6 23:38:12.203854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:38:12.356115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:38:12.370346 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:38:12.418080 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:38:12.418080 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:38:12.418080 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:38:12.418394 kubelet[2640]: I0706 23:38:12.418060 2640 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:38:12.425748 kubelet[2640]: I0706 23:38:12.425705 2640 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:38:12.425748 kubelet[2640]: I0706 23:38:12.425736 2640 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:38:12.425997 kubelet[2640]: I0706 23:38:12.425976 2640 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:38:12.429620 kubelet[2640]: I0706 23:38:12.429585 2640 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 6 23:38:12.432448 kubelet[2640]: I0706 23:38:12.432416 2640 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:38:12.437507 kubelet[2640]: I0706 23:38:12.437458 2640 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:38:12.440048 kubelet[2640]: I0706 23:38:12.440022 2640 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:38:12.440243 kubelet[2640]: I0706 23:38:12.440217 2640 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:38:12.440404 kubelet[2640]: I0706 23:38:12.440244 2640 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Jul 6 23:38:12.440481 kubelet[2640]: I0706 23:38:12.440413 2640 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:38:12.440481 kubelet[2640]: I0706 23:38:12.440423 2640 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:38:12.440481 kubelet[2640]: I0706 23:38:12.440465 2640 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:38:12.440626 kubelet[2640]: I0706 23:38:12.440604 2640 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:38:12.440626 kubelet[2640]: I0706 23:38:12.440621 2640 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:38:12.440676 kubelet[2640]: I0706 23:38:12.440647 2640 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:38:12.440676 kubelet[2640]: I0706 23:38:12.440660 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:38:12.441599 kubelet[2640]: I0706 23:38:12.441401 2640 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:38:12.442009 kubelet[2640]: I0706 23:38:12.441962 2640 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:38:12.450339 kubelet[2640]: I0706 23:38:12.450189 2640 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:38:12.450339 kubelet[2640]: I0706 23:38:12.450247 2640 server.go:1289] "Started kubelet" Jul 6 23:38:12.451918 kubelet[2640]: I0706 23:38:12.451872 2640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:38:12.455897 kubelet[2640]: I0706 23:38:12.454801 2640 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:38:12.458001 kubelet[2640]: I0706 23:38:12.457967 2640 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 
23:38:12.458427 kubelet[2640]: I0706 23:38:12.458390 2640 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:38:12.458549 kubelet[2640]: I0706 23:38:12.458509 2640 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:38:12.459108 kubelet[2640]: I0706 23:38:12.454916 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:38:12.459973 kubelet[2640]: I0706 23:38:12.455515 2640 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:38:12.459973 kubelet[2640]: I0706 23:38:12.459779 2640 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:38:12.460758 kubelet[2640]: I0706 23:38:12.460258 2640 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:38:12.460758 kubelet[2640]: I0706 23:38:12.455527 2640 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:38:12.460758 kubelet[2640]: E0706 23:38:12.455648 2640 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:38:12.460907 kubelet[2640]: I0706 23:38:12.460889 2640 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:38:12.462952 kubelet[2640]: I0706 23:38:12.462886 2640 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:38:12.464868 kubelet[2640]: E0706 23:38:12.464726 2640 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:38:12.476083 kubelet[2640]: I0706 23:38:12.475882 2640 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 6 23:38:12.479646 kubelet[2640]: I0706 23:38:12.479621 2640 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:38:12.479814 kubelet[2640]: I0706 23:38:12.479788 2640 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:38:12.479844 kubelet[2640]: I0706 23:38:12.479830 2640 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:38:12.479844 kubelet[2640]: I0706 23:38:12.479840 2640 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:38:12.479946 kubelet[2640]: E0706 23:38:12.479900 2640 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:38:12.507990 kubelet[2640]: I0706 23:38:12.507964 2640 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:38:12.508154 kubelet[2640]: I0706 23:38:12.508139 2640 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:38:12.508215 kubelet[2640]: I0706 23:38:12.508207 2640 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:38:12.508403 kubelet[2640]: I0706 23:38:12.508386 2640 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:38:12.508494 kubelet[2640]: I0706 23:38:12.508471 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:38:12.508539 kubelet[2640]: I0706 23:38:12.508532 2640 policy_none.go:49] "None policy: Start" Jul 6 23:38:12.508598 kubelet[2640]: I0706 23:38:12.508589 2640 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:38:12.508652 kubelet[2640]: I0706 23:38:12.508644 2640 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:38:12.508801 kubelet[2640]: I0706 23:38:12.508787 2640 state_mem.go:75] "Updated machine memory state" Jul 6 23:38:12.513248 kubelet[2640]: E0706 23:38:12.513220 2640 manager.go:517] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:38:12.513412 kubelet[2640]: I0706 23:38:12.513390 2640 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:38:12.513456 kubelet[2640]: I0706 23:38:12.513410 2640 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:38:12.513614 kubelet[2640]: I0706 23:38:12.513596 2640 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:38:12.514969 kubelet[2640]: E0706 23:38:12.514744 2640 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:38:12.581334 kubelet[2640]: I0706 23:38:12.581256 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.581444 kubelet[2640]: I0706 23:38:12.581315 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:12.581466 kubelet[2640]: I0706 23:38:12.581368 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:38:12.591758 kubelet[2640]: E0706 23:38:12.591686 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:12.591758 kubelet[2640]: E0706 23:38:12.591681 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:38:12.617115 kubelet[2640]: I0706 23:38:12.617088 2640 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:38:12.673818 kubelet[2640]: I0706 23:38:12.673649 2640 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:38:12.673818 kubelet[2640]: I0706 23:38:12.673768 2640 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:38:12.761320 kubelet[2640]: I0706 23:38:12.761275 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:38:12.761320 kubelet[2640]: I0706 23:38:12.761321 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:12.761496 kubelet[2640]: I0706 23:38:12.761346 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:12.761496 kubelet[2640]: I0706 23:38:12.761363 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16e65bd0e6dbdd3eb1c209a7c3bf2f9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16e65bd0e6dbdd3eb1c209a7c3bf2f9f\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:12.761496 kubelet[2640]: I0706 23:38:12.761381 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.761496 kubelet[2640]: I0706 23:38:12.761396 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.761496 kubelet[2640]: I0706 23:38:12.761410 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.761607 kubelet[2640]: I0706 23:38:12.761425 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.761607 kubelet[2640]: I0706 23:38:12.761441 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:12.893098 kubelet[2640]: E0706 23:38:12.892946 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:12.893098 kubelet[2640]: E0706 23:38:12.892950 2640 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:12.893098 kubelet[2640]: E0706 23:38:12.893070 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:13.441555 kubelet[2640]: I0706 23:38:13.441503 2640 apiserver.go:52] "Watching apiserver" Jul 6 23:38:13.461759 kubelet[2640]: I0706 23:38:13.461712 2640 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:38:13.501413 kubelet[2640]: I0706 23:38:13.501375 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:13.502163 kubelet[2640]: I0706 23:38:13.501648 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:38:13.502510 kubelet[2640]: I0706 23:38:13.502489 2640 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:13.523846 kubelet[2640]: E0706 23:38:13.523801 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:38:13.524092 kubelet[2640]: E0706 23:38:13.524075 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:13.546808 kubelet[2640]: E0706 23:38:13.546216 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:38:13.547204 kubelet[2640]: E0706 23:38:13.547017 2640 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Jul 6 23:38:13.547204 kubelet[2640]: E0706 23:38:13.547164 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:13.547356 kubelet[2640]: E0706 23:38:13.547315 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:13.624967 kubelet[2640]: I0706 23:38:13.624712 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.6246942669999997 podStartE2EDuration="2.624694267s" podCreationTimestamp="2025-07-06 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:13.546339248 +0000 UTC m=+1.172887932" watchObservedRunningTime="2025-07-06 23:38:13.624694267 +0000 UTC m=+1.251242951" Jul 6 23:38:13.624967 kubelet[2640]: I0706 23:38:13.624824 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.624819638 podStartE2EDuration="1.624819638s" podCreationTimestamp="2025-07-06 23:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:13.624068733 +0000 UTC m=+1.250617417" watchObservedRunningTime="2025-07-06 23:38:13.624819638 +0000 UTC m=+1.251368282" Jul 6 23:38:13.650671 kubelet[2640]: I0706 23:38:13.650539 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6504996199999997 podStartE2EDuration="2.65049962s" podCreationTimestamp="2025-07-06 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:13.637492568 +0000 UTC m=+1.264041252" watchObservedRunningTime="2025-07-06 23:38:13.65049962 +0000 UTC m=+1.277048304" Jul 6 23:38:14.503791 kubelet[2640]: E0706 23:38:14.503325 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:14.503791 kubelet[2640]: E0706 23:38:14.503391 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:14.503791 kubelet[2640]: E0706 23:38:14.503500 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:15.504348 kubelet[2640]: E0706 23:38:15.504311 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:16.628895 kubelet[2640]: E0706 23:38:16.628781 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:16.692129 kubelet[2640]: I0706 23:38:16.692084 2640 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:38:16.692458 containerd[1513]: time="2025-07-06T23:38:16.692422982Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:38:16.692968 kubelet[2640]: I0706 23:38:16.692949 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:38:17.415163 systemd[1]: Created slice kubepods-besteffort-podd63dad51_4239_4f3b_aec8_ec896d3a9c44.slice - libcontainer container kubepods-besteffort-podd63dad51_4239_4f3b_aec8_ec896d3a9c44.slice. Jul 6 23:38:17.495377 kubelet[2640]: I0706 23:38:17.495241 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d63dad51-4239-4f3b-aec8-ec896d3a9c44-kube-proxy\") pod \"kube-proxy-4x7gh\" (UID: \"d63dad51-4239-4f3b-aec8-ec896d3a9c44\") " pod="kube-system/kube-proxy-4x7gh" Jul 6 23:38:17.495377 kubelet[2640]: I0706 23:38:17.495289 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d63dad51-4239-4f3b-aec8-ec896d3a9c44-xtables-lock\") pod \"kube-proxy-4x7gh\" (UID: \"d63dad51-4239-4f3b-aec8-ec896d3a9c44\") " pod="kube-system/kube-proxy-4x7gh" Jul 6 23:38:17.495377 kubelet[2640]: I0706 23:38:17.495305 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d63dad51-4239-4f3b-aec8-ec896d3a9c44-lib-modules\") pod \"kube-proxy-4x7gh\" (UID: \"d63dad51-4239-4f3b-aec8-ec896d3a9c44\") " pod="kube-system/kube-proxy-4x7gh" Jul 6 23:38:17.495377 kubelet[2640]: I0706 23:38:17.495321 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz77v\" (UniqueName: \"kubernetes.io/projected/d63dad51-4239-4f3b-aec8-ec896d3a9c44-kube-api-access-tz77v\") pod \"kube-proxy-4x7gh\" (UID: \"d63dad51-4239-4f3b-aec8-ec896d3a9c44\") " pod="kube-system/kube-proxy-4x7gh" Jul 6 23:38:17.724701 kubelet[2640]: E0706 23:38:17.724303 2640 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:17.726109 containerd[1513]: time="2025-07-06T23:38:17.726068163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x7gh,Uid:d63dad51-4239-4f3b-aec8-ec896d3a9c44,Namespace:kube-system,Attempt:0,}" Jul 6 23:38:17.736579 systemd[1]: Created slice kubepods-besteffort-pod0917f8d4_e8f5_41cc_a511_786dcf14db94.slice - libcontainer container kubepods-besteffort-pod0917f8d4_e8f5_41cc_a511_786dcf14db94.slice. Jul 6 23:38:17.752117 containerd[1513]: time="2025-07-06T23:38:17.752068600Z" level=info msg="connecting to shim 030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4" address="unix:///run/containerd/s/8c537949c0a6780c778f89da5f8d463213726711631d02c4f3da61ac8a3bea30" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:17.781149 systemd[1]: Started cri-containerd-030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4.scope - libcontainer container 030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4. 
Jul 6 23:38:17.796704 kubelet[2640]: I0706 23:38:17.796624 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0917f8d4-e8f5-41cc-a511-786dcf14db94-var-lib-calico\") pod \"tigera-operator-747864d56d-xz4h9\" (UID: \"0917f8d4-e8f5-41cc-a511-786dcf14db94\") " pod="tigera-operator/tigera-operator-747864d56d-xz4h9" Jul 6 23:38:17.796704 kubelet[2640]: I0706 23:38:17.796676 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kgf\" (UniqueName: \"kubernetes.io/projected/0917f8d4-e8f5-41cc-a511-786dcf14db94-kube-api-access-v4kgf\") pod \"tigera-operator-747864d56d-xz4h9\" (UID: \"0917f8d4-e8f5-41cc-a511-786dcf14db94\") " pod="tigera-operator/tigera-operator-747864d56d-xz4h9" Jul 6 23:38:17.805167 containerd[1513]: time="2025-07-06T23:38:17.805122021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x7gh,Uid:d63dad51-4239-4f3b-aec8-ec896d3a9c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4\"" Jul 6 23:38:17.805850 kubelet[2640]: E0706 23:38:17.805827 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:17.811151 containerd[1513]: time="2025-07-06T23:38:17.811114081Z" level=info msg="CreateContainer within sandbox \"030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:38:17.821452 containerd[1513]: time="2025-07-06T23:38:17.821412892Z" level=info msg="Container 89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:17.824576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134458299.mount: Deactivated successfully. 
Jul 6 23:38:17.830255 containerd[1513]: time="2025-07-06T23:38:17.830201076Z" level=info msg="CreateContainer within sandbox \"030f839b5161f02f7bc45391cda91a6e7b3c4de15a339758b19694a2a84905f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897\"" Jul 6 23:38:17.831120 containerd[1513]: time="2025-07-06T23:38:17.831093198Z" level=info msg="StartContainer for \"89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897\"" Jul 6 23:38:17.833494 containerd[1513]: time="2025-07-06T23:38:17.833468244Z" level=info msg="connecting to shim 89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897" address="unix:///run/containerd/s/8c537949c0a6780c778f89da5f8d463213726711631d02c4f3da61ac8a3bea30" protocol=ttrpc version=3 Jul 6 23:38:17.854141 systemd[1]: Started cri-containerd-89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897.scope - libcontainer container 89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897. 
Jul 6 23:38:17.895295 containerd[1513]: time="2025-07-06T23:38:17.894881164Z" level=info msg="StartContainer for \"89b162fa8ba717425efd07ef8645493938067f38962eb4b4a9baf7649788f897\" returns successfully" Jul 6 23:38:18.040795 containerd[1513]: time="2025-07-06T23:38:18.040748357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xz4h9,Uid:0917f8d4-e8f5-41cc-a511-786dcf14db94,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:38:18.062354 containerd[1513]: time="2025-07-06T23:38:18.062273028Z" level=info msg="connecting to shim 8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587" address="unix:///run/containerd/s/289468455fca41c495620d8de047f212756456b0ff85225e62c8d9d14add1a4e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:18.092434 systemd[1]: Started cri-containerd-8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587.scope - libcontainer container 8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587. Jul 6 23:38:18.134914 containerd[1513]: time="2025-07-06T23:38:18.134850912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xz4h9,Uid:0917f8d4-e8f5-41cc-a511-786dcf14db94,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587\"" Jul 6 23:38:18.138101 containerd[1513]: time="2025-07-06T23:38:18.136859971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:38:18.512785 kubelet[2640]: E0706 23:38:18.512674 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:18.523522 kubelet[2640]: I0706 23:38:18.523419 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4x7gh" podStartSLOduration=1.523402551 podStartE2EDuration="1.523402551s" podCreationTimestamp="2025-07-06 23:38:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:18.523291003 +0000 UTC m=+6.149839687" watchObservedRunningTime="2025-07-06 23:38:18.523402551 +0000 UTC m=+6.149951235" Jul 6 23:38:19.738811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536808790.mount: Deactivated successfully. Jul 6 23:38:22.670743 kubelet[2640]: E0706 23:38:22.670676 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:22.851044 containerd[1513]: time="2025-07-06T23:38:22.850990526Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:38:22.854225 containerd[1513]: time="2025-07-06T23:38:22.854183600Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 4.71612101s" Jul 6 23:38:22.854225 containerd[1513]: time="2025-07-06T23:38:22.854221207Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:38:22.861183 containerd[1513]: time="2025-07-06T23:38:22.861110734Z" level=info msg="CreateContainer within sandbox \"8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:38:22.865580 containerd[1513]: time="2025-07-06T23:38:22.865536291Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:22.866270 containerd[1513]: 
time="2025-07-06T23:38:22.866238991Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:22.866910 containerd[1513]: time="2025-07-06T23:38:22.866874077Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:22.876560 containerd[1513]: time="2025-07-06T23:38:22.876521350Z" level=info msg="Container b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:22.880087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098100791.mount: Deactivated successfully. Jul 6 23:38:22.882067 containerd[1513]: time="2025-07-06T23:38:22.881995956Z" level=info msg="CreateContainer within sandbox \"8e204e4fc546472a19d0033dc694ca4a224c9ff787dd4258ea95bdf5fcc62587\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae\"" Jul 6 23:38:22.882751 containerd[1513]: time="2025-07-06T23:38:22.882728341Z" level=info msg="StartContainer for \"b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae\"" Jul 6 23:38:22.883756 containerd[1513]: time="2025-07-06T23:38:22.883645043Z" level=info msg="connecting to shim b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae" address="unix:///run/containerd/s/289468455fca41c495620d8de047f212756456b0ff85225e62c8d9d14add1a4e" protocol=ttrpc version=3 Jul 6 23:38:22.923116 systemd[1]: Started cri-containerd-b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae.scope - libcontainer container b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae. 
Jul 6 23:38:22.951306 containerd[1513]: time="2025-07-06T23:38:22.951266735Z" level=info msg="StartContainer for \"b0eaa800ff49ae5f9a27b94df564daac30a3791f5c3193440f982ea8bf94bbae\" returns successfully" Jul 6 23:38:23.524401 kubelet[2640]: E0706 23:38:23.523942 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:23.534430 kubelet[2640]: I0706 23:38:23.534360 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-xz4h9" podStartSLOduration=1.813884151 podStartE2EDuration="6.533666807s" podCreationTimestamp="2025-07-06 23:38:17 +0000 UTC" firstStartedPulling="2025-07-06 23:38:18.136466313 +0000 UTC m=+5.763014957" lastFinishedPulling="2025-07-06 23:38:22.856248929 +0000 UTC m=+10.482797613" observedRunningTime="2025-07-06 23:38:23.533563667 +0000 UTC m=+11.160112351" watchObservedRunningTime="2025-07-06 23:38:23.533666807 +0000 UTC m=+11.160215451" Jul 6 23:38:24.750601 kubelet[2640]: E0706 23:38:24.750550 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:26.636315 kubelet[2640]: E0706 23:38:26.636232 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:28.481050 sudo[1713]: pam_unix(sudo:session): session closed for user root Jul 6 23:38:28.494003 sshd[1712]: Connection closed by 10.0.0.1 port 55540 Jul 6 23:38:28.494488 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:28.498951 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:55540.service: Deactivated successfully. Jul 6 23:38:28.501552 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 6 23:38:28.501982 systemd[1]: session-7.scope: Consumed 6.356s CPU time, 221.9M memory peak. Jul 6 23:38:28.503270 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:38:28.506705 systemd-logind[1486]: Removed session 7. Jul 6 23:38:31.388490 update_engine[1493]: I20250706 23:38:31.387959 1493 update_attempter.cc:509] Updating boot flags... Jul 6 23:38:34.792065 systemd[1]: Created slice kubepods-besteffort-pod48d57bca_980c_408c_bf56_c4a191a9488f.slice - libcontainer container kubepods-besteffort-pod48d57bca_980c_408c_bf56_c4a191a9488f.slice. Jul 6 23:38:34.807993 kubelet[2640]: I0706 23:38:34.807865 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48d57bca-980c-408c-bf56-c4a191a9488f-typha-certs\") pod \"calico-typha-8654c64c6b-879pc\" (UID: \"48d57bca-980c-408c-bf56-c4a191a9488f\") " pod="calico-system/calico-typha-8654c64c6b-879pc" Jul 6 23:38:34.807993 kubelet[2640]: I0706 23:38:34.807913 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nml6\" (UniqueName: \"kubernetes.io/projected/48d57bca-980c-408c-bf56-c4a191a9488f-kube-api-access-8nml6\") pod \"calico-typha-8654c64c6b-879pc\" (UID: \"48d57bca-980c-408c-bf56-c4a191a9488f\") " pod="calico-system/calico-typha-8654c64c6b-879pc" Jul 6 23:38:34.807993 kubelet[2640]: I0706 23:38:34.807948 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d57bca-980c-408c-bf56-c4a191a9488f-tigera-ca-bundle\") pod \"calico-typha-8654c64c6b-879pc\" (UID: \"48d57bca-980c-408c-bf56-c4a191a9488f\") " pod="calico-system/calico-typha-8654c64c6b-879pc" Jul 6 23:38:34.953708 systemd[1]: Created slice kubepods-besteffort-podf6a335d7_e5aa_4a30_865b_ed6fbbc58850.slice - libcontainer container 
kubepods-besteffort-podf6a335d7_e5aa_4a30_865b_ed6fbbc58850.slice. Jul 6 23:38:35.008883 kubelet[2640]: I0706 23:38:35.008837 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-xtables-lock\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.008883 kubelet[2640]: I0706 23:38:35.008878 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-var-lib-calico\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009043 kubelet[2640]: I0706 23:38:35.008896 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94jb\" (UniqueName: \"kubernetes.io/projected/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-kube-api-access-v94jb\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009043 kubelet[2640]: I0706 23:38:35.008980 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-var-run-calico\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009043 kubelet[2640]: I0706 23:38:35.009023 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-tigera-ca-bundle\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " 
pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009108 kubelet[2640]: I0706 23:38:35.009057 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-cni-log-dir\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009147 kubelet[2640]: I0706 23:38:35.009105 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-cni-bin-dir\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009184 kubelet[2640]: I0706 23:38:35.009166 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-cni-net-dir\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009217 kubelet[2640]: I0706 23:38:35.009201 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-lib-modules\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009252 kubelet[2640]: I0706 23:38:35.009234 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-policysync\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009282 kubelet[2640]: I0706 23:38:35.009270 2640 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-flexvol-driver-host\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.009305 kubelet[2640]: I0706 23:38:35.009291 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f6a335d7-e5aa-4a30-865b-ed6fbbc58850-node-certs\") pod \"calico-node-rdztx\" (UID: \"f6a335d7-e5aa-4a30-865b-ed6fbbc58850\") " pod="calico-system/calico-node-rdztx" Jul 6 23:38:35.097829 kubelet[2640]: E0706 23:38:35.097320 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:35.098147 containerd[1513]: time="2025-07-06T23:38:35.098084675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8654c64c6b-879pc,Uid:48d57bca-980c-408c-bf56-c4a191a9488f,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:35.114348 kubelet[2640]: E0706 23:38:35.114300 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.114595 kubelet[2640]: W0706 23:38:35.114409 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.116202 kubelet[2640]: E0706 23:38:35.115794 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.120615 kubelet[2640]: E0706 23:38:35.120586 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.120615 kubelet[2640]: W0706 23:38:35.120606 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.120715 kubelet[2640]: E0706 23:38:35.120640 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.136617 kubelet[2640]: E0706 23:38:35.135607 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.136617 kubelet[2640]: W0706 23:38:35.135666 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.136617 kubelet[2640]: E0706 23:38:35.135688 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.160825 containerd[1513]: time="2025-07-06T23:38:35.159690802Z" level=info msg="connecting to shim b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f" address="unix:///run/containerd/s/930fc859f29ca0bbc1dc9420154f4ffd4913adc9ef58198436c1f8eb26766112" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:35.192443 systemd[1]: Started cri-containerd-b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f.scope - libcontainer container b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f. 
Jul 6 23:38:35.197701 kubelet[2640]: E0706 23:38:35.197133 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5bgr" podUID="6d5d40c2-7d76-440e-95ce-43f964a9f978" Jul 6 23:38:35.241207 containerd[1513]: time="2025-07-06T23:38:35.241166345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8654c64c6b-879pc,Uid:48d57bca-980c-408c-bf56-c4a191a9488f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f\"" Jul 6 23:38:35.242147 kubelet[2640]: E0706 23:38:35.242109 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:35.243487 containerd[1513]: time="2025-07-06T23:38:35.243380249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:38:35.269189 containerd[1513]: time="2025-07-06T23:38:35.269147422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rdztx,Uid:f6a335d7-e5aa-4a30-865b-ed6fbbc58850,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:35.293017 kubelet[2640]: E0706 23:38:35.292977 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.293121 kubelet[2640]: W0706 23:38:35.293004 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.293121 kubelet[2640]: E0706 23:38:35.293070 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.293749 kubelet[2640]: E0706 23:38:35.293309 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.293749 kubelet[2640]: W0706 23:38:35.293322 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.293749 kubelet[2640]: E0706 23:38:35.293370 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.293749 kubelet[2640]: E0706 23:38:35.293511 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.293749 kubelet[2640]: W0706 23:38:35.293519 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.293749 kubelet[2640]: E0706 23:38:35.293545 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.294007 kubelet[2640]: E0706 23:38:35.293800 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.294007 kubelet[2640]: W0706 23:38:35.293810 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.294007 kubelet[2640]: E0706 23:38:35.293819 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.294250 kubelet[2640]: E0706 23:38:35.294224 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.294250 kubelet[2640]: W0706 23:38:35.294241 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.294637 kubelet[2640]: E0706 23:38:35.294273 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.294946 kubelet[2640]: E0706 23:38:35.294867 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.294946 kubelet[2640]: W0706 23:38:35.294884 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.294946 kubelet[2640]: E0706 23:38:35.294895 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.295374 kubelet[2640]: E0706 23:38:35.295355 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.295374 kubelet[2640]: W0706 23:38:35.295371 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.295456 kubelet[2640]: E0706 23:38:35.295383 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.295617 kubelet[2640]: E0706 23:38:35.295602 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.295617 kubelet[2640]: W0706 23:38:35.295613 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.295671 kubelet[2640]: E0706 23:38:35.295622 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.295827 kubelet[2640]: E0706 23:38:35.295812 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.295827 kubelet[2640]: W0706 23:38:35.295823 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.295876 kubelet[2640]: E0706 23:38:35.295831 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.295990 kubelet[2640]: E0706 23:38:35.295978 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.295990 kubelet[2640]: W0706 23:38:35.295988 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.296042 kubelet[2640]: E0706 23:38:35.295996 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.296139 kubelet[2640]: E0706 23:38:35.296128 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.296163 kubelet[2640]: W0706 23:38:35.296138 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.296163 kubelet[2640]: E0706 23:38:35.296155 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.296320 kubelet[2640]: E0706 23:38:35.296307 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.296320 kubelet[2640]: W0706 23:38:35.296318 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.296430 kubelet[2640]: E0706 23:38:35.296325 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.296658 kubelet[2640]: E0706 23:38:35.296638 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.296692 kubelet[2640]: W0706 23:38:35.296662 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.296692 kubelet[2640]: E0706 23:38:35.296674 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.296882 kubelet[2640]: E0706 23:38:35.296868 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.296882 kubelet[2640]: W0706 23:38:35.296881 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.296949 kubelet[2640]: E0706 23:38:35.296890 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.297079 kubelet[2640]: E0706 23:38:35.297053 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297079 kubelet[2640]: W0706 23:38:35.297063 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297129 kubelet[2640]: E0706 23:38:35.297089 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.297314 kubelet[2640]: E0706 23:38:35.297232 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297314 kubelet[2640]: W0706 23:38:35.297243 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297314 kubelet[2640]: E0706 23:38:35.297251 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.297415 kubelet[2640]: E0706 23:38:35.297403 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297415 kubelet[2640]: W0706 23:38:35.297413 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297460 kubelet[2640]: E0706 23:38:35.297421 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.297555 kubelet[2640]: E0706 23:38:35.297546 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297585 kubelet[2640]: W0706 23:38:35.297555 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297585 kubelet[2640]: E0706 23:38:35.297562 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.297709 kubelet[2640]: E0706 23:38:35.297698 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297733 kubelet[2640]: W0706 23:38:35.297708 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297733 kubelet[2640]: E0706 23:38:35.297717 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.297850 kubelet[2640]: E0706 23:38:35.297839 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.297850 kubelet[2640]: W0706 23:38:35.297849 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.297891 kubelet[2640]: E0706 23:38:35.297856 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.300116 containerd[1513]: time="2025-07-06T23:38:35.300071918Z" level=info msg="connecting to shim b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe" address="unix:///run/containerd/s/11c9704d0c55012f01a2a1303227bcad91aac5d069387c2636d908f38edcfe62" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:35.312391 kubelet[2640]: E0706 23:38:35.312363 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.312391 kubelet[2640]: W0706 23:38:35.312386 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.312842 kubelet[2640]: E0706 23:38:35.312404 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.312842 kubelet[2640]: I0706 23:38:35.312429 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d5d40c2-7d76-440e-95ce-43f964a9f978-varrun\") pod \"csi-node-driver-s5bgr\" (UID: \"6d5d40c2-7d76-440e-95ce-43f964a9f978\") " pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:35.316244 kubelet[2640]: E0706 23:38:35.315021 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.317673 kubelet[2640]: W0706 23:38:35.317064 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.317673 kubelet[2640]: E0706 23:38:35.317093 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.317673 kubelet[2640]: I0706 23:38:35.317131 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d40c2-7d76-440e-95ce-43f964a9f978-socket-dir\") pod \"csi-node-driver-s5bgr\" (UID: \"6d5d40c2-7d76-440e-95ce-43f964a9f978\") " pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:35.318086 kubelet[2640]: E0706 23:38:35.317868 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.318086 kubelet[2640]: W0706 23:38:35.317884 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.318086 kubelet[2640]: E0706 23:38:35.317895 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.318086 kubelet[2640]: I0706 23:38:35.317959 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2blc\" (UniqueName: \"kubernetes.io/projected/6d5d40c2-7d76-440e-95ce-43f964a9f978-kube-api-access-v2blc\") pod \"csi-node-driver-s5bgr\" (UID: \"6d5d40c2-7d76-440e-95ce-43f964a9f978\") " pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:35.319072 kubelet[2640]: E0706 23:38:35.319033 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.319072 kubelet[2640]: W0706 23:38:35.319047 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.319072 kubelet[2640]: E0706 23:38:35.319058 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.319478 kubelet[2640]: E0706 23:38:35.319463 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.319651 kubelet[2640]: W0706 23:38:35.319596 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.319758 kubelet[2640]: E0706 23:38:35.319744 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.320243 kubelet[2640]: E0706 23:38:35.320230 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.320548 kubelet[2640]: W0706 23:38:35.320378 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.320646 kubelet[2640]: E0706 23:38:35.320633 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.320793 kubelet[2640]: I0706 23:38:35.320778 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d40c2-7d76-440e-95ce-43f964a9f978-registration-dir\") pod \"csi-node-driver-s5bgr\" (UID: \"6d5d40c2-7d76-440e-95ce-43f964a9f978\") " pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:35.321056 kubelet[2640]: E0706 23:38:35.321041 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.321665 kubelet[2640]: W0706 23:38:35.321646 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.322021 kubelet[2640]: E0706 23:38:35.321992 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.323105 systemd[1]: Started cri-containerd-b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe.scope - libcontainer container b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe. Jul 6 23:38:35.326063 kubelet[2640]: E0706 23:38:35.326046 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.326152 kubelet[2640]: W0706 23:38:35.326138 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.326212 kubelet[2640]: E0706 23:38:35.326200 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.327352 kubelet[2640]: E0706 23:38:35.327334 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.328948 kubelet[2640]: W0706 23:38:35.328932 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.329020 kubelet[2640]: E0706 23:38:35.329007 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.329298 kubelet[2640]: E0706 23:38:35.329286 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.329399 kubelet[2640]: W0706 23:38:35.329385 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.329465 kubelet[2640]: E0706 23:38:35.329454 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.329780 kubelet[2640]: E0706 23:38:35.329765 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.329873 kubelet[2640]: W0706 23:38:35.329856 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.329924 kubelet[2640]: E0706 23:38:35.329915 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.330987 kubelet[2640]: E0706 23:38:35.330969 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.331084 kubelet[2640]: W0706 23:38:35.331070 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.331145 kubelet[2640]: E0706 23:38:35.331134 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.332107 kubelet[2640]: E0706 23:38:35.332083 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.333062 kubelet[2640]: W0706 23:38:35.332961 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.333062 kubelet[2640]: E0706 23:38:35.332979 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.333062 kubelet[2640]: I0706 23:38:35.333010 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d40c2-7d76-440e-95ce-43f964a9f978-kubelet-dir\") pod \"csi-node-driver-s5bgr\" (UID: \"6d5d40c2-7d76-440e-95ce-43f964a9f978\") " pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:35.333486 kubelet[2640]: E0706 23:38:35.333421 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.333486 kubelet[2640]: W0706 23:38:35.333436 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.333486 kubelet[2640]: E0706 23:38:35.333448 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.333807 kubelet[2640]: E0706 23:38:35.333763 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.333807 kubelet[2640]: W0706 23:38:35.333778 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.333807 kubelet[2640]: E0706 23:38:35.333787 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.368787 containerd[1513]: time="2025-07-06T23:38:35.368678436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rdztx,Uid:f6a335d7-e5aa-4a30-865b-ed6fbbc58850,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\"" Jul 6 23:38:35.434375 kubelet[2640]: E0706 23:38:35.434338 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.434375 kubelet[2640]: W0706 23:38:35.434363 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.434375 kubelet[2640]: E0706 23:38:35.434381 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.434583 kubelet[2640]: E0706 23:38:35.434564 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.434583 kubelet[2640]: W0706 23:38:35.434575 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.434583 kubelet[2640]: E0706 23:38:35.434590 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.434815 kubelet[2640]: E0706 23:38:35.434802 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.434815 kubelet[2640]: W0706 23:38:35.434814 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.434872 kubelet[2640]: E0706 23:38:35.434823 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.435109 kubelet[2640]: E0706 23:38:35.435079 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.435109 kubelet[2640]: W0706 23:38:35.435108 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.435208 kubelet[2640]: E0706 23:38:35.435123 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.435307 kubelet[2640]: E0706 23:38:35.435294 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.435307 kubelet[2640]: W0706 23:38:35.435306 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.435366 kubelet[2640]: E0706 23:38:35.435315 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.435496 kubelet[2640]: E0706 23:38:35.435484 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.435538 kubelet[2640]: W0706 23:38:35.435496 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.435538 kubelet[2640]: E0706 23:38:35.435506 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.435735 kubelet[2640]: E0706 23:38:35.435723 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.435770 kubelet[2640]: W0706 23:38:35.435744 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.435770 kubelet[2640]: E0706 23:38:35.435754 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.435951 kubelet[2640]: E0706 23:38:35.435923 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.435951 kubelet[2640]: W0706 23:38:35.435945 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436018 kubelet[2640]: E0706 23:38:35.435953 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.436116 kubelet[2640]: E0706 23:38:35.436102 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436116 kubelet[2640]: W0706 23:38:35.436113 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436165 kubelet[2640]: E0706 23:38:35.436123 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.436262 kubelet[2640]: E0706 23:38:35.436246 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436262 kubelet[2640]: W0706 23:38:35.436257 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436376 kubelet[2640]: E0706 23:38:35.436264 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.436432 kubelet[2640]: E0706 23:38:35.436415 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436432 kubelet[2640]: W0706 23:38:35.436427 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436509 kubelet[2640]: E0706 23:38:35.436435 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.436605 kubelet[2640]: E0706 23:38:35.436580 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436605 kubelet[2640]: W0706 23:38:35.436601 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436703 kubelet[2640]: E0706 23:38:35.436609 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.436792 kubelet[2640]: E0706 23:38:35.436780 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436792 kubelet[2640]: W0706 23:38:35.436792 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.436869 kubelet[2640]: E0706 23:38:35.436801 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.436978 kubelet[2640]: E0706 23:38:35.436965 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.436978 kubelet[2640]: W0706 23:38:35.436976 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437036 kubelet[2640]: E0706 23:38:35.436984 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.437171 kubelet[2640]: E0706 23:38:35.437159 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437171 kubelet[2640]: W0706 23:38:35.437170 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437220 kubelet[2640]: E0706 23:38:35.437179 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.437322 kubelet[2640]: E0706 23:38:35.437310 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437322 kubelet[2640]: W0706 23:38:35.437320 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437376 kubelet[2640]: E0706 23:38:35.437328 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.437467 kubelet[2640]: E0706 23:38:35.437454 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437467 kubelet[2640]: W0706 23:38:35.437463 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437511 kubelet[2640]: E0706 23:38:35.437471 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.437674 kubelet[2640]: E0706 23:38:35.437659 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437674 kubelet[2640]: W0706 23:38:35.437671 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437762 kubelet[2640]: E0706 23:38:35.437680 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.437824 kubelet[2640]: E0706 23:38:35.437811 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437824 kubelet[2640]: W0706 23:38:35.437822 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.437901 kubelet[2640]: E0706 23:38:35.437829 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.437996 kubelet[2640]: E0706 23:38:35.437983 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.437996 kubelet[2640]: W0706 23:38:35.437994 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.438054 kubelet[2640]: E0706 23:38:35.438002 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.438150 kubelet[2640]: E0706 23:38:35.438138 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.438150 kubelet[2640]: W0706 23:38:35.438149 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.438219 kubelet[2640]: E0706 23:38:35.438157 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.438286 kubelet[2640]: E0706 23:38:35.438274 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.438286 kubelet[2640]: W0706 23:38:35.438283 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.438352 kubelet[2640]: E0706 23:38:35.438291 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.438454 kubelet[2640]: E0706 23:38:35.438443 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.438454 kubelet[2640]: W0706 23:38:35.438454 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.438497 kubelet[2640]: E0706 23:38:35.438461 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.438711 kubelet[2640]: E0706 23:38:35.438697 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.438739 kubelet[2640]: W0706 23:38:35.438711 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.438739 kubelet[2640]: E0706 23:38:35.438722 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:35.439294 kubelet[2640]: E0706 23:38:35.439276 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.439294 kubelet[2640]: W0706 23:38:35.439290 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.439367 kubelet[2640]: E0706 23:38:35.439302 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:35.448675 kubelet[2640]: E0706 23:38:35.448602 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:35.448675 kubelet[2640]: W0706 23:38:35.448623 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:35.448675 kubelet[2640]: E0706 23:38:35.448641 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:36.186671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591799130.mount: Deactivated successfully. 
Jul 6 23:38:36.769482 containerd[1513]: time="2025-07-06T23:38:36.769428221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:36.769974 containerd[1513]: time="2025-07-06T23:38:36.769949871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:38:36.770840 containerd[1513]: time="2025-07-06T23:38:36.770798233Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:36.773160 containerd[1513]: time="2025-07-06T23:38:36.773122618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:36.773750 containerd[1513]: time="2025-07-06T23:38:36.773609585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.530173891s" Jul 6 23:38:36.773750 containerd[1513]: time="2025-07-06T23:38:36.773636548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:38:36.774593 containerd[1513]: time="2025-07-06T23:38:36.774449387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:38:36.784810 containerd[1513]: time="2025-07-06T23:38:36.784755424Z" level=info msg="CreateContainer within sandbox \"b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:38:36.791023 containerd[1513]: time="2025-07-06T23:38:36.790978187Z" level=info msg="Container c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:36.797665 containerd[1513]: time="2025-07-06T23:38:36.797560264Z" level=info msg="CreateContainer within sandbox \"b88592784e32fdf620ff91746f5a55f5d5350b54120825454dbd947e8f33565f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd\"" Jul 6 23:38:36.798413 containerd[1513]: time="2025-07-06T23:38:36.798130959Z" level=info msg="StartContainer for \"c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd\"" Jul 6 23:38:36.799336 containerd[1513]: time="2025-07-06T23:38:36.799309353Z" level=info msg="connecting to shim c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd" address="unix:///run/containerd/s/930fc859f29ca0bbc1dc9420154f4ffd4913adc9ef58198436c1f8eb26766112" protocol=ttrpc version=3 Jul 6 23:38:36.823139 systemd[1]: Started cri-containerd-c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd.scope - libcontainer container c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd. 
Jul 6 23:38:36.863477 containerd[1513]: time="2025-07-06T23:38:36.863381995Z" level=info msg="StartContainer for \"c148f6487939a50466298a40a2ec450aae5f77dd366b69a0c94c638b925899dd\" returns successfully" Jul 6 23:38:37.480473 kubelet[2640]: E0706 23:38:37.480405 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5bgr" podUID="6d5d40c2-7d76-440e-95ce-43f964a9f978" Jul 6 23:38:37.556249 kubelet[2640]: E0706 23:38:37.555519 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:37.614675 kubelet[2640]: E0706 23:38:37.614607 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.614675 kubelet[2640]: W0706 23:38:37.614631 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.614675 kubelet[2640]: E0706 23:38:37.614652 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.614911 kubelet[2640]: E0706 23:38:37.614904 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.614965 kubelet[2640]: W0706 23:38:37.614913 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.614965 kubelet[2640]: E0706 23:38:37.614922 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.615265 kubelet[2640]: E0706 23:38:37.615249 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.615265 kubelet[2640]: W0706 23:38:37.615263 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.615359 kubelet[2640]: E0706 23:38:37.615273 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.615503 kubelet[2640]: E0706 23:38:37.615470 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.615503 kubelet[2640]: W0706 23:38:37.615502 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.615556 kubelet[2640]: E0706 23:38:37.615511 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.615807 kubelet[2640]: E0706 23:38:37.615690 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.615807 kubelet[2640]: W0706 23:38:37.615801 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.615881 kubelet[2640]: E0706 23:38:37.615815 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.616034 kubelet[2640]: E0706 23:38:37.616020 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.616034 kubelet[2640]: W0706 23:38:37.616032 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.616082 kubelet[2640]: E0706 23:38:37.616041 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.616295 kubelet[2640]: E0706 23:38:37.616279 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.616295 kubelet[2640]: W0706 23:38:37.616292 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.616379 kubelet[2640]: E0706 23:38:37.616301 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.616464 kubelet[2640]: E0706 23:38:37.616449 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.616506 kubelet[2640]: W0706 23:38:37.616481 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.616506 kubelet[2640]: E0706 23:38:37.616491 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.617784 kubelet[2640]: E0706 23:38:37.617715 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.617784 kubelet[2640]: W0706 23:38:37.617733 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.617784 kubelet[2640]: E0706 23:38:37.617746 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.618233 kubelet[2640]: E0706 23:38:37.618217 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.618316 kubelet[2640]: W0706 23:38:37.618304 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.618316 kubelet[2640]: E0706 23:38:37.618336 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.618669 kubelet[2640]: E0706 23:38:37.618622 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.618838 kubelet[2640]: W0706 23:38:37.618732 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.618838 kubelet[2640]: E0706 23:38:37.618754 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.619281 kubelet[2640]: E0706 23:38:37.619229 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.619472 kubelet[2640]: W0706 23:38:37.619332 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.619472 kubelet[2640]: E0706 23:38:37.619347 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.619843 kubelet[2640]: E0706 23:38:37.619681 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.620061 kubelet[2640]: W0706 23:38:37.619783 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.620061 kubelet[2640]: E0706 23:38:37.619966 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.621634 kubelet[2640]: E0706 23:38:37.620511 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.621634 kubelet[2640]: W0706 23:38:37.620560 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.621634 kubelet[2640]: E0706 23:38:37.620573 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.622063 kubelet[2640]: E0706 23:38:37.621942 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.622063 kubelet[2640]: W0706 23:38:37.621992 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.622063 kubelet[2640]: E0706 23:38:37.622005 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.654214 kubelet[2640]: E0706 23:38:37.654161 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.654214 kubelet[2640]: W0706 23:38:37.654183 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.654483 kubelet[2640]: E0706 23:38:37.654352 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.655083 kubelet[2640]: E0706 23:38:37.654969 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.655083 kubelet[2640]: W0706 23:38:37.655046 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.655304 kubelet[2640]: E0706 23:38:37.655068 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.655905 kubelet[2640]: E0706 23:38:37.655887 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.655905 kubelet[2640]: W0706 23:38:37.655904 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.655986 kubelet[2640]: E0706 23:38:37.655917 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.656201 kubelet[2640]: E0706 23:38:37.656184 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.656201 kubelet[2640]: W0706 23:38:37.656198 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.656375 kubelet[2640]: E0706 23:38:37.656349 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.656557 kubelet[2640]: E0706 23:38:37.656543 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.656557 kubelet[2640]: W0706 23:38:37.656555 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.656557 kubelet[2640]: E0706 23:38:37.656565 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.656706 kubelet[2640]: E0706 23:38:37.656698 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.656852 kubelet[2640]: W0706 23:38:37.656722 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.656852 kubelet[2640]: E0706 23:38:37.656731 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.657084 kubelet[2640]: E0706 23:38:37.656947 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.657084 kubelet[2640]: W0706 23:38:37.656957 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.657084 kubelet[2640]: E0706 23:38:37.656965 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.657260 kubelet[2640]: E0706 23:38:37.657247 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.657260 kubelet[2640]: W0706 23:38:37.657259 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.657332 kubelet[2640]: E0706 23:38:37.657268 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.657577 kubelet[2640]: E0706 23:38:37.657563 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.657577 kubelet[2640]: W0706 23:38:37.657575 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.657634 kubelet[2640]: E0706 23:38:37.657583 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.657848 kubelet[2640]: E0706 23:38:37.657811 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.657848 kubelet[2640]: W0706 23:38:37.657825 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658081 kubelet[2640]: E0706 23:38:37.657852 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.658081 kubelet[2640]: E0706 23:38:37.658055 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.658081 kubelet[2640]: W0706 23:38:37.658063 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658081 kubelet[2640]: E0706 23:38:37.658073 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.658208 kubelet[2640]: E0706 23:38:37.658199 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.658208 kubelet[2640]: W0706 23:38:37.658206 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658271 kubelet[2640]: E0706 23:38:37.658213 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.658364 kubelet[2640]: E0706 23:38:37.658353 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.658364 kubelet[2640]: W0706 23:38:37.658362 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658364 kubelet[2640]: E0706 23:38:37.658369 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.658523 kubelet[2640]: E0706 23:38:37.658512 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.658523 kubelet[2640]: W0706 23:38:37.658522 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658633 kubelet[2640]: E0706 23:38:37.658529 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.658753 kubelet[2640]: E0706 23:38:37.658740 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.658753 kubelet[2640]: W0706 23:38:37.658750 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.658809 kubelet[2640]: E0706 23:38:37.658757 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.659273 kubelet[2640]: E0706 23:38:37.659240 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.659327 kubelet[2640]: W0706 23:38:37.659273 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.659327 kubelet[2640]: E0706 23:38:37.659287 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.660702 kubelet[2640]: E0706 23:38:37.660648 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.660702 kubelet[2640]: W0706 23:38:37.660671 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.660702 kubelet[2640]: E0706 23:38:37.660683 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:38:37.662510 kubelet[2640]: E0706 23:38:37.662488 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:38:37.662510 kubelet[2640]: W0706 23:38:37.662512 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:38:37.662760 kubelet[2640]: E0706 23:38:37.662525 2640 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:38:37.719946 containerd[1513]: time="2025-07-06T23:38:37.719894716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:37.720478 containerd[1513]: time="2025-07-06T23:38:37.720449207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:38:37.721291 containerd[1513]: time="2025-07-06T23:38:37.721267963Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:37.722995 containerd[1513]: time="2025-07-06T23:38:37.722965880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:37.723567 containerd[1513]: time="2025-07-06T23:38:37.723536813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 949.055343ms" Jul 6 23:38:37.723616 containerd[1513]: time="2025-07-06T23:38:37.723571056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:38:37.726880 containerd[1513]: time="2025-07-06T23:38:37.726846119Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:38:37.753813 containerd[1513]: time="2025-07-06T23:38:37.751147366Z" level=info msg="Container fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:37.760042 containerd[1513]: time="2025-07-06T23:38:37.760004145Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\"" Jul 6 23:38:37.760630 containerd[1513]: time="2025-07-06T23:38:37.760609041Z" level=info msg="StartContainer for \"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\"" Jul 6 23:38:37.761904 containerd[1513]: time="2025-07-06T23:38:37.761880919Z" level=info msg="connecting to shim fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1" address="unix:///run/containerd/s/11c9704d0c55012f01a2a1303227bcad91aac5d069387c2636d908f38edcfe62" protocol=ttrpc version=3 Jul 6 23:38:37.784219 systemd[1]: Started cri-containerd-fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1.scope - libcontainer container fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1. Jul 6 23:38:37.822868 containerd[1513]: time="2025-07-06T23:38:37.822798592Z" level=info msg="StartContainer for \"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\" returns successfully" Jul 6 23:38:37.857417 systemd[1]: cri-containerd-fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1.scope: Deactivated successfully. Jul 6 23:38:37.857768 systemd[1]: cri-containerd-fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1.scope: Consumed 52ms CPU time, 6.1M memory peak, 4.5M written to disk. 
Jul 6 23:38:37.892064 containerd[1513]: time="2025-07-06T23:38:37.892019073Z" level=info msg="received exit event container_id:\"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\" id:\"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\" pid:3348 exited_at:{seconds:1751845117 nanos:864387198}" Jul 6 23:38:37.892375 containerd[1513]: time="2025-07-06T23:38:37.892149525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\" id:\"fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1\" pid:3348 exited_at:{seconds:1751845117 nanos:864387198}" Jul 6 23:38:37.934607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb473c3c3a7935e98cbbe6e123b4e07b9b28764b29291c0c52e9f433046713d1-rootfs.mount: Deactivated successfully. Jul 6 23:38:38.560614 kubelet[2640]: I0706 23:38:38.560582 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:38:38.561623 kubelet[2640]: E0706 23:38:38.560833 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:38.561671 containerd[1513]: time="2025-07-06T23:38:38.561135360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:38:38.595012 kubelet[2640]: I0706 23:38:38.594947 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8654c64c6b-879pc" podStartSLOduration=3.063586342 podStartE2EDuration="4.594909906s" podCreationTimestamp="2025-07-06 23:38:34 +0000 UTC" firstStartedPulling="2025-07-06 23:38:35.242979409 +0000 UTC m=+22.869528093" lastFinishedPulling="2025-07-06 23:38:36.774302973 +0000 UTC m=+24.400851657" observedRunningTime="2025-07-06 23:38:37.574239367 +0000 UTC m=+25.200788091" watchObservedRunningTime="2025-07-06 23:38:38.594909906 +0000 UTC m=+26.221458590" Jul 
6 23:38:39.481373 kubelet[2640]: E0706 23:38:39.481323 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5bgr" podUID="6d5d40c2-7d76-440e-95ce-43f964a9f978" Jul 6 23:38:41.271858 containerd[1513]: time="2025-07-06T23:38:41.271807789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:41.272385 containerd[1513]: time="2025-07-06T23:38:41.272355512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:38:41.273258 containerd[1513]: time="2025-07-06T23:38:41.273203418Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:41.275170 containerd[1513]: time="2025-07-06T23:38:41.275135808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:41.275720 containerd[1513]: time="2025-07-06T23:38:41.275693851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.714523649s" Jul 6 23:38:41.275885 containerd[1513]: time="2025-07-06T23:38:41.275795739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 
6 23:38:41.281172 containerd[1513]: time="2025-07-06T23:38:41.281127594Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:38:41.290955 containerd[1513]: time="2025-07-06T23:38:41.288952882Z" level=info msg="Container d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:41.298564 containerd[1513]: time="2025-07-06T23:38:41.298421178Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\"" Jul 6 23:38:41.299077 containerd[1513]: time="2025-07-06T23:38:41.299045466Z" level=info msg="StartContainer for \"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\"" Jul 6 23:38:41.300618 containerd[1513]: time="2025-07-06T23:38:41.300589826Z" level=info msg="connecting to shim d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee" address="unix:///run/containerd/s/11c9704d0c55012f01a2a1303227bcad91aac5d069387c2636d908f38edcfe62" protocol=ttrpc version=3 Jul 6 23:38:41.327187 systemd[1]: Started cri-containerd-d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee.scope - libcontainer container d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee. 
Jul 6 23:38:41.373188 containerd[1513]: time="2025-07-06T23:38:41.373137985Z" level=info msg="StartContainer for \"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\" returns successfully" Jul 6 23:38:41.481030 kubelet[2640]: E0706 23:38:41.480969 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5bgr" podUID="6d5d40c2-7d76-440e-95ce-43f964a9f978" Jul 6 23:38:42.137903 systemd[1]: cri-containerd-d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee.scope: Deactivated successfully. Jul 6 23:38:42.139587 systemd[1]: cri-containerd-d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee.scope: Consumed 591ms CPU time, 174.3M memory peak, 1.2M read from disk, 165.8M written to disk. Jul 6 23:38:42.142555 containerd[1513]: time="2025-07-06T23:38:42.141843409Z" level=info msg="received exit event container_id:\"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\" id:\"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\" pid:3405 exited_at:{seconds:1751845122 nanos:141594631}" Jul 6 23:38:42.142728 containerd[1513]: time="2025-07-06T23:38:42.142427573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\" id:\"d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee\" pid:3405 exited_at:{seconds:1751845122 nanos:141594631}" Jul 6 23:38:42.161461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d72dfa9761dc456def8580637749e5332411201ad4414b8bd56ee0278793fbee-rootfs.mount: Deactivated successfully. 
Jul 6 23:38:42.202460 kubelet[2640]: I0706 23:38:42.202426 2640 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:38:42.282201 systemd[1]: Created slice kubepods-besteffort-podd09adaa7_c29e_4b27_a37d_e7fa809f26ab.slice - libcontainer container kubepods-besteffort-podd09adaa7_c29e_4b27_a37d_e7fa809f26ab.slice. Jul 6 23:38:42.296486 systemd[1]: Created slice kubepods-burstable-pod1b89afa8_1bf1_4e93_8372_434b3f1f9f6c.slice - libcontainer container kubepods-burstable-pod1b89afa8_1bf1_4e93_8372_434b3f1f9f6c.slice. Jul 6 23:38:42.304767 systemd[1]: Created slice kubepods-burstable-pod593f9cf9_5714_4196_a312_14e156daf0d7.slice - libcontainer container kubepods-burstable-pod593f9cf9_5714_4196_a312_14e156daf0d7.slice. Jul 6 23:38:42.311784 systemd[1]: Created slice kubepods-besteffort-pod69e2a38d_b277_4659_be5c_e834c107567f.slice - libcontainer container kubepods-besteffort-pod69e2a38d_b277_4659_be5c_e834c107567f.slice. Jul 6 23:38:42.320127 systemd[1]: Created slice kubepods-besteffort-podf6b6c6dc_2aeb_4c09_a0e3_c0d895605167.slice - libcontainer container kubepods-besteffort-podf6b6c6dc_2aeb_4c09_a0e3_c0d895605167.slice. Jul 6 23:38:42.328830 systemd[1]: Created slice kubepods-besteffort-podaeac0e8a_f55d_4143_a3d5_0b9990b44e2d.slice - libcontainer container kubepods-besteffort-podaeac0e8a_f55d_4143_a3d5_0b9990b44e2d.slice. Jul 6 23:38:42.340709 systemd[1]: Created slice kubepods-besteffort-pod4e0ebb2d_9c3f_4c45_afc1_1707b2fb0fdc.slice - libcontainer container kubepods-besteffort-pod4e0ebb2d_9c3f_4c45_afc1_1707b2fb0fdc.slice. 
Jul 6 23:38:42.389893 kubelet[2640]: I0706 23:38:42.389766 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6b6c6dc-2aeb-4c09-a0e3-c0d895605167-calico-apiserver-certs\") pod \"calico-apiserver-844cfc594d-wc774\" (UID: \"f6b6c6dc-2aeb-4c09-a0e3-c0d895605167\") " pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" Jul 6 23:38:42.389893 kubelet[2640]: I0706 23:38:42.389818 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75d2\" (UniqueName: \"kubernetes.io/projected/f6b6c6dc-2aeb-4c09-a0e3-c0d895605167-kube-api-access-p75d2\") pod \"calico-apiserver-844cfc594d-wc774\" (UID: \"f6b6c6dc-2aeb-4c09-a0e3-c0d895605167\") " pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" Jul 6 23:38:42.389893 kubelet[2640]: I0706 23:38:42.389846 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-ca-bundle\") pod \"whisker-95b89bbbc-4fjpn\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " pod="calico-system/whisker-95b89bbbc-4fjpn" Jul 6 23:38:42.389893 kubelet[2640]: I0706 23:38:42.389862 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5w7x\" (UniqueName: \"kubernetes.io/projected/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-kube-api-access-p5w7x\") pod \"whisker-95b89bbbc-4fjpn\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " pod="calico-system/whisker-95b89bbbc-4fjpn" Jul 6 23:38:42.390359 kubelet[2640]: I0706 23:38:42.389879 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nx6kk\" (UID: 
\"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc\") " pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:42.390416 kubelet[2640]: I0706 23:38:42.390376 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b89afa8-1bf1-4e93-8372-434b3f1f9f6c-config-volume\") pod \"coredns-674b8bbfcf-mrqjt\" (UID: \"1b89afa8-1bf1-4e93-8372-434b3f1f9f6c\") " pod="kube-system/coredns-674b8bbfcf-mrqjt" Jul 6 23:38:42.390416 kubelet[2640]: I0706 23:38:42.390405 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc-config\") pod \"goldmane-768f4c5c69-nx6kk\" (UID: \"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc\") " pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:42.390460 kubelet[2640]: I0706 23:38:42.390420 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nx6kk\" (UID: \"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc\") " pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:42.390460 kubelet[2640]: I0706 23:38:42.390442 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-backend-key-pair\") pod \"whisker-95b89bbbc-4fjpn\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " pod="calico-system/whisker-95b89bbbc-4fjpn" Jul 6 23:38:42.390460 kubelet[2640]: I0706 23:38:42.390459 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/593f9cf9-5714-4196-a312-14e156daf0d7-config-volume\") pod \"coredns-674b8bbfcf-ddxgk\" 
(UID: \"593f9cf9-5714-4196-a312-14e156daf0d7\") " pod="kube-system/coredns-674b8bbfcf-ddxgk" Jul 6 23:38:42.390527 kubelet[2640]: I0706 23:38:42.390476 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqpg\" (UniqueName: \"kubernetes.io/projected/593f9cf9-5714-4196-a312-14e156daf0d7-kube-api-access-nsqpg\") pod \"coredns-674b8bbfcf-ddxgk\" (UID: \"593f9cf9-5714-4196-a312-14e156daf0d7\") " pod="kube-system/coredns-674b8bbfcf-ddxgk" Jul 6 23:38:42.390527 kubelet[2640]: I0706 23:38:42.390493 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69e2a38d-b277-4659-be5c-e834c107567f-calico-apiserver-certs\") pod \"calico-apiserver-844cfc594d-8c9tx\" (UID: \"69e2a38d-b277-4659-be5c-e834c107567f\") " pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" Jul 6 23:38:42.390527 kubelet[2640]: I0706 23:38:42.390510 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47s8t\" (UniqueName: \"kubernetes.io/projected/4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc-kube-api-access-47s8t\") pod \"goldmane-768f4c5c69-nx6kk\" (UID: \"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc\") " pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:42.390601 kubelet[2640]: I0706 23:38:42.390526 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeac0e8a-f55d-4143-a3d5-0b9990b44e2d-tigera-ca-bundle\") pod \"calico-kube-controllers-7df7dc4d54-w5qkj\" (UID: \"aeac0e8a-f55d-4143-a3d5-0b9990b44e2d\") " pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" Jul 6 23:38:42.390601 kubelet[2640]: I0706 23:38:42.390556 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbzr\" (UniqueName: 
\"kubernetes.io/projected/69e2a38d-b277-4659-be5c-e834c107567f-kube-api-access-psbzr\") pod \"calico-apiserver-844cfc594d-8c9tx\" (UID: \"69e2a38d-b277-4659-be5c-e834c107567f\") " pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" Jul 6 23:38:42.390601 kubelet[2640]: I0706 23:38:42.390573 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l82c\" (UniqueName: \"kubernetes.io/projected/aeac0e8a-f55d-4143-a3d5-0b9990b44e2d-kube-api-access-8l82c\") pod \"calico-kube-controllers-7df7dc4d54-w5qkj\" (UID: \"aeac0e8a-f55d-4143-a3d5-0b9990b44e2d\") " pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" Jul 6 23:38:42.390601 kubelet[2640]: I0706 23:38:42.390592 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7srw\" (UniqueName: \"kubernetes.io/projected/1b89afa8-1bf1-4e93-8372-434b3f1f9f6c-kube-api-access-h7srw\") pod \"coredns-674b8bbfcf-mrqjt\" (UID: \"1b89afa8-1bf1-4e93-8372-434b3f1f9f6c\") " pod="kube-system/coredns-674b8bbfcf-mrqjt" Jul 6 23:38:42.585758 containerd[1513]: time="2025-07-06T23:38:42.585696077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:38:42.587434 containerd[1513]: time="2025-07-06T23:38:42.587350200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95b89bbbc-4fjpn,Uid:d09adaa7-c29e-4b27-a37d-e7fa809f26ab,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:42.603988 kubelet[2640]: E0706 23:38:42.601505 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:42.605207 containerd[1513]: time="2025-07-06T23:38:42.602265793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mrqjt,Uid:1b89afa8-1bf1-4e93-8372-434b3f1f9f6c,Namespace:kube-system,Attempt:0,}" Jul 6 23:38:42.609661 kubelet[2640]: E0706 
23:38:42.609618 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:42.612647 containerd[1513]: time="2025-07-06T23:38:42.612603724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddxgk,Uid:593f9cf9-5714-4196-a312-14e156daf0d7,Namespace:kube-system,Attempt:0,}" Jul 6 23:38:42.621619 containerd[1513]: time="2025-07-06T23:38:42.621557752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-8c9tx,Uid:69e2a38d-b277-4659-be5c-e834c107567f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:38:42.626199 containerd[1513]: time="2025-07-06T23:38:42.626069168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-wc774,Uid:f6b6c6dc-2aeb-4c09-a0e3-c0d895605167,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:38:42.639623 containerd[1513]: time="2025-07-06T23:38:42.636390618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df7dc4d54-w5qkj,Uid:aeac0e8a-f55d-4143-a3d5-0b9990b44e2d,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:42.652174 containerd[1513]: time="2025-07-06T23:38:42.646901482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nx6kk,Uid:4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:43.178947 containerd[1513]: time="2025-07-06T23:38:43.178645384Z" level=error msg="Failed to destroy network for sandbox \"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.181167 containerd[1513]: time="2025-07-06T23:38:43.181109200Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-mrqjt,Uid:1b89afa8-1bf1-4e93-8372-434b3f1f9f6c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.187359 containerd[1513]: time="2025-07-06T23:38:43.187314405Z" level=error msg="Failed to destroy network for sandbox \"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.191666 containerd[1513]: time="2025-07-06T23:38:43.191600992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df7dc4d54-w5qkj,Uid:aeac0e8a-f55d-4143-a3d5-0b9990b44e2d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.192837 kubelet[2640]: E0706 23:38:43.192154 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.192837 kubelet[2640]: E0706 23:38:43.192398 2640 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" Jul 6 23:38:43.192980 kubelet[2640]: E0706 23:38:43.192845 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" Jul 6 23:38:43.193350 kubelet[2640]: E0706 23:38:43.192967 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7df7dc4d54-w5qkj_calico-system(aeac0e8a-f55d-4143-a3d5-0b9990b44e2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7df7dc4d54-w5qkj_calico-system(aeac0e8a-f55d-4143-a3d5-0b9990b44e2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7c16ceaa51f6dc43d2d7eff75f2d3e84f6c32fcbb7542098f5498b37ee6f1fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" podUID="aeac0e8a-f55d-4143-a3d5-0b9990b44e2d" Jul 6 23:38:43.195040 kubelet[2640]: E0706 23:38:43.194654 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.195040 kubelet[2640]: E0706 23:38:43.194981 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mrqjt" Jul 6 23:38:43.195892 kubelet[2640]: E0706 23:38:43.195031 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mrqjt" Jul 6 23:38:43.195987 kubelet[2640]: E0706 23:38:43.195921 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mrqjt_kube-system(1b89afa8-1bf1-4e93-8372-434b3f1f9f6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mrqjt_kube-system(1b89afa8-1bf1-4e93-8372-434b3f1f9f6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ffb711d01de3f9cd96acf645632e11ecf773b5c3fa3b0c641b0575c2e85fa72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mrqjt" 
podUID="1b89afa8-1bf1-4e93-8372-434b3f1f9f6c" Jul 6 23:38:43.209186 containerd[1513]: time="2025-07-06T23:38:43.209111447Z" level=error msg="Failed to destroy network for sandbox \"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.210901 containerd[1513]: time="2025-07-06T23:38:43.210848131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-wc774,Uid:f6b6c6dc-2aeb-4c09-a0e3-c0d895605167,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.211286 kubelet[2640]: E0706 23:38:43.211240 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.211368 kubelet[2640]: E0706 23:38:43.211333 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" Jul 6 23:38:43.211368 
kubelet[2640]: E0706 23:38:43.211356 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" Jul 6 23:38:43.211466 kubelet[2640]: E0706 23:38:43.211418 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-844cfc594d-wc774_calico-apiserver(f6b6c6dc-2aeb-4c09-a0e3-c0d895605167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-844cfc594d-wc774_calico-apiserver(f6b6c6dc-2aeb-4c09-a0e3-c0d895605167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bfdecaece607a05b2b845686841db9cc57c126b342100de894621770f3e1640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" podUID="f6b6c6dc-2aeb-4c09-a0e3-c0d895605167" Jul 6 23:38:43.215456 containerd[1513]: time="2025-07-06T23:38:43.215214524Z" level=error msg="Failed to destroy network for sandbox \"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.218309 containerd[1513]: time="2025-07-06T23:38:43.218202338Z" level=error msg="Failed to destroy network for sandbox \"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.218756 containerd[1513]: time="2025-07-06T23:38:43.218712615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-8c9tx,Uid:69e2a38d-b277-4659-be5c-e834c107567f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.219297 kubelet[2640]: E0706 23:38:43.219260 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.219386 kubelet[2640]: E0706 23:38:43.219320 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" Jul 6 23:38:43.219512 kubelet[2640]: E0706 23:38:43.219341 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" Jul 6 23:38:43.219744 kubelet[2640]: E0706 23:38:43.219595 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-844cfc594d-8c9tx_calico-apiserver(69e2a38d-b277-4659-be5c-e834c107567f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-844cfc594d-8c9tx_calico-apiserver(69e2a38d-b277-4659-be5c-e834c107567f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9aefc846953b45b7142f747129bad0c2766dd43bc02048b913633e039658efd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" podUID="69e2a38d-b277-4659-be5c-e834c107567f" Jul 6 23:38:43.224608 containerd[1513]: time="2025-07-06T23:38:43.222997482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nx6kk,Uid:4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.224967 kubelet[2640]: E0706 23:38:43.224919 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 6 23:38:43.225109 kubelet[2640]: E0706 23:38:43.225090 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:43.225185 kubelet[2640]: E0706 23:38:43.225169 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nx6kk" Jul 6 23:38:43.225316 kubelet[2640]: E0706 23:38:43.225289 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nx6kk_calico-system(4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nx6kk_calico-system(4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"731010a77762b9d1ffecc5c7af4f69a602dfcf558cc50c80c1204bc975e7dc31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nx6kk" podUID="4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc" Jul 6 23:38:43.226254 containerd[1513]: time="2025-07-06T23:38:43.226196751Z" level=error msg="Failed to destroy network for sandbox 
\"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.226656 containerd[1513]: time="2025-07-06T23:38:43.226626422Z" level=error msg="Failed to destroy network for sandbox \"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.229388 containerd[1513]: time="2025-07-06T23:38:43.229341697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddxgk,Uid:593f9cf9-5714-4196-a312-14e156daf0d7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.229593 kubelet[2640]: E0706 23:38:43.229549 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.229656 kubelet[2640]: E0706 23:38:43.229608 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddxgk" Jul 6 23:38:43.229656 kubelet[2640]: E0706 23:38:43.229630 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddxgk" Jul 6 23:38:43.229714 kubelet[2640]: E0706 23:38:43.229684 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ddxgk_kube-system(593f9cf9-5714-4196-a312-14e156daf0d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ddxgk_kube-system(593f9cf9-5714-4196-a312-14e156daf0d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a1bedfa35df8c877f94fdefbe5a8568568a34f87005823405f30affd6be0990\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ddxgk" podUID="593f9cf9-5714-4196-a312-14e156daf0d7" Jul 6 23:38:43.230300 containerd[1513]: time="2025-07-06T23:38:43.230249202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95b89bbbc-4fjpn,Uid:d09adaa7-c29e-4b27-a37d-e7fa809f26ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 6 23:38:43.230468 kubelet[2640]: E0706 23:38:43.230417 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.230625 kubelet[2640]: E0706 23:38:43.230470 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95b89bbbc-4fjpn" Jul 6 23:38:43.230625 kubelet[2640]: E0706 23:38:43.230489 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95b89bbbc-4fjpn" Jul 6 23:38:43.230625 kubelet[2640]: E0706 23:38:43.230536 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-95b89bbbc-4fjpn_calico-system(d09adaa7-c29e-4b27-a37d-e7fa809f26ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-95b89bbbc-4fjpn_calico-system(d09adaa7-c29e-4b27-a37d-e7fa809f26ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0c117f6dcd2494d35a8fa172bbeead862eb91bad3d31201d8261a56dd37e028\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-95b89bbbc-4fjpn" podUID="d09adaa7-c29e-4b27-a37d-e7fa809f26ab" Jul 6 23:38:43.488485 systemd[1]: Created slice kubepods-besteffort-pod6d5d40c2_7d76_440e_95ce_43f964a9f978.slice - libcontainer container kubepods-besteffort-pod6d5d40c2_7d76_440e_95ce_43f964a9f978.slice. Jul 6 23:38:43.492742 containerd[1513]: time="2025-07-06T23:38:43.492703488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5bgr,Uid:6d5d40c2-7d76-440e-95ce-43f964a9f978,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:43.549248 containerd[1513]: time="2025-07-06T23:38:43.549200456Z" level=error msg="Failed to destroy network for sandbox \"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.551317 systemd[1]: run-netns-cni\x2d05fee7d2\x2d0e69\x2daab0\x2d7888\x2d803935d636de.mount: Deactivated successfully. 
Jul 6 23:38:43.552766 containerd[1513]: time="2025-07-06T23:38:43.552697627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5bgr,Uid:6d5d40c2-7d76-440e-95ce-43f964a9f978,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.553136 kubelet[2640]: E0706 23:38:43.553083 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:38:43.553209 kubelet[2640]: E0706 23:38:43.553161 2640 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5bgr" Jul 6 23:38:43.553209 kubelet[2640]: E0706 23:38:43.553198 2640 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5bgr" Jul 6 
23:38:43.553300 kubelet[2640]: E0706 23:38:43.553249 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5bgr_calico-system(6d5d40c2-7d76-440e-95ce-43f964a9f978)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5bgr_calico-system(6d5d40c2-7d76-440e-95ce-43f964a9f978)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19bc40f7bc566cff902f14faeb0dd723491f179d220ad94eabaca713bcd9fd9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5bgr" podUID="6d5d40c2-7d76-440e-95ce-43f964a9f978" Jul 6 23:38:46.960420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525726137.mount: Deactivated successfully. Jul 6 23:38:47.277946 containerd[1513]: time="2025-07-06T23:38:47.277709382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:47.292945 containerd[1513]: time="2025-07-06T23:38:47.278157330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:38:47.292945 containerd[1513]: time="2025-07-06T23:38:47.281106392Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:47.293320 containerd[1513]: time="2025-07-06T23:38:47.289692601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" 
in 4.703932999s" Jul 6 23:38:47.293373 containerd[1513]: time="2025-07-06T23:38:47.293323145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:38:47.293645 containerd[1513]: time="2025-07-06T23:38:47.293614963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:47.316909 containerd[1513]: time="2025-07-06T23:38:47.316836274Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:38:47.335918 containerd[1513]: time="2025-07-06T23:38:47.335864887Z" level=info msg="Container 7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:47.355401 containerd[1513]: time="2025-07-06T23:38:47.355345488Z" level=info msg="CreateContainer within sandbox \"b4b4873f823106a0e80492372129161dd9b50c4445a8963d013c83b8042ea7fe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\"" Jul 6 23:38:47.356017 containerd[1513]: time="2025-07-06T23:38:47.355975087Z" level=info msg="StartContainer for \"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\"" Jul 6 23:38:47.357840 containerd[1513]: time="2025-07-06T23:38:47.357791398Z" level=info msg="connecting to shim 7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e" address="unix:///run/containerd/s/11c9704d0c55012f01a2a1303227bcad91aac5d069387c2636d908f38edcfe62" protocol=ttrpc version=3 Jul 6 23:38:47.384158 systemd[1]: Started cri-containerd-7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e.scope - libcontainer container 
7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e. Jul 6 23:38:47.485164 containerd[1513]: time="2025-07-06T23:38:47.485119927Z" level=info msg="StartContainer for \"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\" returns successfully" Jul 6 23:38:47.907510 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:38:47.907717 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 6 23:38:47.916060 containerd[1513]: time="2025-07-06T23:38:47.915955924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\" id:\"0e1de2f1da92d2193b6959eb22041b38454fec2d67d8d63df99fdf0990b1c177\" pid:3759 exit_status:1 exited_at:{seconds:1751845127 nanos:915604982}" Jul 6 23:38:48.039977 kubelet[2640]: I0706 23:38:48.038524 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rdztx" podStartSLOduration=2.1141079720000002 podStartE2EDuration="14.038503478s" podCreationTimestamp="2025-07-06 23:38:34 +0000 UTC" firstStartedPulling="2025-07-06 23:38:35.370378888 +0000 UTC m=+22.996927572" lastFinishedPulling="2025-07-06 23:38:47.294774394 +0000 UTC m=+34.921323078" observedRunningTime="2025-07-06 23:38:47.633172133 +0000 UTC m=+35.259720857" watchObservedRunningTime="2025-07-06 23:38:48.038503478 +0000 UTC m=+35.665052122" Jul 6 23:38:48.152362 kubelet[2640]: I0706 23:38:48.151905 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-backend-key-pair\") pod \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " Jul 6 23:38:48.152856 kubelet[2640]: I0706 23:38:48.152674 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-ca-bundle\") pod \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " Jul 6 23:38:48.152856 kubelet[2640]: I0706 23:38:48.153018 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5w7x\" (UniqueName: \"kubernetes.io/projected/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-kube-api-access-p5w7x\") pod \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\" (UID: \"d09adaa7-c29e-4b27-a37d-e7fa809f26ab\") " Jul 6 23:38:48.168813 kubelet[2640]: I0706 23:38:48.167413 2640 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d09adaa7-c29e-4b27-a37d-e7fa809f26ab" (UID: "d09adaa7-c29e-4b27-a37d-e7fa809f26ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:38:48.171008 kubelet[2640]: I0706 23:38:48.169232 2640 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d09adaa7-c29e-4b27-a37d-e7fa809f26ab" (UID: "d09adaa7-c29e-4b27-a37d-e7fa809f26ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:38:48.170993 systemd[1]: var-lib-kubelet-pods-d09adaa7\x2dc29e\x2d4b27\x2da37d\x2de7fa809f26ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp5w7x.mount: Deactivated successfully. Jul 6 23:38:48.171097 systemd[1]: var-lib-kubelet-pods-d09adaa7\x2dc29e\x2d4b27\x2da37d\x2de7fa809f26ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 6 23:38:48.171846 kubelet[2640]: I0706 23:38:48.171803 2640 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-kube-api-access-p5w7x" (OuterVolumeSpecName: "kube-api-access-p5w7x") pod "d09adaa7-c29e-4b27-a37d-e7fa809f26ab" (UID: "d09adaa7-c29e-4b27-a37d-e7fa809f26ab"). InnerVolumeSpecName "kube-api-access-p5w7x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:38:48.254182 kubelet[2640]: I0706 23:38:48.254131 2640 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:38:48.254182 kubelet[2640]: I0706 23:38:48.254167 2640 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:38:48.254182 kubelet[2640]: I0706 23:38:48.254177 2640 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5w7x\" (UniqueName: \"kubernetes.io/projected/d09adaa7-c29e-4b27-a37d-e7fa809f26ab-kube-api-access-p5w7x\") on node \"localhost\" DevicePath \"\"" Jul 6 23:38:48.490539 systemd[1]: Removed slice kubepods-besteffort-podd09adaa7_c29e_4b27_a37d_e7fa809f26ab.slice - libcontainer container kubepods-besteffort-podd09adaa7_c29e_4b27_a37d_e7fa809f26ab.slice. Jul 6 23:38:48.713057 systemd[1]: Created slice kubepods-besteffort-pod29bb7fd2_8b31_44c0_b2d9_9d92c7f40d86.slice - libcontainer container kubepods-besteffort-pod29bb7fd2_8b31_44c0_b2d9_9d92c7f40d86.slice. 
Jul 6 23:38:48.751741 containerd[1513]: time="2025-07-06T23:38:48.751603397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\" id:\"f927e559372666112ceb4765aa2c1cb0477bcbf98d7e6279a95a7896b8c8ce3b\" pid:3818 exit_status:1 exited_at:{seconds:1751845128 nanos:751284218}" Jul 6 23:38:48.759226 kubelet[2640]: I0706 23:38:48.759166 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529g9\" (UniqueName: \"kubernetes.io/projected/29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86-kube-api-access-529g9\") pod \"whisker-64778d6cfc-7nz6w\" (UID: \"29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86\") " pod="calico-system/whisker-64778d6cfc-7nz6w" Jul 6 23:38:48.759226 kubelet[2640]: I0706 23:38:48.759229 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86-whisker-backend-key-pair\") pod \"whisker-64778d6cfc-7nz6w\" (UID: \"29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86\") " pod="calico-system/whisker-64778d6cfc-7nz6w" Jul 6 23:38:48.759417 kubelet[2640]: I0706 23:38:48.759324 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86-whisker-ca-bundle\") pod \"whisker-64778d6cfc-7nz6w\" (UID: \"29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86\") " pod="calico-system/whisker-64778d6cfc-7nz6w" Jul 6 23:38:49.016830 containerd[1513]: time="2025-07-06T23:38:49.016732464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64778d6cfc-7nz6w,Uid:29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:49.316704 systemd-networkd[1436]: calibbc1444c7bb: Link UP Jul 6 23:38:49.317632 systemd-networkd[1436]: calibbc1444c7bb: Gained carrier Jul 6 
23:38:49.341103 containerd[1513]: 2025-07-06 23:38:49.045 [INFO][3833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:49.341103 containerd[1513]: 2025-07-06 23:38:49.114 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64778d6cfc--7nz6w-eth0 whisker-64778d6cfc- calico-system 29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86 898 0 2025-07-06 23:38:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64778d6cfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64778d6cfc-7nz6w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibbc1444c7bb [] [] }} ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-" Jul 6 23:38:49.341103 containerd[1513]: 2025-07-06 23:38:49.114 [INFO][3833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.341103 containerd[1513]: 2025-07-06 23:38:49.247 [INFO][3847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" HandleID="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Workload="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.247 [INFO][3847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" 
HandleID="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Workload="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64778d6cfc-7nz6w", "timestamp":"2025-07-06 23:38:49.247191201 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.247 [INFO][3847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.247 [INFO][3847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.247 [INFO][3847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.259 [INFO][3847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" host="localhost" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.266 [INFO][3847] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.272 [INFO][3847] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.274 [INFO][3847] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.276 [INFO][3847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:49.341500 containerd[1513]: 2025-07-06 23:38:49.277 [INFO][3847] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" host="localhost" Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.278 [INFO][3847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.284 [INFO][3847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" host="localhost" Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.291 [INFO][3847] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" host="localhost" Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.291 [INFO][3847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" host="localhost" Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.291 [INFO][3847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:38:49.343035 containerd[1513]: 2025-07-06 23:38:49.291 [INFO][3847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" HandleID="k8s-pod-network.cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Workload="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.343190 containerd[1513]: 2025-07-06 23:38:49.297 [INFO][3833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64778d6cfc--7nz6w-eth0", GenerateName:"whisker-64778d6cfc-", Namespace:"calico-system", SelfLink:"", UID:"29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64778d6cfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64778d6cfc-7nz6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibbc1444c7bb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:49.343190 containerd[1513]: 2025-07-06 23:38:49.298 [INFO][3833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.343272 containerd[1513]: 2025-07-06 23:38:49.298 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbc1444c7bb ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.343272 containerd[1513]: 2025-07-06 23:38:49.319 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.343313 containerd[1513]: 2025-07-06 23:38:49.319 [INFO][3833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64778d6cfc--7nz6w-eth0", GenerateName:"whisker-64778d6cfc-", Namespace:"calico-system", SelfLink:"", UID:"29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64778d6cfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa", Pod:"whisker-64778d6cfc-7nz6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibbc1444c7bb", MAC:"16:4a:65:8d:28:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:49.343360 containerd[1513]: 2025-07-06 23:38:49.333 [INFO][3833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" Namespace="calico-system" Pod="whisker-64778d6cfc-7nz6w" WorkloadEndpoint="localhost-k8s-whisker--64778d6cfc--7nz6w-eth0" Jul 6 23:38:49.475863 containerd[1513]: time="2025-07-06T23:38:49.475813431Z" level=info msg="connecting to shim cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa" address="unix:///run/containerd/s/3a0d6428726cb776760a3d043fbb7c88ad64a1b5f02cd06d429ad8f402ef0ba7" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:49.506153 systemd[1]: Started cri-containerd-cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa.scope - libcontainer container cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa. 
Jul 6 23:38:49.521714 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:49.549717 containerd[1513]: time="2025-07-06T23:38:49.549675880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64778d6cfc-7nz6w,Uid:29bb7fd2-8b31-44c0-b2d9-9d92c7f40d86,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa\"" Jul 6 23:38:49.551237 containerd[1513]: time="2025-07-06T23:38:49.551153725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:38:49.693648 containerd[1513]: time="2025-07-06T23:38:49.693512274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\" id:\"603c72843b107914a3ebbd5414e15a994a06c1704cbac9f893874f1072e253f5\" pid:4021 exit_status:1 exited_at:{seconds:1751845129 nanos:693221097}" Jul 6 23:38:50.485073 kubelet[2640]: I0706 23:38:50.485018 2640 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09adaa7-c29e-4b27-a37d-e7fa809f26ab" path="/var/lib/kubelet/pods/d09adaa7-c29e-4b27-a37d-e7fa809f26ab/volumes" Jul 6 23:38:50.794528 containerd[1513]: time="2025-07-06T23:38:50.794475961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:50.795446 containerd[1513]: time="2025-07-06T23:38:50.795412053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:38:50.796801 containerd[1513]: time="2025-07-06T23:38:50.796338584Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:50.799002 containerd[1513]: time="2025-07-06T23:38:50.798960970Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:50.800052 containerd[1513]: time="2025-07-06T23:38:50.800004588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.24879186s" Jul 6 23:38:50.800052 containerd[1513]: time="2025-07-06T23:38:50.800045711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:38:50.805836 containerd[1513]: time="2025-07-06T23:38:50.805788270Z" level=info msg="CreateContainer within sandbox \"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:38:50.813420 containerd[1513]: time="2025-07-06T23:38:50.813358932Z" level=info msg="Container cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:50.820826 containerd[1513]: time="2025-07-06T23:38:50.820780825Z" level=info msg="CreateContainer within sandbox \"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159\"" Jul 6 23:38:50.822343 containerd[1513]: time="2025-07-06T23:38:50.822281868Z" level=info msg="StartContainer for \"cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159\"" Jul 6 23:38:50.823861 containerd[1513]: time="2025-07-06T23:38:50.823748870Z" level=info msg="connecting to shim 
cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159" address="unix:///run/containerd/s/3a0d6428726cb776760a3d043fbb7c88ad64a1b5f02cd06d429ad8f402ef0ba7" protocol=ttrpc version=3 Jul 6 23:38:50.847142 systemd[1]: Started cri-containerd-cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159.scope - libcontainer container cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159. Jul 6 23:38:50.888903 containerd[1513]: time="2025-07-06T23:38:50.884860231Z" level=info msg="StartContainer for \"cb9d0ad6e07e6a33c2e18049718b51ad973688266a782fdd21f5c39a36967159\" returns successfully" Jul 6 23:38:50.888903 containerd[1513]: time="2025-07-06T23:38:50.887850717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:38:50.921104 systemd-networkd[1436]: calibbc1444c7bb: Gained IPv6LL Jul 6 23:38:52.070250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791149097.mount: Deactivated successfully. Jul 6 23:38:52.085829 containerd[1513]: time="2025-07-06T23:38:52.085453450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:52.086368 containerd[1513]: time="2025-07-06T23:38:52.086336296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:38:52.087765 containerd[1513]: time="2025-07-06T23:38:52.087312987Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:52.090195 containerd[1513]: time="2025-07-06T23:38:52.089725953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:52.091578 containerd[1513]: 
time="2025-07-06T23:38:52.091458204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.203569644s" Jul 6 23:38:52.091578 containerd[1513]: time="2025-07-06T23:38:52.091498646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:38:52.097383 containerd[1513]: time="2025-07-06T23:38:52.097330030Z" level=info msg="CreateContainer within sandbox \"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:38:52.106217 containerd[1513]: time="2025-07-06T23:38:52.106170172Z" level=info msg="Container 783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:52.113633 containerd[1513]: time="2025-07-06T23:38:52.113578520Z" level=info msg="CreateContainer within sandbox \"cd70bf0de4311b631a1c8acb7a07973000deddba2d6c75debb6eac7918213caa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04\"" Jul 6 23:38:52.114244 containerd[1513]: time="2025-07-06T23:38:52.114194472Z" level=info msg="StartContainer for \"783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04\"" Jul 6 23:38:52.116731 containerd[1513]: time="2025-07-06T23:38:52.116685122Z" level=info msg="connecting to shim 783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04" address="unix:///run/containerd/s/3a0d6428726cb776760a3d043fbb7c88ad64a1b5f02cd06d429ad8f402ef0ba7" protocol=ttrpc version=3 Jul 6 
23:38:52.140156 systemd[1]: Started cri-containerd-783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04.scope - libcontainer container 783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04. Jul 6 23:38:52.179851 containerd[1513]: time="2025-07-06T23:38:52.179783659Z" level=info msg="StartContainer for \"783ca682f341ce2d531723815a01df8a1323f4beb3befc4c9bdc971f7bd63d04\" returns successfully" Jul 6 23:38:52.657494 kubelet[2640]: I0706 23:38:52.656973 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64778d6cfc-7nz6w" podStartSLOduration=2.115342368 podStartE2EDuration="4.656954036s" podCreationTimestamp="2025-07-06 23:38:48 +0000 UTC" firstStartedPulling="2025-07-06 23:38:49.550837787 +0000 UTC m=+37.177386471" lastFinishedPulling="2025-07-06 23:38:52.092449455 +0000 UTC m=+39.718998139" observedRunningTime="2025-07-06 23:38:52.656885313 +0000 UTC m=+40.283434037" watchObservedRunningTime="2025-07-06 23:38:52.656954036 +0000 UTC m=+40.283502680" Jul 6 23:38:54.481129 containerd[1513]: time="2025-07-06T23:38:54.481078554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5bgr,Uid:6d5d40c2-7d76-440e-95ce-43f964a9f978,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:54.661188 systemd-networkd[1436]: cali816e5864e4c: Link UP Jul 6 23:38:54.662176 systemd-networkd[1436]: cali816e5864e4c: Gained carrier Jul 6 23:38:54.675613 containerd[1513]: 2025-07-06 23:38:54.518 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:54.675613 containerd[1513]: 2025-07-06 23:38:54.541 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s5bgr-eth0 csi-node-driver- calico-system 6d5d40c2-7d76-440e-95ce-43f964a9f978 728 0 2025-07-06 23:38:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s5bgr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali816e5864e4c [] [] }} ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-" Jul 6 23:38:54.675613 containerd[1513]: 2025-07-06 23:38:54.541 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.675613 containerd[1513]: 2025-07-06 23:38:54.594 [INFO][4228] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" HandleID="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Workload="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.595 [INFO][4228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" HandleID="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Workload="localhost-k8s-csi--node--driver--s5bgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000385510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s5bgr", "timestamp":"2025-07-06 23:38:54.594977087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.597 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.597 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.597 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.612 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" host="localhost" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.626 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.634 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.636 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.639 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:54.675884 containerd[1513]: 2025-07-06 23:38:54.639 [INFO][4228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" host="localhost" Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.642 [INFO][4228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939 Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.650 [INFO][4228] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" host="localhost" Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.656 [INFO][4228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" host="localhost" Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.656 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" host="localhost" Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.656 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:54.676193 containerd[1513]: 2025-07-06 23:38:54.656 [INFO][4228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" HandleID="k8s-pod-network.3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Workload="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.676367 containerd[1513]: 2025-07-06 23:38:54.658 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5bgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d5d40c2-7d76-440e-95ce-43f964a9f978", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s5bgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816e5864e4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:54.676423 containerd[1513]: 2025-07-06 23:38:54.659 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.676423 containerd[1513]: 2025-07-06 23:38:54.659 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali816e5864e4c ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.676423 containerd[1513]: 2025-07-06 23:38:54.662 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.676481 containerd[1513]: 2025-07-06 23:38:54.663 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5bgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d5d40c2-7d76-440e-95ce-43f964a9f978", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939", Pod:"csi-node-driver-s5bgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816e5864e4c", MAC:"f6:dc:63:e4:76:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 
23:38:54.676527 containerd[1513]: 2025-07-06 23:38:54.672 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" Namespace="calico-system" Pod="csi-node-driver-s5bgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5bgr-eth0" Jul 6 23:38:54.707132 containerd[1513]: time="2025-07-06T23:38:54.707065290Z" level=info msg="connecting to shim 3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939" address="unix:///run/containerd/s/7a1ba76855887c67abf838906b2f9db69e1509377cfe68016ae619442943ab04" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:54.745166 systemd[1]: Started cri-containerd-3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939.scope - libcontainer container 3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939. Jul 6 23:38:54.758155 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:54.777381 containerd[1513]: time="2025-07-06T23:38:54.777317752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5bgr,Uid:6d5d40c2-7d76-440e-95ce-43f964a9f978,Namespace:calico-system,Attempt:0,} returns sandbox id \"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939\"" Jul 6 23:38:54.779326 containerd[1513]: time="2025-07-06T23:38:54.779032796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:38:55.482093 containerd[1513]: time="2025-07-06T23:38:55.482004906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-8c9tx,Uid:69e2a38d-b277-4659-be5c-e834c107567f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:38:55.482093 containerd[1513]: time="2025-07-06T23:38:55.482006106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-wc774,Uid:f6b6c6dc-2aeb-4c09-a0e3-c0d895605167,Namespace:calico-apiserver,Attempt:0,}" Jul 6 
23:38:55.482612 containerd[1513]: time="2025-07-06T23:38:55.482008946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df7dc4d54-w5qkj,Uid:aeac0e8a-f55d-4143-a3d5-0b9990b44e2d,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:55.654555 systemd-networkd[1436]: cali3bbcd9d034b: Link UP Jul 6 23:38:55.654759 systemd-networkd[1436]: cali3bbcd9d034b: Gained carrier Jul 6 23:38:55.669752 containerd[1513]: 2025-07-06 23:38:55.515 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:55.669752 containerd[1513]: 2025-07-06 23:38:55.535 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0 calico-kube-controllers-7df7dc4d54- calico-system aeac0e8a-f55d-4143-a3d5-0b9990b44e2d 834 0 2025-07-06 23:38:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7df7dc4d54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7df7dc4d54-w5qkj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3bbcd9d034b [] [] }} ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-" Jul 6 23:38:55.669752 containerd[1513]: 2025-07-06 23:38:55.535 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.669752 containerd[1513]: 
2025-07-06 23:38:55.602 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" HandleID="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Workload="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.603 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" HandleID="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Workload="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7df7dc4d54-w5qkj", "timestamp":"2025-07-06 23:38:55.602850577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.603 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.603 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.603 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.619 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" host="localhost" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.625 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.631 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.633 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.635 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.669994 containerd[1513]: 2025-07-06 23:38:55.635 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" host="localhost" Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.638 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.643 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" host="localhost" Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.648 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" host="localhost" Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.648 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" host="localhost" Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.648 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:55.670207 containerd[1513]: 2025-07-06 23:38:55.648 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" HandleID="k8s-pod-network.c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Workload="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.670316 containerd[1513]: 2025-07-06 23:38:55.651 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0", GenerateName:"calico-kube-controllers-7df7dc4d54-", Namespace:"calico-system", SelfLink:"", UID:"aeac0e8a-f55d-4143-a3d5-0b9990b44e2d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df7dc4d54", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7df7dc4d54-w5qkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3bbcd9d034b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.670364 containerd[1513]: 2025-07-06 23:38:55.652 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.670364 containerd[1513]: 2025-07-06 23:38:55.652 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bbcd9d034b ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.670364 containerd[1513]: 2025-07-06 23:38:55.655 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.670423 containerd[1513]: 2025-07-06 
23:38:55.655 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0", GenerateName:"calico-kube-controllers-7df7dc4d54-", Namespace:"calico-system", SelfLink:"", UID:"aeac0e8a-f55d-4143-a3d5-0b9990b44e2d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df7dc4d54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c", Pod:"calico-kube-controllers-7df7dc4d54-w5qkj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3bbcd9d034b", MAC:"32:4d:b8:25:c8:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.670469 containerd[1513]: 2025-07-06 
23:38:55.667 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" Namespace="calico-system" Pod="calico-kube-controllers-7df7dc4d54-w5qkj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df7dc4d54--w5qkj-eth0" Jul 6 23:38:55.694955 containerd[1513]: time="2025-07-06T23:38:55.694787103Z" level=info msg="connecting to shim c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c" address="unix:///run/containerd/s/653a9e4f652f793f3ec226d388e1ac4563e6b2deb3465ac0f1af1e706916a702" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:55.730322 systemd[1]: Started cri-containerd-c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c.scope - libcontainer container c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c. Jul 6 23:38:55.751351 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:55.778876 systemd-networkd[1436]: calia06c6511232: Link UP Jul 6 23:38:55.779743 systemd-networkd[1436]: calia06c6511232: Gained carrier Jul 6 23:38:55.806636 containerd[1513]: time="2025-07-06T23:38:55.806591261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df7dc4d54-w5qkj,Uid:aeac0e8a-f55d-4143-a3d5-0b9990b44e2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c\"" Jul 6 23:38:55.834769 containerd[1513]: 2025-07-06 23:38:55.514 [INFO][4315] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:55.834769 containerd[1513]: 2025-07-06 23:38:55.536 [INFO][4315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0 calico-apiserver-844cfc594d- calico-apiserver f6b6c6dc-2aeb-4c09-a0e3-c0d895605167 832 0 2025-07-06 23:38:30 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:844cfc594d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-844cfc594d-wc774 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia06c6511232 [] [] }} ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-" Jul 6 23:38:55.834769 containerd[1513]: 2025-07-06 23:38:55.536 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.834769 containerd[1513]: 2025-07-06 23:38:55.613 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" HandleID="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Workload="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.613 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" HandleID="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Workload="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-844cfc594d-wc774", "timestamp":"2025-07-06 23:38:55.613643694 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.613 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.648 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.649 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.722 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" host="localhost" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.731 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.743 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.746 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.751 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.835908 containerd[1513]: 2025-07-06 23:38:55.751 [INFO][4364] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" host="localhost" Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.754 [INFO][4364] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb 
Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.759 [INFO][4364] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" host="localhost" Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4364] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" host="localhost" Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" host="localhost" Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:55.836778 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" HandleID="k8s-pod-network.13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Workload="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.836891 containerd[1513]: 2025-07-06 23:38:55.776 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0", GenerateName:"calico-apiserver-844cfc594d-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"f6b6c6dc-2aeb-4c09-a0e3-c0d895605167", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"844cfc594d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-844cfc594d-wc774", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia06c6511232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.836965 containerd[1513]: 2025-07-06 23:38:55.776 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.836965 containerd[1513]: 2025-07-06 23:38:55.776 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia06c6511232 ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.836965 containerd[1513]: 
2025-07-06 23:38:55.782 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.837032 containerd[1513]: 2025-07-06 23:38:55.784 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0", GenerateName:"calico-apiserver-844cfc594d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b6c6dc-2aeb-4c09-a0e3-c0d895605167", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"844cfc594d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb", Pod:"calico-apiserver-844cfc594d-wc774", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia06c6511232", MAC:"ca:58:71:da:57:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.837079 containerd[1513]: 2025-07-06 23:38:55.828 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-wc774" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--wc774-eth0" Jul 6 23:38:55.866630 containerd[1513]: time="2025-07-06T23:38:55.866540654Z" level=info msg="connecting to shim 13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb" address="unix:///run/containerd/s/ea470afcbae33c59190734669da3fe788dd65a7122d1ab03b1b2005cfbd02fd0" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:55.892671 systemd-networkd[1436]: cali0272140f162: Link UP Jul 6 23:38:55.894020 systemd-networkd[1436]: cali0272140f162: Gained carrier Jul 6 23:38:55.927682 containerd[1513]: 2025-07-06 23:38:55.520 [INFO][4341] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:55.927682 containerd[1513]: 2025-07-06 23:38:55.544 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0 calico-apiserver-844cfc594d- calico-apiserver 69e2a38d-b277-4659-be5c-e834c107567f 830 0 2025-07-06 23:38:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:844cfc594d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-844cfc594d-8c9tx eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali0272140f162 [] [] }} ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-" Jul 6 23:38:55.927682 containerd[1513]: 2025-07-06 23:38:55.544 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.927682 containerd[1513]: 2025-07-06 23:38:55.626 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" HandleID="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Workload="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.626 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" HandleID="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Workload="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-844cfc594d-8c9tx", "timestamp":"2025-07-06 23:38:55.626104092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.626 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.770 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.829 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" host="localhost" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.838 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.858 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.862 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.866 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:55.927966 containerd[1513]: 2025-07-06 23:38:55.866 [INFO][4373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" host="localhost" Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.869 [INFO][4373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6 Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.876 [INFO][4373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" host="localhost" Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.885 [INFO][4373] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" host="localhost" Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.885 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" host="localhost" Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.885 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:55.928178 containerd[1513]: 2025-07-06 23:38:55.885 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" HandleID="k8s-pod-network.8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Workload="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.928291 containerd[1513]: 2025-07-06 23:38:55.889 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0", GenerateName:"calico-apiserver-844cfc594d-", Namespace:"calico-apiserver", SelfLink:"", UID:"69e2a38d-b277-4659-be5c-e834c107567f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"844cfc594d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-844cfc594d-8c9tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0272140f162", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.928340 containerd[1513]: 2025-07-06 23:38:55.889 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.928340 containerd[1513]: 2025-07-06 23:38:55.890 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0272140f162 ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.928340 containerd[1513]: 2025-07-06 23:38:55.892 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.928399 containerd[1513]: 2025-07-06 
23:38:55.897 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0", GenerateName:"calico-apiserver-844cfc594d-", Namespace:"calico-apiserver", SelfLink:"", UID:"69e2a38d-b277-4659-be5c-e834c107567f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"844cfc594d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6", Pod:"calico-apiserver-844cfc594d-8c9tx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0272140f162", MAC:"0e:65:63:ed:8c:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:55.928479 containerd[1513]: 2025-07-06 23:38:55.913 [INFO][4341] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" Namespace="calico-apiserver" Pod="calico-apiserver-844cfc594d-8c9tx" WorkloadEndpoint="localhost-k8s-calico--apiserver--844cfc594d--8c9tx-eth0" Jul 6 23:38:55.952733 containerd[1513]: time="2025-07-06T23:38:55.951644893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:55.956060 containerd[1513]: time="2025-07-06T23:38:55.956008062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.176936304s" Jul 6 23:38:55.956060 containerd[1513]: time="2025-07-06T23:38:55.956053144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:38:55.958073 containerd[1513]: time="2025-07-06T23:38:55.956468684Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:55.958073 containerd[1513]: time="2025-07-06T23:38:55.957145877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:55.958073 containerd[1513]: time="2025-07-06T23:38:55.957203439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:38:55.958731 containerd[1513]: time="2025-07-06T23:38:55.958705431Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:38:55.963350 containerd[1513]: time="2025-07-06T23:38:55.963307292Z" level=info msg="CreateContainer within sandbox \"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:38:55.977179 systemd[1]: Started cri-containerd-13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb.scope - libcontainer container 13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb. Jul 6 23:38:55.978563 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:46164.service - OpenSSH per-connection server daemon (10.0.0.1:46164). Jul 6 23:38:55.979443 containerd[1513]: time="2025-07-06T23:38:55.979349981Z" level=info msg="Container 8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:55.998853 containerd[1513]: time="2025-07-06T23:38:55.998801473Z" level=info msg="CreateContainer within sandbox \"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a\"" Jul 6 23:38:55.999866 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:56.019135 containerd[1513]: time="2025-07-06T23:38:56.018106976Z" level=info msg="StartContainer for \"8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a\"" Jul 6 23:38:56.023336 containerd[1513]: time="2025-07-06T23:38:56.021100316Z" level=info msg="connecting to shim 8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a" address="unix:///run/containerd/s/7a1ba76855887c67abf838906b2f9db69e1509377cfe68016ae619442943ab04" protocol=ttrpc version=3 Jul 6 23:38:56.051866 containerd[1513]: time="2025-07-06T23:38:56.051827430Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-844cfc594d-wc774,Uid:f6b6c6dc-2aeb-4c09-a0e3-c0d895605167,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb\"" Jul 6 23:38:56.057484 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:38:56.059464 containerd[1513]: time="2025-07-06T23:38:56.059427664Z" level=info msg="connecting to shim 8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6" address="unix:///run/containerd/s/0405e85870752eb7e6c50931fd64055f0b1401e92b4a10060e6eb9a56442e7e6" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:56.060025 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:56.064697 systemd-logind[1486]: New session 8 of user core. Jul 6 23:38:56.069190 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:38:56.078115 systemd[1]: Started cri-containerd-8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a.scope - libcontainer container 8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a. Jul 6 23:38:56.082072 systemd[1]: Started cri-containerd-8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6.scope - libcontainer container 8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6. 
Jul 6 23:38:56.096836 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:56.105048 systemd-networkd[1436]: cali816e5864e4c: Gained IPv6LL Jul 6 23:38:56.128358 containerd[1513]: time="2025-07-06T23:38:56.128247955Z" level=info msg="StartContainer for \"8a6e200bb24ca583b3c392383c4c017d975c5b0c170cb0211b890ef55eed857a\" returns successfully" Jul 6 23:38:56.128711 containerd[1513]: time="2025-07-06T23:38:56.128675255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-844cfc594d-8c9tx,Uid:69e2a38d-b277-4659-be5c-e834c107567f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6\"" Jul 6 23:38:56.322650 sshd[4566]: Connection closed by 10.0.0.1 port 46164 Jul 6 23:38:56.323009 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:56.328567 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:46164.service: Deactivated successfully. Jul 6 23:38:56.331620 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:38:56.333274 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:38:56.334737 systemd-logind[1486]: Removed session 8. 
Jul 6 23:38:56.480993 kubelet[2640]: E0706 23:38:56.480921 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:56.482369 containerd[1513]: time="2025-07-06T23:38:56.482294953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddxgk,Uid:593f9cf9-5714-4196-a312-14e156daf0d7,Namespace:kube-system,Attempt:0,}" Jul 6 23:38:56.581573 systemd-networkd[1436]: cali85e78c5666e: Link UP Jul 6 23:38:56.582300 systemd-networkd[1436]: cali85e78c5666e: Gained carrier Jul 6 23:38:56.596896 containerd[1513]: 2025-07-06 23:38:56.505 [INFO][4613] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:56.596896 containerd[1513]: 2025-07-06 23:38:56.518 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0 coredns-674b8bbfcf- kube-system 593f9cf9-5714-4196-a312-14e156daf0d7 829 0 2025-07-06 23:38:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ddxgk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85e78c5666e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-" Jul 6 23:38:56.596896 containerd[1513]: 2025-07-06 23:38:56.518 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 
23:38:56.596896 containerd[1513]: 2025-07-06 23:38:56.542 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" HandleID="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Workload="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.542 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" HandleID="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Workload="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005984d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ddxgk", "timestamp":"2025-07-06 23:38:56.542415038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.542 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.542 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.542 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.553 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" host="localhost" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.557 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.561 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.563 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.565 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:56.597166 containerd[1513]: 2025-07-06 23:38:56.565 [INFO][4628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" host="localhost" Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.567 [INFO][4628] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513 Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.570 [INFO][4628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" host="localhost" Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.576 [INFO][4628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" host="localhost" Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.577 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" host="localhost" Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.577 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:56.597381 containerd[1513]: 2025-07-06 23:38:56.577 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" HandleID="k8s-pod-network.fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Workload="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.597485 containerd[1513]: 2025-07-06 23:38:56.579 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"593f9cf9-5714-4196-a312-14e156daf0d7", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ddxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85e78c5666e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:56.597548 containerd[1513]: 2025-07-06 23:38:56.579 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.597548 containerd[1513]: 2025-07-06 23:38:56.579 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85e78c5666e ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.597548 containerd[1513]: 2025-07-06 23:38:56.582 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.597610 containerd[1513]: 2025-07-06 23:38:56.582 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"593f9cf9-5714-4196-a312-14e156daf0d7", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513", Pod:"coredns-674b8bbfcf-ddxgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85e78c5666e", MAC:"52:a7:49:cb:0b:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:56.597610 containerd[1513]: 2025-07-06 23:38:56.593 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddxgk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddxgk-eth0" Jul 6 23:38:56.625457 containerd[1513]: time="2025-07-06T23:38:56.625415991Z" level=info msg="connecting to shim fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513" address="unix:///run/containerd/s/7b9618176ae17a715586e7487750f1f1c9a8b28d8731f74aa1b923388ece4430" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:56.647120 systemd[1]: Started cri-containerd-fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513.scope - libcontainer container fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513. 
Jul 6 23:38:56.660207 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:56.691843 containerd[1513]: time="2025-07-06T23:38:56.691794968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddxgk,Uid:593f9cf9-5714-4196-a312-14e156daf0d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513\"" Jul 6 23:38:56.692797 kubelet[2640]: E0706 23:38:56.692674 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:56.699667 containerd[1513]: time="2025-07-06T23:38:56.699630133Z" level=info msg="CreateContainer within sandbox \"fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:38:56.713265 containerd[1513]: time="2025-07-06T23:38:56.712540176Z" level=info msg="Container 839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:56.719954 containerd[1513]: time="2025-07-06T23:38:56.719900119Z" level=info msg="CreateContainer within sandbox \"fc28562817a76dbaa5ebfc5698c9d2ffc0397432658560dd955764cbc3020513\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991\"" Jul 6 23:38:56.721975 containerd[1513]: time="2025-07-06T23:38:56.720689236Z" level=info msg="StartContainer for \"839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991\"" Jul 6 23:38:56.721975 containerd[1513]: time="2025-07-06T23:38:56.721535995Z" level=info msg="connecting to shim 839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991" address="unix:///run/containerd/s/7b9618176ae17a715586e7487750f1f1c9a8b28d8731f74aa1b923388ece4430" protocol=ttrpc version=3 Jul 6 
23:38:56.745473 systemd[1]: Started cri-containerd-839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991.scope - libcontainer container 839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991. Jul 6 23:38:56.783497 containerd[1513]: time="2025-07-06T23:38:56.783463045Z" level=info msg="StartContainer for \"839a322dc7b12df31d8b2a2d48055990d3a0e6d42443ec4c270ea2bc95f2c991\" returns successfully" Jul 6 23:38:57.257153 systemd-networkd[1436]: cali3bbcd9d034b: Gained IPv6LL Jul 6 23:38:57.385318 systemd-networkd[1436]: calia06c6511232: Gained IPv6LL Jul 6 23:38:57.481955 kubelet[2640]: E0706 23:38:57.480665 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:57.482337 containerd[1513]: time="2025-07-06T23:38:57.481287351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nx6kk,Uid:4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc,Namespace:calico-system,Attempt:0,}" Jul 6 23:38:57.482337 containerd[1513]: time="2025-07-06T23:38:57.481961102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mrqjt,Uid:1b89afa8-1bf1-4e93-8372-434b3f1f9f6c,Namespace:kube-system,Attempt:0,}" Jul 6 23:38:57.562675 containerd[1513]: time="2025-07-06T23:38:57.562635970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:57.563301 containerd[1513]: time="2025-07-06T23:38:57.563260198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:38:57.565244 containerd[1513]: time="2025-07-06T23:38:57.565212207Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:57.567182 
containerd[1513]: time="2025-07-06T23:38:57.567128254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:57.569623 containerd[1513]: time="2025-07-06T23:38:57.569585646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.610726687s" Jul 6 23:38:57.569751 containerd[1513]: time="2025-07-06T23:38:57.569731853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:38:57.575767 containerd[1513]: time="2025-07-06T23:38:57.573086645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:38:57.581562 containerd[1513]: time="2025-07-06T23:38:57.581522709Z" level=info msg="CreateContainer within sandbox \"c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:38:57.598722 containerd[1513]: time="2025-07-06T23:38:57.598653008Z" level=info msg="Container fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:57.612787 containerd[1513]: time="2025-07-06T23:38:57.612639843Z" level=info msg="CreateContainer within sandbox \"c8428f2350f64c8d1f6f7a2f7cda792f63589c05e9f130f7dd4c21d8dc4e970c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\"" Jul 6 
23:38:57.613938 containerd[1513]: time="2025-07-06T23:38:57.613830698Z" level=info msg="StartContainer for \"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\"" Jul 6 23:38:57.625102 containerd[1513]: time="2025-07-06T23:38:57.625060928Z" level=info msg="connecting to shim fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f" address="unix:///run/containerd/s/653a9e4f652f793f3ec226d388e1ac4563e6b2deb3465ac0f1af1e706916a702" protocol=ttrpc version=3 Jul 6 23:38:57.682257 systemd[1]: Started cri-containerd-fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f.scope - libcontainer container fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f. Jul 6 23:38:57.690534 kubelet[2640]: E0706 23:38:57.689597 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:57.723404 kubelet[2640]: I0706 23:38:57.723342 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ddxgk" podStartSLOduration=40.723325196 podStartE2EDuration="40.723325196s" podCreationTimestamp="2025-07-06 23:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:57.705530907 +0000 UTC m=+45.332079591" watchObservedRunningTime="2025-07-06 23:38:57.723325196 +0000 UTC m=+45.349873840" Jul 6 23:38:57.761253 containerd[1513]: time="2025-07-06T23:38:57.761210839Z" level=info msg="StartContainer for \"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\" returns successfully" Jul 6 23:38:57.773288 systemd-networkd[1436]: calid0cf0c6d5b2: Link UP Jul 6 23:38:57.773416 systemd-networkd[1436]: calid0cf0c6d5b2: Gained carrier Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.594 [INFO][4755] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 
23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.617 [INFO][4755] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0 goldmane-768f4c5c69- calico-system 4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc 833 0 2025-07-06 23:38:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-nx6kk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid0cf0c6d5b2 [] [] }} ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.617 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" HandleID="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Workload="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" HandleID="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Workload="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400012ee30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-nx6kk", "timestamp":"2025-07-06 23:38:57.687683695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.700 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.712 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.722 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.727 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.736 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.736 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" host="localhost" 
Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.740 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053 Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.750 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" host="localhost" Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:38:57.788582 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" HandleID="k8s-pod-network.51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Workload="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.767 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-nx6kk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0cf0c6d5b2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.767 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.767 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0cf0c6d5b2 ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.771 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.772 [INFO][4755] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 34, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053", Pod:"goldmane-768f4c5c69-nx6kk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0cf0c6d5b2", MAC:"de:ed:12:03:af:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:57.789152 containerd[1513]: 2025-07-06 23:38:57.784 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" Namespace="calico-system" Pod="goldmane-768f4c5c69-nx6kk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nx6kk-eth0" Jul 6 23:38:57.822439 containerd[1513]: time="2025-07-06T23:38:57.821306611Z" level=info msg="connecting to shim 51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053" address="unix:///run/containerd/s/cd4f640962d6602d0bd2e419ccab7eb329348218853203fb828968793f002252" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:57.833102 systemd-networkd[1436]: cali0272140f162: Gained IPv6LL Jul 6 23:38:57.861373 systemd[1]: Started cri-containerd-51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053.scope - libcontainer container 51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053. 
Jul 6 23:38:57.869129 systemd-networkd[1436]: cali69bff83032b: Link UP Jul 6 23:38:57.869361 systemd-networkd[1436]: cali69bff83032b: Gained carrier Jul 6 23:38:57.883184 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.587 [INFO][4760] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.606 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0 coredns-674b8bbfcf- kube-system 1b89afa8-1bf1-4e93-8372-434b3f1f9f6c 826 0 2025-07-06 23:38:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mrqjt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69bff83032b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.606 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.687 [INFO][4790] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" HandleID="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" 
Workload="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.688 [INFO][4790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" HandleID="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Workload="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mrqjt", "timestamp":"2025-07-06 23:38:57.68734856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.688 [INFO][4790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.759 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.800 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.807 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.822 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.824 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.834 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.834 [INFO][4790] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.836 [INFO][4790] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660 Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.847 [INFO][4790] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.857 [INFO][4790] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.857 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" host="localhost" Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.857 [INFO][4790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:38:57.889982 containerd[1513]: 2025-07-06 23:38:57.858 [INFO][4790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" HandleID="k8s-pod-network.661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Workload="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.863 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1b89afa8-1bf1-4e93-8372-434b3f1f9f6c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mrqjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69bff83032b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.863 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.863 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69bff83032b ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.869 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.870 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1b89afa8-1bf1-4e93-8372-434b3f1f9f6c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660", Pod:"coredns-674b8bbfcf-mrqjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69bff83032b", MAC:"3e:ab:cf:76:39:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:38:57.890512 containerd[1513]: 2025-07-06 23:38:57.885 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" Namespace="kube-system" Pod="coredns-674b8bbfcf-mrqjt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mrqjt-eth0" Jul 6 23:38:57.910952 containerd[1513]: time="2025-07-06T23:38:57.910685795Z" level=info msg="connecting to shim 661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660" address="unix:///run/containerd/s/17e5e7e602153876c4b5143637311f3617cb9c2e28c5b75208285706044b4e54" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:38:57.941133 systemd[1]: Started cri-containerd-661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660.scope - libcontainer container 661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660. 
Jul 6 23:38:57.944859 containerd[1513]: time="2025-07-06T23:38:57.944813346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nx6kk,Uid:4e0ebb2d-9c3f-4c45-afc1-1707b2fb0fdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053\"" Jul 6 23:38:57.962179 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:38:57.992853 containerd[1513]: time="2025-07-06T23:38:57.992809809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mrqjt,Uid:1b89afa8-1bf1-4e93-8372-434b3f1f9f6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660\"" Jul 6 23:38:57.993843 kubelet[2640]: E0706 23:38:57.993819 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:57.999372 containerd[1513]: time="2025-07-06T23:38:57.999248581Z" level=info msg="CreateContainer within sandbox \"661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:38:58.006000 containerd[1513]: time="2025-07-06T23:38:58.005957001Z" level=info msg="Container 7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:58.010794 containerd[1513]: time="2025-07-06T23:38:58.010751093Z" level=info msg="CreateContainer within sandbox \"661239c9ddef68bca8cfadfa37cb753f43d36472fbe3dc0ee4fbecb13403c660\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4\"" Jul 6 23:38:58.011340 containerd[1513]: time="2025-07-06T23:38:58.011313878Z" level=info msg="StartContainer for 
\"7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4\"" Jul 6 23:38:58.012502 containerd[1513]: time="2025-07-06T23:38:58.012477170Z" level=info msg="connecting to shim 7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4" address="unix:///run/containerd/s/17e5e7e602153876c4b5143637311f3617cb9c2e28c5b75208285706044b4e54" protocol=ttrpc version=3 Jul 6 23:38:58.035135 systemd[1]: Started cri-containerd-7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4.scope - libcontainer container 7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4. Jul 6 23:38:58.065311 containerd[1513]: time="2025-07-06T23:38:58.065260551Z" level=info msg="StartContainer for \"7c01fadbd494e6fd712e979c86c9ac7cb9c38453627c1c5eaa564fc4472df1f4\" returns successfully" Jul 6 23:38:58.601110 systemd-networkd[1436]: cali85e78c5666e: Gained IPv6LL Jul 6 23:38:58.703478 kubelet[2640]: E0706 23:38:58.703441 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:58.706370 kubelet[2640]: E0706 23:38:58.703657 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:58.709602 kubelet[2640]: I0706 23:38:58.709525 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7df7dc4d54-w5qkj" podStartSLOduration=21.947262738 podStartE2EDuration="23.709504205s" podCreationTimestamp="2025-07-06 23:38:35 +0000 UTC" firstStartedPulling="2025-07-06 23:38:55.808281262 +0000 UTC m=+43.434829946" lastFinishedPulling="2025-07-06 23:38:57.570522729 +0000 UTC m=+45.197071413" observedRunningTime="2025-07-06 23:38:58.706597436 +0000 UTC m=+46.333146120" watchObservedRunningTime="2025-07-06 23:38:58.709504205 +0000 UTC m=+46.336052889" Jul 6 23:38:58.728595 
kubelet[2640]: I0706 23:38:58.728508 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mrqjt" podStartSLOduration=41.726119902 podStartE2EDuration="41.726119902s" podCreationTimestamp="2025-07-06 23:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:38:58.721206884 +0000 UTC m=+46.347755568" watchObservedRunningTime="2025-07-06 23:38:58.726119902 +0000 UTC m=+46.352668666" Jul 6 23:38:58.794131 systemd-networkd[1436]: calid0cf0c6d5b2: Gained IPv6LL Jul 6 23:38:58.796987 containerd[1513]: time="2025-07-06T23:38:58.796683391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\" id:\"3157bb4420eed7ce52aebf9c02cdb69e6253e59b4e21b8fe10126ef1e2882a52\" pid:5035 exited_at:{seconds:1751845138 nanos:795630344}" Jul 6 23:38:59.369086 systemd-networkd[1436]: cali69bff83032b: Gained IPv6LL Jul 6 23:38:59.419571 containerd[1513]: time="2025-07-06T23:38:59.419485416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:59.420342 containerd[1513]: time="2025-07-06T23:38:59.420082322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:38:59.421315 containerd[1513]: time="2025-07-06T23:38:59.421259213Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:38:59.423253 containerd[1513]: time="2025-07-06T23:38:59.423218698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 
23:38:59.424080 containerd[1513]: time="2025-07-06T23:38:59.424037013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.850904246s" Jul 6 23:38:59.424080 containerd[1513]: time="2025-07-06T23:38:59.424070975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:38:59.424959 containerd[1513]: time="2025-07-06T23:38:59.424886450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:38:59.430903 containerd[1513]: time="2025-07-06T23:38:59.430815587Z" level=info msg="CreateContainer within sandbox \"13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:38:59.437733 containerd[1513]: time="2025-07-06T23:38:59.437679484Z" level=info msg="Container e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:38:59.453359 containerd[1513]: time="2025-07-06T23:38:59.453302921Z" level=info msg="CreateContainer within sandbox \"13c3fbc09e0f328d8e6a901a7c2b87fb0205cabbb6b2f68a7fda96da88b82ccb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070\"" Jul 6 23:38:59.454130 containerd[1513]: time="2025-07-06T23:38:59.454104836Z" level=info msg="StartContainer for \"e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070\"" Jul 6 23:38:59.455288 containerd[1513]: time="2025-07-06T23:38:59.455256245Z" level=info msg="connecting to shim 
e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070" address="unix:///run/containerd/s/ea470afcbae33c59190734669da3fe788dd65a7122d1ab03b1b2005cfbd02fd0" protocol=ttrpc version=3 Jul 6 23:38:59.480144 systemd[1]: Started cri-containerd-e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070.scope - libcontainer container e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070. Jul 6 23:38:59.519222 containerd[1513]: time="2025-07-06T23:38:59.519181454Z" level=info msg="StartContainer for \"e35829498d2bfb607d3495e1ed249585a98c5396d992171a29c5801549783070\" returns successfully" Jul 6 23:38:59.707473 kubelet[2640]: E0706 23:38:59.707339 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:59.708422 kubelet[2640]: E0706 23:38:59.708267 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:38:59.722361 kubelet[2640]: I0706 23:38:59.722292 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-844cfc594d-wc774" podStartSLOduration=26.350698617 podStartE2EDuration="29.722273609s" podCreationTimestamp="2025-07-06 23:38:30 +0000 UTC" firstStartedPulling="2025-07-06 23:38:56.053194173 +0000 UTC m=+43.679742817" lastFinishedPulling="2025-07-06 23:38:59.424769125 +0000 UTC m=+47.051317809" observedRunningTime="2025-07-06 23:38:59.721620261 +0000 UTC m=+47.348168945" watchObservedRunningTime="2025-07-06 23:38:59.722273609 +0000 UTC m=+47.348822293" Jul 6 23:39:00.503022 containerd[1513]: time="2025-07-06T23:39:00.502962846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:00.503762 containerd[1513]: 
time="2025-07-06T23:39:00.503718998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:39:00.504703 containerd[1513]: time="2025-07-06T23:39:00.504665439Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:00.507548 containerd[1513]: time="2025-07-06T23:39:00.507502759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:00.508256 containerd[1513]: time="2025-07-06T23:39:00.508222189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.083302258s" Jul 6 23:39:00.508311 containerd[1513]: time="2025-07-06T23:39:00.508256271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:39:00.510326 containerd[1513]: time="2025-07-06T23:39:00.510280396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:39:00.514758 containerd[1513]: time="2025-07-06T23:39:00.514716784Z" level=info msg="CreateContainer within sandbox \"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:39:00.525213 containerd[1513]: time="2025-07-06T23:39:00.525108784Z" level=info msg="Container 
d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:00.538836 containerd[1513]: time="2025-07-06T23:39:00.538784203Z" level=info msg="CreateContainer within sandbox \"3adfe51836ddc36342e25e0c36b99dc0b7b0a3758ca213c2390e839f5aee7939\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815\"" Jul 6 23:39:00.539942 containerd[1513]: time="2025-07-06T23:39:00.539901650Z" level=info msg="StartContainer for \"d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815\"" Jul 6 23:39:00.541578 containerd[1513]: time="2025-07-06T23:39:00.541539239Z" level=info msg="connecting to shim d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815" address="unix:///run/containerd/s/7a1ba76855887c67abf838906b2f9db69e1509377cfe68016ae619442943ab04" protocol=ttrpc version=3 Jul 6 23:39:00.568173 systemd[1]: Started cri-containerd-d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815.scope - libcontainer container d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815. 
Jul 6 23:39:00.641439 containerd[1513]: time="2025-07-06T23:39:00.641303982Z" level=info msg="StartContainer for \"d048157ba9a9d3a4bafc632b4abe24e8ea67d51dded8edbfe13054462c2df815\" returns successfully" Jul 6 23:39:00.712601 kubelet[2640]: E0706 23:39:00.712563 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:39:00.716905 kubelet[2640]: I0706 23:39:00.716861 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:00.726425 kubelet[2640]: I0706 23:39:00.726360 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s5bgr" podStartSLOduration=19.995731047 podStartE2EDuration="25.726263698s" podCreationTimestamp="2025-07-06 23:38:35 +0000 UTC" firstStartedPulling="2025-07-06 23:38:54.778754343 +0000 UTC m=+42.405303027" lastFinishedPulling="2025-07-06 23:39:00.509286994 +0000 UTC m=+48.135835678" observedRunningTime="2025-07-06 23:39:00.725471465 +0000 UTC m=+48.352020149" watchObservedRunningTime="2025-07-06 23:39:00.726263698 +0000 UTC m=+48.352812382" Jul 6 23:39:00.732973 containerd[1513]: time="2025-07-06T23:39:00.732082464Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:00.732973 containerd[1513]: time="2025-07-06T23:39:00.732118706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:39:00.735307 containerd[1513]: time="2025-07-06T23:39:00.735255519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 224.931ms" Jul 6 23:39:00.735307 containerd[1513]: time="2025-07-06T23:39:00.735303361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:39:00.738536 containerd[1513]: time="2025-07-06T23:39:00.738144001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:39:00.742140 containerd[1513]: time="2025-07-06T23:39:00.741956242Z" level=info msg="CreateContainer within sandbox \"8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:39:00.751158 containerd[1513]: time="2025-07-06T23:39:00.751068708Z" level=info msg="Container e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:00.765089 containerd[1513]: time="2025-07-06T23:39:00.764972337Z" level=info msg="CreateContainer within sandbox \"8da80320c62129214aea2727c9bfbc11bff9cf7978441725cd8ec24c23e300b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22\"" Jul 6 23:39:00.766732 containerd[1513]: time="2025-07-06T23:39:00.766682009Z" level=info msg="StartContainer for \"e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22\"" Jul 6 23:39:00.768302 containerd[1513]: time="2025-07-06T23:39:00.768262956Z" level=info msg="connecting to shim e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22" address="unix:///run/containerd/s/0405e85870752eb7e6c50931fd64055f0b1401e92b4a10060e6eb9a56442e7e6" protocol=ttrpc version=3 Jul 6 23:39:00.799214 systemd[1]: Started cri-containerd-e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22.scope - libcontainer container 
e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22. Jul 6 23:39:00.851183 containerd[1513]: time="2025-07-06T23:39:00.851142264Z" level=info msg="StartContainer for \"e257eff359ea35e04dce9d2bbbbf363cfda2ae3ea87b03690022bcd2f363cd22\" returns successfully" Jul 6 23:39:01.343325 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:46170.service - OpenSSH per-connection server daemon (10.0.0.1:46170). Jul 6 23:39:01.433813 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 46170 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:01.436919 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:01.448610 systemd-logind[1486]: New session 9 of user core. Jul 6 23:39:01.452200 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:39:01.577775 kubelet[2640]: I0706 23:39:01.577737 2640 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:39:01.577775 kubelet[2640]: I0706 23:39:01.577784 2640 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:39:01.753474 kubelet[2640]: I0706 23:39:01.753248 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-844cfc594d-8c9tx" podStartSLOduration=27.146531782 podStartE2EDuration="31.75260341s" podCreationTimestamp="2025-07-06 23:38:30 +0000 UTC" firstStartedPulling="2025-07-06 23:38:56.131893805 +0000 UTC m=+43.758442449" lastFinishedPulling="2025-07-06 23:39:00.737965393 +0000 UTC m=+48.364514077" observedRunningTime="2025-07-06 23:39:01.751384279 +0000 UTC m=+49.377932963" watchObservedRunningTime="2025-07-06 23:39:01.75260341 +0000 UTC m=+49.379152094" Jul 6 23:39:01.829798 sshd[5238]: Connection closed by 10.0.0.1 port 46170 Jul 6 
23:39:01.830444 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:01.836256 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:46170.service: Deactivated successfully. Jul 6 23:39:01.840217 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:39:01.842130 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:39:01.845510 systemd-logind[1486]: Removed session 9. Jul 6 23:39:02.729525 kubelet[2640]: I0706 23:39:02.729484 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:02.730051 kubelet[2640]: E0706 23:39:02.729874 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:39:02.731245 kubelet[2640]: I0706 23:39:02.731210 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:03.130175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993491103.mount: Deactivated successfully. 
Jul 6 23:39:03.460445 systemd-networkd[1436]: vxlan.calico: Link UP Jul 6 23:39:03.460453 systemd-networkd[1436]: vxlan.calico: Gained carrier Jul 6 23:39:03.735230 kubelet[2640]: E0706 23:39:03.735125 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:39:03.889641 containerd[1513]: time="2025-07-06T23:39:03.889584075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:03.891008 containerd[1513]: time="2025-07-06T23:39:03.890970290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:39:03.891826 containerd[1513]: time="2025-07-06T23:39:03.891795443Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:03.896474 containerd[1513]: time="2025-07-06T23:39:03.896418707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:39:03.897973 containerd[1513]: time="2025-07-06T23:39:03.897319863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.158781245s" Jul 6 23:39:03.897973 containerd[1513]: time="2025-07-06T23:39:03.897350264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:39:03.902617 containerd[1513]: time="2025-07-06T23:39:03.902564871Z" level=info msg="CreateContainer within sandbox \"51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:39:03.914959 containerd[1513]: time="2025-07-06T23:39:03.912734515Z" level=info msg="Container 26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:39:03.915714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225356827.mount: Deactivated successfully. Jul 6 23:39:03.925991 containerd[1513]: time="2025-07-06T23:39:03.925919279Z" level=info msg="CreateContainer within sandbox \"51e5d70b83923fc79389287b02ca84d60e97497cf9f933a1693fe2a3fbd66053\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\"" Jul 6 23:39:03.926511 containerd[1513]: time="2025-07-06T23:39:03.926471621Z" level=info msg="StartContainer for \"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\"" Jul 6 23:39:03.929019 containerd[1513]: time="2025-07-06T23:39:03.928984001Z" level=info msg="connecting to shim 26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92" address="unix:///run/containerd/s/cd4f640962d6602d0bd2e419ccab7eb329348218853203fb828968793f002252" protocol=ttrpc version=3 Jul 6 23:39:03.954173 systemd[1]: Started cri-containerd-26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92.scope - libcontainer container 26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92. 
Jul 6 23:39:04.020895 containerd[1513]: time="2025-07-06T23:39:04.020855237Z" level=info msg="StartContainer for \"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\" returns successfully" Jul 6 23:39:04.875524 containerd[1513]: time="2025-07-06T23:39:04.875474393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\" id:\"a6a7ce32cc1475fef6429a23441f3c275d182e3691e9b67b00fbae6b97b2d48a\" pid:5502 exit_status:1 exited_at:{seconds:1751845144 nanos:875056616}" Jul 6 23:39:05.321074 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Jul 6 23:39:05.811497 containerd[1513]: time="2025-07-06T23:39:05.811454505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\" id:\"bbc3a7565779c31d26eeb01153db305fa2852aee8ec49e5dc354e217633830ff\" pid:5530 exit_status:1 exited_at:{seconds:1751845145 nanos:811163014}" Jul 6 23:39:06.845871 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:53200.service - OpenSSH per-connection server daemon (10.0.0.1:53200). Jul 6 23:39:06.921882 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 53200 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:06.926231 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:06.930782 systemd-logind[1486]: New session 10 of user core. Jul 6 23:39:06.940225 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:39:07.167062 sshd[5545]: Connection closed by 10.0.0.1 port 53200 Jul 6 23:39:07.167549 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:07.179580 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:53200.service: Deactivated successfully. Jul 6 23:39:07.181897 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:39:07.184665 systemd-logind[1486]: Session 10 logged out. 
Waiting for processes to exit. Jul 6 23:39:07.188352 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:53212.service - OpenSSH per-connection server daemon (10.0.0.1:53212). Jul 6 23:39:07.189507 systemd-logind[1486]: Removed session 10. Jul 6 23:39:07.261434 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:07.263611 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:07.273628 systemd-logind[1486]: New session 11 of user core. Jul 6 23:39:07.280151 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:39:07.498463 sshd[5563]: Connection closed by 10.0.0.1 port 53212 Jul 6 23:39:07.499365 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:07.516079 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:53212.service: Deactivated successfully. Jul 6 23:39:07.518486 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:39:07.520553 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:39:07.523737 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226). Jul 6 23:39:07.524681 systemd-logind[1486]: Removed session 11. Jul 6 23:39:07.589917 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:07.591997 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:07.601150 systemd-logind[1486]: New session 12 of user core. Jul 6 23:39:07.612182 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 6 23:39:07.810777 sshd[5577]: Connection closed by 10.0.0.1 port 53226 Jul 6 23:39:07.811351 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:07.815764 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:53226.service: Deactivated successfully. Jul 6 23:39:07.820821 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:39:07.821850 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:39:07.824259 systemd-logind[1486]: Removed session 12. Jul 6 23:39:12.834788 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:37572.service - OpenSSH per-connection server daemon (10.0.0.1:37572). Jul 6 23:39:12.900684 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 37572 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:12.902632 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:12.907767 systemd-logind[1486]: New session 13 of user core. Jul 6 23:39:12.915129 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:39:13.072491 sshd[5608]: Connection closed by 10.0.0.1 port 37572 Jul 6 23:39:13.071636 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:13.082137 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:39:13.082332 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:37572.service: Deactivated successfully. Jul 6 23:39:13.086044 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:39:13.087864 systemd-logind[1486]: Removed session 13. 
Jul 6 23:39:13.603832 kubelet[2640]: I0706 23:39:13.603437 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:39:13.630908 kubelet[2640]: I0706 23:39:13.630344 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nx6kk" podStartSLOduration=33.680005576 podStartE2EDuration="39.630326228s" podCreationTimestamp="2025-07-06 23:38:34 +0000 UTC" firstStartedPulling="2025-07-06 23:38:57.947887446 +0000 UTC m=+45.574436090" lastFinishedPulling="2025-07-06 23:39:03.898208098 +0000 UTC m=+51.524756742" observedRunningTime="2025-07-06 23:39:04.756494314 +0000 UTC m=+52.383042998" watchObservedRunningTime="2025-07-06 23:39:13.630326228 +0000 UTC m=+61.256874912" Jul 6 23:39:15.580326 containerd[1513]: time="2025-07-06T23:39:15.580216605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\" id:\"373ce5e4ef60ff929d038a007c18df99910d8134b27be3e680627eea24d55031\" pid:5643 exited_at:{seconds:1751845155 nanos:579937436}" Jul 6 23:39:18.083846 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:37574.service - OpenSSH per-connection server daemon (10.0.0.1:37574). Jul 6 23:39:18.156071 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 37574 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:18.157489 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:18.161393 systemd-logind[1486]: New session 14 of user core. Jul 6 23:39:18.171136 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:39:18.382459 sshd[5662]: Connection closed by 10.0.0.1 port 37574 Jul 6 23:39:18.383007 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:18.391476 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:37574.service: Deactivated successfully. 
Jul 6 23:39:18.393446 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:39:18.394325 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:39:18.395615 systemd-logind[1486]: Removed session 14. Jul 6 23:39:19.693041 containerd[1513]: time="2025-07-06T23:39:19.692995473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7df7f9c49cd83fa4dfe923d52d15eabd9f091571f38c1c372f6650a6dabfbb4e\" id:\"faf479ec3adb48e6586d665be0f374549579741555302570d95afb0da7642691\" pid:5687 exited_at:{seconds:1751845159 nanos:692668702}" Jul 6 23:39:23.395981 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:41904.service - OpenSSH per-connection server daemon (10.0.0.1:41904). Jul 6 23:39:23.487590 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 41904 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:23.489280 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:23.494205 systemd-logind[1486]: New session 15 of user core. Jul 6 23:39:23.502142 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:39:23.682422 sshd[5704]: Connection closed by 10.0.0.1 port 41904 Jul 6 23:39:23.681440 sshd-session[5702]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:23.685053 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:41904.service: Deactivated successfully. Jul 6 23:39:23.688442 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:39:23.689115 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:39:23.690711 systemd-logind[1486]: Removed session 15. 
Jul 6 23:39:25.480715 kubelet[2640]: E0706 23:39:25.480658 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:39:28.695613 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:41918.service - OpenSSH per-connection server daemon (10.0.0.1:41918). Jul 6 23:39:28.731079 containerd[1513]: time="2025-07-06T23:39:28.731042354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\" id:\"696fb5931d87ecd96978dd8e05c01d4bcd39ad2c44813bf41bdcbb5cf535eb55\" pid:5740 exited_at:{seconds:1751845168 nanos:730739038}" Jul 6 23:39:28.743512 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 41918 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:28.744922 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:28.749386 systemd-logind[1486]: New session 16 of user core. Jul 6 23:39:28.763125 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:39:28.913952 sshd[5751]: Connection closed by 10.0.0.1 port 41918 Jul 6 23:39:28.914824 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:28.924474 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:41918.service: Deactivated successfully. Jul 6 23:39:28.927650 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:39:28.928561 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:39:28.931256 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:41932.service - OpenSSH per-connection server daemon (10.0.0.1:41932). Jul 6 23:39:28.933448 systemd-logind[1486]: Removed session 16. 
Jul 6 23:39:28.987503 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 41932 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:28.988863 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:28.993049 systemd-logind[1486]: New session 17 of user core. Jul 6 23:39:29.003142 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:39:29.213158 sshd[5767]: Connection closed by 10.0.0.1 port 41932 Jul 6 23:39:29.213498 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:29.225761 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:41932.service: Deactivated successfully. Jul 6 23:39:29.227617 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:39:29.228391 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:39:29.230948 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:41934.service - OpenSSH per-connection server daemon (10.0.0.1:41934). Jul 6 23:39:29.233336 systemd-logind[1486]: Removed session 17. Jul 6 23:39:29.280575 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 41934 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:39:29.282032 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:29.287242 systemd-logind[1486]: New session 18 of user core. Jul 6 23:39:29.297125 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:39:30.055006 sshd[5781]: Connection closed by 10.0.0.1 port 41934 Jul 6 23:39:30.055523 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:30.065203 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:41934.service: Deactivated successfully. Jul 6 23:39:30.067477 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:39:30.070062 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. 
Jul 6 23:39:30.074170 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:41936.service - OpenSSH per-connection server daemon (10.0.0.1:41936).
Jul 6 23:39:30.076999 systemd-logind[1486]: Removed session 18.
Jul 6 23:39:30.130001 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 41936 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:39:30.131357 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:30.135618 systemd-logind[1486]: New session 19 of user core.
Jul 6 23:39:30.153135 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:39:30.451185 sshd[5804]: Connection closed by 10.0.0.1 port 41936
Jul 6 23:39:30.449973 sshd-session[5802]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:30.461668 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:41936.service: Deactivated successfully.
Jul 6 23:39:30.465752 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:39:30.467220 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:39:30.471228 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:41938.service - OpenSSH per-connection server daemon (10.0.0.1:41938).
Jul 6 23:39:30.472216 systemd-logind[1486]: Removed session 19.
Jul 6 23:39:30.529491 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:39:30.532340 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:30.545046 systemd-logind[1486]: New session 20 of user core.
Jul 6 23:39:30.561149 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:39:30.708606 sshd[5818]: Connection closed by 10.0.0.1 port 41938
Jul 6 23:39:30.708823 sshd-session[5816]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:30.712767 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:41938.service: Deactivated successfully.
Jul 6 23:39:30.716866 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:39:30.718020 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:39:30.720148 systemd-logind[1486]: Removed session 20.
Jul 6 23:39:35.726900 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:36112.service - OpenSSH per-connection server daemon (10.0.0.1:36112).
Jul 6 23:39:35.797115 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 36112 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:39:35.802071 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:35.806998 systemd-logind[1486]: New session 21 of user core.
Jul 6 23:39:35.813133 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:39:35.847396 containerd[1513]: time="2025-07-06T23:39:35.847342339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26be662d3838c659ce062c6cd96ceb8f31cb1b99e4d00fa07f3841d3d0988b92\" id:\"d7d1ebb21ced6d8ec1313d78a012e64724fd30706102fd1d0ccdd891ff4c6deb\" pid:5848 exited_at:{seconds:1751845175 nanos:846870902}"
Jul 6 23:39:35.982396 sshd[5860]: Connection closed by 10.0.0.1 port 36112
Jul 6 23:39:35.983766 sshd-session[5834]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:35.987370 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:36112.service: Deactivated successfully.
Jul 6 23:39:35.990654 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:39:35.991413 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:39:35.993169 systemd-logind[1486]: Removed session 21.
Jul 6 23:39:40.484104 kubelet[2640]: E0706 23:39:40.484067 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:39:40.998408 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122).
Jul 6 23:39:41.058728 sshd[5877]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:39:41.059968 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:41.066175 systemd-logind[1486]: New session 22 of user core.
Jul 6 23:39:41.073781 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:39:41.166522 kubelet[2640]: I0706 23:39:41.166468 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:39:41.242899 sshd[5879]: Connection closed by 10.0.0.1 port 36122
Jul 6 23:39:41.243300 sshd-session[5877]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:41.247174 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:36122.service: Deactivated successfully.
Jul 6 23:39:41.249175 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:39:41.249957 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:39:41.251776 systemd-logind[1486]: Removed session 22.
Jul 6 23:39:41.481329 kubelet[2640]: E0706 23:39:41.481294 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:39:42.489184 kubelet[2640]: E0706 23:39:42.489074 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:39:46.259049 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:33368.service - OpenSSH per-connection server daemon (10.0.0.1:33368).
Jul 6 23:39:46.320642 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 33368 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:39:46.321359 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:39:46.327132 systemd-logind[1486]: New session 23 of user core.
Jul 6 23:39:46.336123 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:39:46.553967 sshd[5904]: Connection closed by 10.0.0.1 port 33368
Jul 6 23:39:46.553529 sshd-session[5902]: pam_unix(sshd:session): session closed for user core
Jul 6 23:39:46.568047 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:33368.service: Deactivated successfully.
Jul 6 23:39:46.574218 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:39:46.575231 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:39:46.577772 systemd-logind[1486]: Removed session 23.
Jul 6 23:39:47.967063 containerd[1513]: time="2025-07-06T23:39:47.966783803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa1578db5b3daca9d14316a379f1b7eb82acf2b301fbc51815727f6a502e010f\" id:\"fe708248f78021eb76df6e85258414de5be60127e6ae6eb8252fdf9029afe9d9\" pid:5931 exited_at:{seconds:1751845187 nanos:966532483}"