Jul 6 23:43:08.837062 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:43:08.837089 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:43:08.837100 kernel: KASLR enabled
Jul 6 23:43:08.837106 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:43:08.837112 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 6 23:43:08.837117 kernel: random: crng init done
Jul 6 23:43:08.837125 kernel: secureboot: Secure boot disabled
Jul 6 23:43:08.837131 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:43:08.837137 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 6 23:43:08.837144 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:43:08.837150 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837156 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837162 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837168 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837175 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837183 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837189 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837195 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837201 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:08.837207 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:43:08.837214 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:43:08.837220 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:08.837226 kernel: NODE_DATA(0) allocated [mem 0xdc964a00-0xdc96bfff]
Jul 6 23:43:08.837232 kernel: Zone ranges:
Jul 6 23:43:08.837238 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:08.837246 kernel: DMA32 empty
Jul 6 23:43:08.837252 kernel: Normal empty
Jul 6 23:43:08.837258 kernel: Device empty
Jul 6 23:43:08.837267 kernel: Movable zone start for each node
Jul 6 23:43:08.837274 kernel: Early memory node ranges
Jul 6 23:43:08.837280 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 6 23:43:08.837286 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 6 23:43:08.837292 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 6 23:43:08.837298 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 6 23:43:08.837305 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 6 23:43:08.837311 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 6 23:43:08.837317 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 6 23:43:08.837324 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 6 23:43:08.837330 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 6 23:43:08.837336 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 6 23:43:08.837345 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 6 23:43:08.837352 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 6 23:43:08.837358 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:43:08.837366 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:08.837373 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:43:08.837380 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 6 23:43:08.837389 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:43:08.837412 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:43:08.837418 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:43:08.837425 kernel: psci: Trusted OS migration not required
Jul 6 23:43:08.837432 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:43:08.837438 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:43:08.837445 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:43:08.837454 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:43:08.837461 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:43:08.837467 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:43:08.837474 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:43:08.837480 kernel: CPU features: detected: Spectre-v4
Jul 6 23:43:08.837487 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:43:08.837494 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:43:08.837500 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:43:08.837507 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:43:08.837513 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:43:08.837520 kernel: alternatives: applying boot alternatives
Jul 6 23:43:08.837528 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:43:08.837536 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:43:08.837543 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:43:08.837550 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:43:08.837556 kernel: Fallback order for Node 0: 0
Jul 6 23:43:08.837562 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:43:08.837569 kernel: Policy zone: DMA
Jul 6 23:43:08.837575 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:43:08.837582 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:43:08.837589 kernel: software IO TLB: area num 4.
Jul 6 23:43:08.837595 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:43:08.837602 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 6 23:43:08.837610 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:43:08.837616 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:43:08.837624 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:43:08.837631 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:43:08.837638 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:43:08.837644 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:43:08.837656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:43:08.837664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:43:08.837671 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:43:08.837677 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:43:08.837684 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:43:08.837692 kernel: GICv3: 256 SPIs implemented
Jul 6 23:43:08.837699 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:43:08.837706 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:43:08.837712 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:43:08.837719 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:43:08.837725 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:43:08.837732 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:43:08.837739 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:43:08.837746 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:43:08.837752 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:43:08.837759 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:43:08.837766 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:43:08.837774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:08.837781 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:43:08.837787 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:43:08.837794 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:43:08.837801 kernel: arm-pv: using stolen time PV
Jul 6 23:43:08.837808 kernel: Console: colour dummy device 80x25
Jul 6 23:43:08.837815 kernel: ACPI: Core revision 20240827
Jul 6 23:43:08.837822 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:43:08.837829 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:43:08.837835 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:43:08.837844 kernel: landlock: Up and running.
Jul 6 23:43:08.837851 kernel: SELinux: Initializing.
Jul 6 23:43:08.837857 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:43:08.837864 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:43:08.837871 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:43:08.837878 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:43:08.837885 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:43:08.837892 kernel: Remapping and enabling EFI services.
Jul 6 23:43:08.837899 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:43:08.837912 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:43:08.837919 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:43:08.837926 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:43:08.837935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:08.837942 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:43:08.837949 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:43:08.837956 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:43:08.837964 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:43:08.837973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:08.837980 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:43:08.837990 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:43:08.838000 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:43:08.838009 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:43:08.838016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:08.838023 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:43:08.838030 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:43:08.838038 kernel: SMP: Total of 4 processors activated.
Jul 6 23:43:08.838047 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:43:08.838054 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:43:08.838061 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:43:08.838069 kernel: CPU features: detected: Common not Private translations
Jul 6 23:43:08.838076 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:43:08.838083 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:43:08.838090 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:43:08.838098 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:43:08.838105 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:43:08.838114 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:43:08.838121 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:43:08.838128 kernel: alternatives: applying system-wide alternatives
Jul 6 23:43:08.838136 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:43:08.838144 kernel: Memory: 2423964K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125988K reserved, 16384K cma-reserved)
Jul 6 23:43:08.838151 kernel: devtmpfs: initialized
Jul 6 23:43:08.838158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:43:08.838165 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:43:08.838172 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:43:08.838181 kernel: 0 pages in range for non-PLT usage
Jul 6 23:43:08.838188 kernel: 508432 pages in range for PLT usage
Jul 6 23:43:08.838195 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:43:08.838202 kernel: SMBIOS 3.0.0 present.
Jul 6 23:43:08.838209 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:43:08.838216 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:43:08.838223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:43:08.838230 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:43:08.838238 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:43:08.838246 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:43:08.838253 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:43:08.838261 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 6 23:43:08.838268 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:43:08.838275 kernel: cpuidle: using governor menu
Jul 6 23:43:08.838282 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:43:08.838289 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:43:08.838296 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:43:08.838303 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:43:08.838311 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:43:08.838319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:43:08.838326 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:43:08.838333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:43:08.838340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:43:08.838347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:43:08.838354 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:43:08.838361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:43:08.838369 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:43:08.838376 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:43:08.838384 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:43:08.838391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:43:08.838405 kernel: ACPI: Interpreter enabled
Jul 6 23:43:08.838412 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:43:08.838419 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:43:08.838426 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:43:08.838433 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:43:08.838440 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:43:08.838447 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:43:08.838456 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:43:08.838464 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:43:08.838471 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:43:08.838628 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:43:08.838707 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:43:08.838770 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:43:08.838832 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:43:08.838896 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:43:08.838905 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:43:08.838912 kernel: PCI host bridge to bus 0000:00
Jul 6 23:43:08.838982 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:43:08.839039 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:43:08.839095 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:43:08.839152 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:43:08.839239 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:43:08.839313 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:43:08.839377 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:43:08.839459 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:43:08.839526 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:43:08.839589 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:43:08.839659 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:43:08.839749 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:43:08.839812 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:43:08.839870 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:43:08.839927 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:43:08.839937 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:43:08.839944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:43:08.839952 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:43:08.839962 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:43:08.839969 kernel: iommu: Default domain type: Translated
Jul 6 23:43:08.839976 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:43:08.839984 kernel: efivars: Registered efivars operations
Jul 6 23:43:08.839991 kernel: vgaarb: loaded
Jul 6 23:43:08.840001 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:43:08.840011 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:43:08.840019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:43:08.840026 kernel: pnp: PnP ACPI init
Jul 6 23:43:08.840108 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:43:08.840118 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:43:08.840126 kernel: NET: Registered PF_INET protocol family
Jul 6 23:43:08.840133 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:43:08.840140 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:43:08.840148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:43:08.840155 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:43:08.840163 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:43:08.840172 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:43:08.840180 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:43:08.840187 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:43:08.840194 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:43:08.840202 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:43:08.840209 kernel: kvm [1]: HYP mode not available
Jul 6 23:43:08.840217 kernel: Initialise system trusted keyrings
Jul 6 23:43:08.840224 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:43:08.840232 kernel: Key type asymmetric registered
Jul 6 23:43:08.840240 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:43:08.840248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:43:08.840255 kernel: io scheduler mq-deadline registered
Jul 6 23:43:08.840262 kernel: io scheduler kyber registered
Jul 6 23:43:08.840269 kernel: io scheduler bfq registered
Jul 6 23:43:08.840276 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:43:08.840284 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:43:08.840292 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:43:08.840364 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:43:08.840373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:43:08.840382 kernel: thunder_xcv, ver 1.0
Jul 6 23:43:08.840390 kernel: thunder_bgx, ver 1.0
Jul 6 23:43:08.840487 kernel: nicpf, ver 1.0
Jul 6 23:43:08.840497 kernel: nicvf, ver 1.0
Jul 6 23:43:08.840605 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:43:08.840692 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:43:08 UTC (1751845388)
Jul 6 23:43:08.840703 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:43:08.840711 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:43:08.840729 kernel: watchdog: NMI not fully supported
Jul 6 23:43:08.840736 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:43:08.840743 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:43:08.840751 kernel: Segment Routing with IPv6
Jul 6 23:43:08.840758 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:43:08.840765 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:43:08.840772 kernel: Key type dns_resolver registered
Jul 6 23:43:08.840779 kernel: registered taskstats version 1
Jul 6 23:43:08.840786 kernel: Loading compiled-in X.509 certificates
Jul 6 23:43:08.840794 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:43:08.840802 kernel: Demotion targets for Node 0: null
Jul 6 23:43:08.840809 kernel: Key type .fscrypt registered
Jul 6 23:43:08.840815 kernel: Key type fscrypt-provisioning registered
Jul 6 23:43:08.840822 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:43:08.840830 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:43:08.840837 kernel: ima: No architecture policies found
Jul 6 23:43:08.840844 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:43:08.840853 kernel: clk: Disabling unused clocks
Jul 6 23:43:08.840860 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:43:08.840867 kernel: Warning: unable to open an initial console.
Jul 6 23:43:08.840874 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:43:08.840881 kernel: Run /init as init process
Jul 6 23:43:08.840888 kernel: with arguments:
Jul 6 23:43:08.840895 kernel: /init
Jul 6 23:43:08.840902 kernel: with environment:
Jul 6 23:43:08.840909 kernel: HOME=/
Jul 6 23:43:08.840916 kernel: TERM=linux
Jul 6 23:43:08.840924 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:43:08.840933 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:43:08.840943 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:43:08.840952 systemd[1]: Detected virtualization kvm.
Jul 6 23:43:08.840959 systemd[1]: Detected architecture arm64.
Jul 6 23:43:08.840967 systemd[1]: Running in initrd.
Jul 6 23:43:08.840974 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:43:08.840983 systemd[1]: Hostname set to .
Jul 6 23:43:08.840991 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:43:08.840999 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:43:08.841006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:43:08.841020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:43:08.841029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:43:08.841043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:43:08.841051 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:43:08.841071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:43:08.841088 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:43:08.841096 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:43:08.841104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:43:08.841113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:43:08.841121 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:43:08.841128 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:43:08.841137 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:43:08.841145 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:43:08.841153 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:43:08.841167 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:43:08.841175 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:43:08.841182 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:43:08.841190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:43:08.841198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:43:08.841207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:43:08.841215 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:43:08.841222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:43:08.841241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:43:08.841249 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:43:08.841257 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:43:08.841265 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:43:08.841272 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:43:08.841280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:43:08.841290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:43:08.841297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:43:08.841306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:43:08.841313 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:43:08.841323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:43:08.841349 systemd-journald[245]: Collecting audit messages is disabled.
Jul 6 23:43:08.841369 systemd-journald[245]: Journal started
Jul 6 23:43:08.841390 systemd-journald[245]: Runtime Journal (/run/log/journal/a1277263d44b4caba72109a092725d36) is 6M, max 48.5M, 42.4M free.
Jul 6 23:43:08.835909 systemd-modules-load[246]: Inserted module 'overlay'
Jul 6 23:43:08.847761 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:43:08.848462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:43:08.851900 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:43:08.855394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:43:08.855439 kernel: Bridge firewalling registered
Jul 6 23:43:08.853769 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 6 23:43:08.854551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:43:08.857584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:43:08.863968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:43:08.867076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:43:08.868627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:43:08.870095 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:43:08.873693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:43:08.881957 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:43:08.884227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:43:08.886452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:43:08.888980 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:43:08.891304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:43:08.923703 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:43:08.939705 systemd-resolved[290]: Positive Trust Anchors:
Jul 6 23:43:08.939725 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:43:08.939756 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:43:08.945546 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jul 6 23:43:08.946711 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:43:08.948165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:43:09.036415 kernel: SCSI subsystem initialized Jul 6 23:43:09.040421 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:43:09.053435 kernel: iscsi: registered transport (tcp) Jul 6 23:43:09.070434 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:43:09.070465 kernel: QLogic iSCSI HBA Driver Jul 6 23:43:09.088284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:43:09.108360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:43:09.111579 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:43:09.159694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:43:09.162039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:43:09.229449 kernel: raid6: neonx8 gen() 15776 MB/s Jul 6 23:43:09.246414 kernel: raid6: neonx4 gen() 15814 MB/s Jul 6 23:43:09.263415 kernel: raid6: neonx2 gen() 13208 MB/s Jul 6 23:43:09.280413 kernel: raid6: neonx1 gen() 10437 MB/s Jul 6 23:43:09.297414 kernel: raid6: int64x8 gen() 6902 MB/s Jul 6 23:43:09.314417 kernel: raid6: int64x4 gen() 7359 MB/s Jul 6 23:43:09.331418 kernel: raid6: int64x2 gen() 6105 MB/s Jul 6 23:43:09.348430 kernel: raid6: int64x1 gen() 5059 MB/s Jul 6 23:43:09.348490 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s Jul 6 23:43:09.365421 kernel: raid6: .... xor() 12347 MB/s, rmw enabled
Jul 6 23:43:09.365445 kernel: raid6: using neon recovery algorithm Jul 6 23:43:09.372418 kernel: xor: measuring software checksum speed Jul 6 23:43:09.372442 kernel: 8regs : 21664 MB/sec Jul 6 23:43:09.372451 kernel: 32regs : 20423 MB/sec Jul 6 23:43:09.373778 kernel: arm64_neon : 28061 MB/sec Jul 6 23:43:09.373793 kernel: xor: using function: arm64_neon (28061 MB/sec) Jul 6 23:43:09.428427 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:43:09.435018 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:43:09.437467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:43:09.460115 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jul 6 23:43:09.464257 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:43:09.466815 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:43:09.498287 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jul 6 23:43:09.523446 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:43:09.526022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:43:09.585389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:43:09.592059 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:43:09.639455 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 6 23:43:09.646727 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:43:09.650472 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:43:09.650526 kernel: GPT:9289727 != 19775487 Jul 6 23:43:09.650537 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 6 23:43:09.650546 kernel: GPT:9289727 != 19775487 Jul 6 23:43:09.651620 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:43:09.651675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:09.655375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:43:09.655516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:43:09.658543 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:43:09.660510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:43:09.684165 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:43:09.692465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:43:09.693782 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:43:09.696343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:43:09.708917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:43:09.709843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:43:09.718414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:43:09.719367 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:43:09.721124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:43:09.722953 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:43:09.725371 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:43:09.726966 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:43:09.743546 disk-uuid[593]: Primary Header is updated. 
Jul 6 23:43:09.743546 disk-uuid[593]: Secondary Entries is updated. Jul 6 23:43:09.743546 disk-uuid[593]: Secondary Header is updated. Jul 6 23:43:09.746591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:09.748835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:43:10.762418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:10.762827 disk-uuid[596]: The operation has completed successfully. Jul 6 23:43:10.798205 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:43:10.799351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:43:10.825020 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:43:10.847596 sh[612]: Success Jul 6 23:43:10.861451 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:43:10.861514 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:43:10.861536 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:43:10.869428 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:43:10.904221 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:43:10.905883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:43:10.922019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:43:10.929942 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:43:10.929979 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624) Jul 6 23:43:10.931069 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d Jul 6 23:43:10.931834 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:43:10.933912 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:43:10.937956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:43:10.939063 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:43:10.940331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:43:10.941150 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:43:10.943585 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:43:10.969434 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (656) Jul 6 23:43:10.971617 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:43:10.971666 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:43:10.971677 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:43:10.977426 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:43:10.978139 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:43:10.980858 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:43:11.057486 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 6 23:43:11.061967 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:43:11.114311 systemd-networkd[799]: lo: Link UP Jul 6 23:43:11.114322 systemd-networkd[799]: lo: Gained carrier Jul 6 23:43:11.115868 systemd-networkd[799]: Enumeration completed Jul 6 23:43:11.116136 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:43:11.116496 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:43:11.116500 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:43:11.117309 systemd[1]: Reached target network.target - Network. Jul 6 23:43:11.117691 systemd-networkd[799]: eth0: Link UP Jul 6 23:43:11.117694 systemd-networkd[799]: eth0: Gained carrier Jul 6 23:43:11.117702 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:43:11.135969 ignition[698]: Ignition 2.21.0 Jul 6 23:43:11.135981 ignition[698]: Stage: fetch-offline Jul 6 23:43:11.136014 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:11.136022 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:11.136210 ignition[698]: parsed url from cmdline: "" Jul 6 23:43:11.136213 ignition[698]: no config URL provided Jul 6 23:43:11.136218 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:43:11.136225 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:43:11.136244 ignition[698]: op(1): [started] loading QEMU firmware config module Jul 6 23:43:11.141666 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:43:11.136248 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:43:11.146576 ignition[698]: op(1): [finished] loading QEMU firmware config module Jul 6 23:43:11.190999 ignition[698]: parsing config with SHA512: e285f6a65b8c65a997c2a8bbd8d5ee3a7283d4759822c064dac9d65ae8a3ccfb1bdfa5edcfea219c242bb4fe42c2b23323443aff41889f8d259ba787deaa559f Jul 6 23:43:11.196009 unknown[698]: fetched base config from "system" Jul 6 23:43:11.196023 unknown[698]: fetched user config from "qemu" Jul 6 23:43:11.196472 ignition[698]: fetch-offline: fetch-offline passed Jul 6 23:43:11.198555 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:43:11.196529 ignition[698]: Ignition finished successfully Jul 6 23:43:11.200112 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:43:11.200895 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:43:11.236779 ignition[812]: Ignition 2.21.0 Jul 6 23:43:11.236791 ignition[812]: Stage: kargs Jul 6 23:43:11.237111 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:11.237122 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:11.238323 ignition[812]: kargs: kargs passed Jul 6 23:43:11.240835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:43:11.238376 ignition[812]: Ignition finished successfully Jul 6 23:43:11.244386 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:43:11.270381 ignition[820]: Ignition 2.21.0 Jul 6 23:43:11.270408 ignition[820]: Stage: disks Jul 6 23:43:11.270555 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:11.270564 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:11.273245 ignition[820]: disks: disks passed Jul 6 23:43:11.273307 ignition[820]: Ignition finished successfully Jul 6 23:43:11.275379 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:43:11.276807 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:43:11.278325 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:43:11.280206 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:43:11.281985 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:43:11.283519 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:43:11.285902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:43:11.307663 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:43:11.314459 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:43:11.317492 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 6 23:43:11.389417 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none. Jul 6 23:43:11.389553 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:43:11.390639 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:43:11.392932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:43:11.394349 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:43:11.395439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:43:11.395481 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:43:11.395505 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:43:11.409203 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:43:11.411322 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:43:11.419130 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 6 23:43:11.419174 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:43:11.419186 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:43:11.419829 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:43:11.422974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:43:11.455222 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:43:11.458483 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:43:11.461598 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:43:11.464759 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:43:11.534375 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:43:11.536080 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:43:11.537464 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:43:11.556411 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:43:11.568924 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:43:11.574562 ignition[952]: INFO : Ignition 2.21.0 Jul 6 23:43:11.574562 ignition[952]: INFO : Stage: mount Jul 6 23:43:11.576332 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:11.576332 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:11.576332 ignition[952]: INFO : mount: mount passed Jul 6 23:43:11.576332 ignition[952]: INFO : Ignition finished successfully Jul 6 23:43:11.578483 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:43:11.580856 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:43:11.929549 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:43:11.931012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 6 23:43:11.949509 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 6 23:43:11.949545 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:43:11.949555 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:43:11.950601 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:43:11.953058 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:43:11.984446 ignition[982]: INFO : Ignition 2.21.0 Jul 6 23:43:11.984446 ignition[982]: INFO : Stage: files Jul 6 23:43:11.986415 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:11.986415 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:11.986415 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:43:11.989620 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:43:11.989620 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:43:11.992224 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:43:11.992224 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:43:11.992224 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:43:11.991751 unknown[982]: wrote ssh authorized keys file for user: core Jul 6 23:43:11.997248 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:43:11.997248 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 6 23:43:12.036469 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:43:12.258497 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:43:12.262440 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:43:12.274803 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 6 23:43:12.769931 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:43:12.841679 systemd-networkd[799]: eth0: Gained IPv6LL Jul 6 23:43:13.985884 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:43:13.985884 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 6 23:43:13.989517 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:43:14.007078 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:43:14.010356 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:43:14.011949 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:43:14.011949 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:43:14.011949 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:43:14.011949 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:43:14.011949 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:43:14.011949 ignition[982]: INFO : files: files passed Jul 6 23:43:14.011949 ignition[982]: INFO : Ignition finished successfully Jul 6 23:43:14.013742 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:43:14.017289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:43:14.021595 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:43:14.034833 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:43:14.034932 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 6 23:43:14.037297 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:43:14.038851 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:43:14.038851 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:43:14.041518 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:43:14.042502 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:43:14.044179 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:43:14.046422 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:43:14.079138 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:43:14.079255 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:43:14.081024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:43:14.082796 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:43:14.084279 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:43:14.085124 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:43:14.110437 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:43:14.113344 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:43:14.133364 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:43:14.135561 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:43:14.136934 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 6 23:43:14.138749 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:43:14.138896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:43:14.141377 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:43:14.143393 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:43:14.145075 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:43:14.146828 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:43:14.148742 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:43:14.150753 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:43:14.152900 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:43:14.154747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:43:14.156716 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:43:14.158727 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:43:14.160473 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:43:14.162044 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:43:14.162197 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:43:14.164642 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:43:14.166682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:43:14.168597 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:43:14.169485 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:43:14.170774 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:43:14.170906 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 6 23:43:14.173707 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:43:14.173832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:43:14.175785 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:43:14.177362 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:43:14.178493 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:43:14.180598 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:43:14.182093 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:43:14.183887 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:43:14.184009 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:43:14.186116 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:43:14.186196 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:43:14.187879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:43:14.188008 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:43:14.189768 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:43:14.189875 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:43:14.192232 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:43:14.194624 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:43:14.195677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:43:14.195824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:43:14.197853 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:43:14.197945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:43:14.203670 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 6 23:43:14.212571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:43:14.221094 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:43:14.227850 ignition[1037]: INFO : Ignition 2.21.0 Jul 6 23:43:14.227850 ignition[1037]: INFO : Stage: umount Jul 6 23:43:14.227850 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:43:14.227850 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:43:14.231847 ignition[1037]: INFO : umount: umount passed Jul 6 23:43:14.231847 ignition[1037]: INFO : Ignition finished successfully Jul 6 23:43:14.232040 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:43:14.233432 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:43:14.234375 systemd[1]: Stopped target network.target - Network. Jul 6 23:43:14.235671 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:43:14.235723 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:43:14.237165 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:43:14.237199 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:43:14.238609 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:43:14.238658 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:43:14.240146 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:43:14.240183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:43:14.241875 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:43:14.243501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:43:14.250673 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:43:14.251504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jul 6 23:43:14.254832 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:43:14.255113 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:43:14.255152 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:43:14.258912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:43:14.261778 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:43:14.261936 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:43:14.264873 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:43:14.265035 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:43:14.266934 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:43:14.266970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:43:14.271235 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:43:14.272266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:43:14.272347 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:43:14.274480 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:43:14.274531 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:43:14.277732 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:43:14.277793 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:43:14.279886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:43:14.283796 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:43:14.284090 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 6 23:43:14.284166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:43:14.287031 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:43:14.287122 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:43:14.297073 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:43:14.297249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:43:14.300305 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:43:14.300490 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:43:14.302003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:43:14.302052 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:43:14.303173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:43:14.303201 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:43:14.304504 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:43:14.304555 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:43:14.306599 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:43:14.306662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:43:14.308630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:43:14.308692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:43:14.311832 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:43:14.313360 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:43:14.313433 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:43:14.316152 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 6 23:43:14.316200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:43:14.319002 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:43:14.319139 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:43:14.321589 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:43:14.321641 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:43:14.323372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:43:14.323428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:43:14.345970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:43:14.346100 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:43:14.347902 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:43:14.350086 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:43:14.386479 systemd[1]: Switching root. Jul 6 23:43:14.431436 systemd-journald[245]: Journal stopped Jul 6 23:43:15.270226 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:43:15.270273 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:43:15.270286 kernel: SELinux: policy capability open_perms=1 Jul 6 23:43:15.270296 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:43:15.270306 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:43:15.270319 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:43:15.270333 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:43:15.270347 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:43:15.270363 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:43:15.270373 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:43:15.270382 kernel: audit: type=1403 audit(1751845394.607:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:43:15.270392 systemd[1]: Successfully loaded SELinux policy in 48.395ms. Jul 6 23:43:15.270430 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.843ms. Jul 6 23:43:15.270443 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:43:15.270454 systemd[1]: Detected virtualization kvm. Jul 6 23:43:15.270464 systemd[1]: Detected architecture arm64. Jul 6 23:43:15.270477 systemd[1]: Detected first boot. Jul 6 23:43:15.270488 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:43:15.270498 zram_generator::config[1082]: No configuration found. Jul 6 23:43:15.270509 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:43:15.270519 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:43:15.270530 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 6 23:43:15.270540 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:43:15.270551 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:43:15.270563 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:43:15.270573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:43:15.270583 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:43:15.270593 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:43:15.270603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:43:15.270613 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:43:15.270623 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:43:15.270640 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:43:15.270651 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:43:15.270664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:43:15.270674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:43:15.270685 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:43:15.270695 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:43:15.270705 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:43:15.270716 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:43:15.270726 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 6 23:43:15.270736 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:43:15.270751 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:43:15.270761 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:43:15.270771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:43:15.270781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:43:15.270791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:43:15.270801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:43:15.270812 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:43:15.270822 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:43:15.270832 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:43:15.270843 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:43:15.270853 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:43:15.270863 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:43:15.270873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:43:15.270883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:43:15.270893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:43:15.270904 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:43:15.270914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:43:15.270924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:43:15.270935 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:43:15.270946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 6 23:43:15.270956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:43:15.270967 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:43:15.270978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:43:15.270988 systemd[1]: Reached target machines.target - Containers. Jul 6 23:43:15.270998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:43:15.271009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:43:15.271021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:43:15.271031 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:43:15.271041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:43:15.271051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:43:15.271061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:43:15.271071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:43:15.271081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:43:15.271092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:43:15.271102 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:43:15.271113 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:43:15.271123 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:43:15.271133 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 6 23:43:15.271144 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:43:15.271153 kernel: fuse: init (API version 7.41) Jul 6 23:43:15.271163 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:43:15.271173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:43:15.271186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:43:15.271197 kernel: loop: module loaded Jul 6 23:43:15.271207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:43:15.271217 kernel: ACPI: bus type drm_connector registered Jul 6 23:43:15.271227 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:43:15.271237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:43:15.271247 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:43:15.271262 systemd[1]: Stopped verity-setup.service. Jul 6 23:43:15.271273 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:43:15.271283 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:43:15.271294 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:43:15.271304 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:43:15.271314 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:43:15.271325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:43:15.271335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:43:15.271347 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 6 23:43:15.271358 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:43:15.271369 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:43:15.271407 systemd-journald[1151]: Collecting audit messages is disabled. Jul 6 23:43:15.271431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:43:15.271441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:43:15.271453 systemd-journald[1151]: Journal started Jul 6 23:43:15.271473 systemd-journald[1151]: Runtime Journal (/run/log/journal/a1277263d44b4caba72109a092725d36) is 6M, max 48.5M, 42.4M free. Jul 6 23:43:15.027997 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:43:15.051578 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:43:15.051985 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:43:15.273859 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:43:15.274912 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:43:15.275110 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:43:15.276290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:43:15.276511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:43:15.277927 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:43:15.278118 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:43:15.279282 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:43:15.279490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:43:15.280708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:43:15.282132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:43:15.283531 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:43:15.284816 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:43:15.297946 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:43:15.300285 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:43:15.302291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:43:15.303234 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:43:15.303264 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:43:15.305223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:43:15.313659 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:43:15.314822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:43:15.316276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:43:15.318466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:43:15.319605 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:43:15.320537 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:43:15.321646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:43:15.326544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:43:15.330256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 6 23:43:15.331735 systemd-journald[1151]: Time spent on flushing to /var/log/journal/a1277263d44b4caba72109a092725d36 is 27.031ms for 883 entries. Jul 6 23:43:15.331735 systemd-journald[1151]: System Journal (/var/log/journal/a1277263d44b4caba72109a092725d36) is 8M, max 195.6M, 187.6M free. Jul 6 23:43:15.370672 systemd-journald[1151]: Received client request to flush runtime journal. Jul 6 23:43:15.370723 kernel: loop0: detected capacity change from 0 to 107312 Jul 6 23:43:15.332277 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:43:15.337715 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:43:15.339259 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:43:15.340697 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:43:15.342736 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:43:15.349294 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:43:15.354681 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:43:15.372569 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:43:15.380340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:43:15.382439 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:43:15.405060 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jul 6 23:43:15.405113 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jul 6 23:43:15.406102 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:43:15.414187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 6 23:43:15.417537 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:43:15.420453 kernel: loop1: detected capacity change from 0 to 138376 Jul 6 23:43:15.453708 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:43:15.455419 kernel: loop2: detected capacity change from 0 to 203944 Jul 6 23:43:15.458190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:43:15.480092 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 6 23:43:15.480111 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 6 23:43:15.484475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:43:15.489102 kernel: loop3: detected capacity change from 0 to 107312 Jul 6 23:43:15.495596 kernel: loop4: detected capacity change from 0 to 138376 Jul 6 23:43:15.513422 kernel: loop5: detected capacity change from 0 to 203944 Jul 6 23:43:15.535595 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:43:15.536033 (sd-merge)[1226]: Merged extensions into '/usr'. Jul 6 23:43:15.541215 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:43:15.541237 systemd[1]: Reloading... Jul 6 23:43:15.589548 zram_generator::config[1251]: No configuration found. Jul 6 23:43:15.682820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:43:15.698435 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:43:15.758319 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:43:15.758986 systemd[1]: Reloading finished in 217 ms. 
Jul 6 23:43:15.792818 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:43:15.794164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:43:15.814068 systemd[1]: Starting ensure-sysext.service... Jul 6 23:43:15.816090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:43:15.833365 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:43:15.833420 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:43:15.833681 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:43:15.833881 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:43:15.834524 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:43:15.834752 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 6 23:43:15.834794 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 6 23:43:15.837283 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:43:15.837299 systemd-tmpfiles[1287]: Skipping /boot Jul 6 23:43:15.840333 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:43:15.840350 systemd[1]: Reloading... Jul 6 23:43:15.846512 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:43:15.846523 systemd-tmpfiles[1287]: Skipping /boot Jul 6 23:43:15.888429 zram_generator::config[1314]: No configuration found. 
Jul 6 23:43:15.969714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:43:16.048326 systemd[1]: Reloading finished in 207 ms. Jul 6 23:43:16.073429 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:43:16.080428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:43:16.090617 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:43:16.093019 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:43:16.095510 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:43:16.098259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:43:16.102541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:43:16.105653 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:43:16.112066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:43:16.118893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:43:16.120896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:43:16.128460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:43:16.129471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:43:16.129641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:43:16.131617 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:43:16.135436 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:43:16.137260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:43:16.140867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:43:16.143108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:43:16.143256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:43:16.145506 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:43:16.146013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:43:16.148949 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Jul 6 23:43:16.153509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:43:16.155694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:43:16.159706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:43:16.162041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:43:16.163242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:43:16.163445 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 6 23:43:16.166250 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:43:16.178618 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:43:16.183983 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:43:16.186604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:43:16.188285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:43:16.188461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:43:16.188985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:43:16.192781 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:43:16.194580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:43:16.194753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:43:16.198709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:43:16.199470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:43:16.201087 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:43:16.213372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:43:16.215178 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:43:16.218860 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:43:16.219034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 6 23:43:16.219625 augenrules[1413]: No rules Jul 6 23:43:16.220807 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:43:16.221010 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:43:16.230142 systemd[1]: Finished ensure-sysext.service. Jul 6 23:43:16.241623 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:43:16.243003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:43:16.243085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:43:16.247234 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:43:16.249609 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:43:16.249847 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:43:16.338458 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:43:16.339697 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:43:16.365232 systemd-networkd[1433]: lo: Link UP Jul 6 23:43:16.365242 systemd-networkd[1433]: lo: Gained carrier Jul 6 23:43:16.365873 systemd-resolved[1353]: Positive Trust Anchors: Jul 6 23:43:16.365885 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:43:16.365917 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:43:16.367900 systemd-networkd[1433]: Enumeration completed Jul 6 23:43:16.369563 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:43:16.373625 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:43:16.374382 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jul 6 23:43:16.375729 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:43:16.377055 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:43:16.379236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:43:16.380250 systemd[1]: Reached target network.target - Network. Jul 6 23:43:16.380964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:43:16.381946 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:43:16.383129 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:43:16.384133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:43:16.385295 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jul 6 23:43:16.386385 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:43:16.387517 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:43:16.388539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:43:16.388572 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:43:16.389279 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:43:16.391417 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:43:16.394224 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:43:16.397818 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:43:16.399339 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:43:16.400666 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:43:16.402109 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:43:16.402117 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:43:16.407569 systemd-networkd[1433]: eth0: Link UP Jul 6 23:43:16.409587 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:43:16.410956 systemd-networkd[1433]: eth0: Gained carrier Jul 6 23:43:16.411063 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:43:16.411179 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:43:16.413525 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 6 23:43:16.423310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:43:16.424450 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:43:16.425301 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:43:16.426162 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:43:16.426193 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:43:16.427489 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:43:16.430863 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:43:16.433674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:43:16.435469 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:43:16.435512 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:43:16.437515 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Jul 6 23:43:16.438863 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:43:16.439653 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:43:16.440526 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:43:16.440606 systemd-timesyncd[1434]: Initial clock synchronization to Sun 2025-07-06 23:43:16.464334 UTC. Jul 6 23:43:16.450291 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:43:16.452584 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:43:16.463102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 6 23:43:16.467197 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:43:16.471269 jq[1456]: false Jul 6 23:43:16.469032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:43:16.474720 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:43:16.476471 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:43:16.476958 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:43:16.480306 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:43:16.483006 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:43:16.486458 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:43:16.490842 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:43:16.492077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:43:16.492276 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:43:16.497656 jq[1480]: true Jul 6 23:43:16.498794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:43:16.499343 extend-filesystems[1457]: Found /dev/vda6 Jul 6 23:43:16.500477 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 6 23:43:16.527232 extend-filesystems[1457]: Found /dev/vda9 Jul 6 23:43:16.541437 jq[1490]: true Jul 6 23:43:16.542374 extend-filesystems[1457]: Checking size of /dev/vda9 Jul 6 23:43:16.562249 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:43:16.577158 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:43:16.577390 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:43:16.589686 dbus-daemon[1451]: [system] SELinux support is enabled Jul 6 23:43:16.592042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:43:16.593336 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:43:16.601268 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:43:16.601532 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:43:16.602577 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:43:16.602699 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 6 23:43:16.605538 tar[1483]: linux-arm64/helm Jul 6 23:43:16.611315 extend-filesystems[1457]: Resized partition /dev/vda9 Jul 6 23:43:16.615261 extend-filesystems[1520]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:43:16.621218 update_engine[1476]: I20250706 23:43:16.621055 1476 main.cc:92] Flatcar Update Engine starting Jul 6 23:43:16.630184 update_engine[1476]: I20250706 23:43:16.629972 1476 update_check_scheduler.cc:74] Next update check in 4m25s Jul 6 23:43:16.633413 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:43:16.640589 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:43:16.641873 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:43:16.644686 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:43:16.694080 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:43:16.730297 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:43:16.730297 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:43:16.730297 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:43:16.728497 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:43:16.741389 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:43:16.743998 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Jul 6 23:43:16.732749 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:43:16.735777 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:43:16.737440 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 6 23:43:16.753603 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:43:16.763949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:43:16.771962 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:43:16.772188 systemd-logind[1475]: New seat seat0. Jul 6 23:43:16.772827 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:43:16.830728 containerd[1497]: time="2025-07-06T23:43:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:43:16.832049 containerd[1497]: time="2025-07-06T23:43:16.831983640Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:43:16.842785 containerd[1497]: time="2025-07-06T23:43:16.842734560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10µs" Jul 6 23:43:16.842785 containerd[1497]: time="2025-07-06T23:43:16.842773920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:43:16.842785 containerd[1497]: time="2025-07-06T23:43:16.842793960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:43:16.843040 containerd[1497]: time="2025-07-06T23:43:16.843009400Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:43:16.843040 containerd[1497]: time="2025-07-06T23:43:16.843037120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:43:16.843102 containerd[1497]: time="2025-07-06T23:43:16.843063120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843223 containerd[1497]: time="2025-07-06T23:43:16.843193320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843223 containerd[1497]: time="2025-07-06T23:43:16.843215760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843533 containerd[1497]: time="2025-07-06T23:43:16.843508000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843556 containerd[1497]: time="2025-07-06T23:43:16.843532520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843556 containerd[1497]: time="2025-07-06T23:43:16.843544480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843556 containerd[1497]: time="2025-07-06T23:43:16.843552560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:43:16.843722 containerd[1497]: time="2025-07-06T23:43:16.843699800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:43:16.844041 containerd[1497]: time="2025-07-06T23:43:16.843967440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:43:16.844071 containerd[1497]: time="2025-07-06T23:43:16.844058240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:43:16.844092 containerd[1497]: time="2025-07-06T23:43:16.844072600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:43:16.844128 containerd[1497]: time="2025-07-06T23:43:16.844110360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:43:16.844861 containerd[1497]: time="2025-07-06T23:43:16.844498440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:43:16.844861 containerd[1497]: time="2025-07-06T23:43:16.844591560Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:43:16.848661 containerd[1497]: time="2025-07-06T23:43:16.848624720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:43:16.848806 containerd[1497]: time="2025-07-06T23:43:16.848788520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:43:16.848869 containerd[1497]: time="2025-07-06T23:43:16.848855680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:43:16.848932 containerd[1497]: time="2025-07-06T23:43:16.848920000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:43:16.848994 containerd[1497]: time="2025-07-06T23:43:16.848979800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:43:16.849043 containerd[1497]: time="2025-07-06T23:43:16.849031720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:43:16.849094 containerd[1497]: time="2025-07-06T23:43:16.849081600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 
6 23:43:16.849145 containerd[1497]: time="2025-07-06T23:43:16.849132440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:43:16.849196 containerd[1497]: time="2025-07-06T23:43:16.849183520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:43:16.849247 containerd[1497]: time="2025-07-06T23:43:16.849235120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:43:16.849295 containerd[1497]: time="2025-07-06T23:43:16.849282360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:43:16.849356 containerd[1497]: time="2025-07-06T23:43:16.849343000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:43:16.849545 containerd[1497]: time="2025-07-06T23:43:16.849523040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:43:16.849617 containerd[1497]: time="2025-07-06T23:43:16.849603120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:43:16.849691 containerd[1497]: time="2025-07-06T23:43:16.849676240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:43:16.849743 containerd[1497]: time="2025-07-06T23:43:16.849730920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:43:16.849799 containerd[1497]: time="2025-07-06T23:43:16.849786280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:43:16.849872 containerd[1497]: time="2025-07-06T23:43:16.849858280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:43:16.849924 containerd[1497]: 
time="2025-07-06T23:43:16.849912320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:43:16.849980 containerd[1497]: time="2025-07-06T23:43:16.849967960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:43:16.850037 containerd[1497]: time="2025-07-06T23:43:16.850024320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:43:16.850096 containerd[1497]: time="2025-07-06T23:43:16.850083440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:43:16.850155 containerd[1497]: time="2025-07-06T23:43:16.850142280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:43:16.850432 containerd[1497]: time="2025-07-06T23:43:16.850392840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:43:16.851956 containerd[1497]: time="2025-07-06T23:43:16.850493320Z" level=info msg="Start snapshots syncer" Jul 6 23:43:16.851956 containerd[1497]: time="2025-07-06T23:43:16.850535080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:43:16.851956 containerd[1497]: time="2025-07-06T23:43:16.850750200Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.850800840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.850876160Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851000960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851023800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851036480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851047640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851058520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851068160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851078520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851103320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851114360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851124320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851161080Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:43:16.852107 containerd[1497]: time="2025-07-06T23:43:16.851174280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851183520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851193280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851200880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851209880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851219200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851311720Z" level=info msg="runtime interface created" Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851317240Z" level=info msg="created NRI interface" Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851329200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851340240Z" level=info msg="Connect containerd service" Jul 6 23:43:16.852321 containerd[1497]: time="2025-07-06T23:43:16.851366440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:43:16.852852 containerd[1497]: 
time="2025-07-06T23:43:16.852814040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:43:16.961529 containerd[1497]: time="2025-07-06T23:43:16.961476280Z" level=info msg="Start subscribing containerd event" Jul 6 23:43:16.962100 containerd[1497]: time="2025-07-06T23:43:16.962077360Z" level=info msg="Start recovering state" Jul 6 23:43:16.962262 containerd[1497]: time="2025-07-06T23:43:16.962247640Z" level=info msg="Start event monitor" Jul 6 23:43:16.962417 containerd[1497]: time="2025-07-06T23:43:16.962386280Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:43:16.962478 containerd[1497]: time="2025-07-06T23:43:16.962465320Z" level=info msg="Start streaming server" Jul 6 23:43:16.962524 containerd[1497]: time="2025-07-06T23:43:16.962513640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:43:16.962582 containerd[1497]: time="2025-07-06T23:43:16.962570560Z" level=info msg="runtime interface starting up..." Jul 6 23:43:16.962694 containerd[1497]: time="2025-07-06T23:43:16.962679800Z" level=info msg="starting plugins..." Jul 6 23:43:16.962768 containerd[1497]: time="2025-07-06T23:43:16.962756360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:43:16.962907 containerd[1497]: time="2025-07-06T23:43:16.961977320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:43:16.963164 containerd[1497]: time="2025-07-06T23:43:16.963136520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:43:16.965277 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 6 23:43:16.966512 containerd[1497]: time="2025-07-06T23:43:16.965995760Z" level=info msg="containerd successfully booted in 0.135668s" Jul 6 23:43:17.034977 tar[1483]: linux-arm64/LICENSE Jul 6 23:43:17.035183 tar[1483]: linux-arm64/README.md Jul 6 23:43:17.056461 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:43:17.369582 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:43:17.389229 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:43:17.391881 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:43:17.411114 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:43:17.411326 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:43:17.413983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:43:17.439818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:43:17.442508 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:43:17.444428 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:43:17.445518 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:43:17.449554 systemd-networkd[1433]: eth0: Gained IPv6LL Jul 6 23:43:17.451891 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:43:17.453309 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:43:17.455667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:43:17.457800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:43:17.460191 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:43:17.485700 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:43:17.486496 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 6 23:43:17.488196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:43:17.491851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:43:18.027242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:43:18.028903 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:43:18.030028 systemd[1]: Startup finished in 2.106s (kernel) + 5.974s (initrd) + 3.475s (userspace) = 11.555s. Jul 6 23:43:18.031249 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:43:18.488679 kubelet[1609]: E0706 23:43:18.488575 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:43:18.491246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:43:18.491427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:43:18.492576 systemd[1]: kubelet.service: Consumed 860ms CPU time, 257.3M memory peak. Jul 6 23:43:21.868211 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:43:21.872037 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:47530.service - OpenSSH per-connection server daemon (10.0.0.1:47530). Jul 6 23:43:21.972204 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 47530 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:43:21.972433 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:43:21.987261 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 6 23:43:21.990389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:43:21.992349 systemd-logind[1475]: New session 1 of user core. Jul 6 23:43:22.015460 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:43:22.018297 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:43:22.038544 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:43:22.040712 systemd-logind[1475]: New session c1 of user core. Jul 6 23:43:22.151738 systemd[1626]: Queued start job for default target default.target. Jul 6 23:43:22.167466 systemd[1626]: Created slice app.slice - User Application Slice. Jul 6 23:43:22.167493 systemd[1626]: Reached target paths.target - Paths. Jul 6 23:43:22.167531 systemd[1626]: Reached target timers.target - Timers. Jul 6 23:43:22.168808 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:43:22.177939 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:43:22.178011 systemd[1626]: Reached target sockets.target - Sockets. Jul 6 23:43:22.178054 systemd[1626]: Reached target basic.target - Basic System. Jul 6 23:43:22.178088 systemd[1626]: Reached target default.target - Main User Target. Jul 6 23:43:22.178117 systemd[1626]: Startup finished in 131ms. Jul 6 23:43:22.178266 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:43:22.179704 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:43:22.250655 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:47538.service - OpenSSH per-connection server daemon (10.0.0.1:47538). 
Jul 6 23:43:22.303315 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 47538 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:43:22.304720 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:43:22.308758 systemd-logind[1475]: New session 2 of user core. Jul 6 23:43:22.320571 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:43:22.371271 sshd[1639]: Connection closed by 10.0.0.1 port 47538 Jul 6 23:43:22.371731 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jul 6 23:43:22.387841 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:47538.service: Deactivated successfully. Jul 6 23:43:22.390946 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:43:22.391646 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:43:22.393758 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:47566.service - OpenSSH per-connection server daemon (10.0.0.1:47566). Jul 6 23:43:22.394640 systemd-logind[1475]: Removed session 2. Jul 6 23:43:22.447418 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 47566 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:43:22.448732 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:43:22.452794 systemd-logind[1475]: New session 3 of user core. Jul 6 23:43:22.464586 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:43:22.511941 sshd[1647]: Connection closed by 10.0.0.1 port 47566 Jul 6 23:43:22.512211 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jul 6 23:43:22.522677 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:47566.service: Deactivated successfully. Jul 6 23:43:22.524667 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:43:22.525243 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. 
Jul 6 23:43:22.527510 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:38346.service - OpenSSH per-connection server daemon (10.0.0.1:38346).
Jul 6 23:43:22.528514 systemd-logind[1475]: Removed session 3.
Jul 6 23:43:22.584377 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 38346 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:43:22.584905 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:43:22.589620 systemd-logind[1475]: New session 4 of user core.
Jul 6 23:43:22.612609 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:43:22.664836 sshd[1655]: Connection closed by 10.0.0.1 port 38346
Jul 6 23:43:22.665159 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jul 6 23:43:22.672715 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:38346.service: Deactivated successfully.
Jul 6 23:43:22.674954 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:43:22.675623 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:43:22.677771 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:38362.service - OpenSSH per-connection server daemon (10.0.0.1:38362).
Jul 6 23:43:22.678679 systemd-logind[1475]: Removed session 4.
Jul 6 23:43:22.730926 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:43:22.732155 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:43:22.736226 systemd-logind[1475]: New session 5 of user core.
Jul 6 23:43:22.753589 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:43:22.816222 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:43:22.816516 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:43:22.831063 sudo[1664]: pam_unix(sudo:session): session closed for user root
Jul 6 23:43:22.833309 sshd[1663]: Connection closed by 10.0.0.1 port 38362
Jul 6 23:43:22.833099 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Jul 6 23:43:22.848032 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:38362.service: Deactivated successfully.
Jul 6 23:43:22.851006 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:43:22.852632 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:43:22.855737 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:38388.service - OpenSSH per-connection server daemon (10.0.0.1:38388).
Jul 6 23:43:22.856353 systemd-logind[1475]: Removed session 5.
Jul 6 23:43:22.907197 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 38388 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:43:22.908584 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:43:22.913421 systemd-logind[1475]: New session 6 of user core.
Jul 6 23:43:22.928584 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:43:22.980806 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:43:22.981442 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:43:23.052327 sudo[1675]: pam_unix(sudo:session): session closed for user root
Jul 6 23:43:23.057754 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:43:23.058015 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:43:23.068590 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:43:23.116706 augenrules[1697]: No rules
Jul 6 23:43:23.118028 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:43:23.118256 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:43:23.119672 sudo[1674]: pam_unix(sudo:session): session closed for user root
Jul 6 23:43:23.121086 sshd[1673]: Connection closed by 10.0.0.1 port 38388
Jul 6 23:43:23.121389 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Jul 6 23:43:23.133722 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:38388.service: Deactivated successfully.
Jul 6 23:43:23.135217 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:43:23.135985 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:43:23.138367 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:38394.service - OpenSSH per-connection server daemon (10.0.0.1:38394).
Jul 6 23:43:23.139457 systemd-logind[1475]: Removed session 6.
Jul 6 23:43:23.195804 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 38394 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:43:23.198378 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:43:23.207079 systemd-logind[1475]: New session 7 of user core.
Jul 6 23:43:23.221618 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:43:23.275016 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:43:23.275643 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:43:23.678888 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:43:23.689807 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:43:23.990313 dockerd[1730]: time="2025-07-06T23:43:23.990171026Z" level=info msg="Starting up"
Jul 6 23:43:23.991350 dockerd[1730]: time="2025-07-06T23:43:23.991317613Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:43:24.021967 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3287801652-merged.mount: Deactivated successfully.
Jul 6 23:43:24.040034 systemd[1]: var-lib-docker-metacopy\x2dcheck3955598163-merged.mount: Deactivated successfully.
Jul 6 23:43:24.065715 dockerd[1730]: time="2025-07-06T23:43:24.065672632Z" level=info msg="Loading containers: start."
Jul 6 23:43:24.076443 kernel: Initializing XFRM netlink socket
Jul 6 23:43:24.325413 systemd-networkd[1433]: docker0: Link UP
Jul 6 23:43:24.336390 dockerd[1730]: time="2025-07-06T23:43:24.336333065Z" level=info msg="Loading containers: done."
Jul 6 23:43:24.354213 dockerd[1730]: time="2025-07-06T23:43:24.354148989Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:43:24.354370 dockerd[1730]: time="2025-07-06T23:43:24.354252691Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:43:24.354415 dockerd[1730]: time="2025-07-06T23:43:24.354366679Z" level=info msg="Initializing buildkit"
Jul 6 23:43:24.382085 dockerd[1730]: time="2025-07-06T23:43:24.382031488Z" level=info msg="Completed buildkit initialization"
Jul 6 23:43:24.388500 dockerd[1730]: time="2025-07-06T23:43:24.388446521Z" level=info msg="Daemon has completed initialization"
Jul 6 23:43:24.388590 dockerd[1730]: time="2025-07-06T23:43:24.388526929Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:43:24.388997 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:43:24.992627 containerd[1497]: time="2025-07-06T23:43:24.992584838Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:43:25.013837 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1835453425-merged.mount: Deactivated successfully.
Jul 6 23:43:25.541019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504934973.mount: Deactivated successfully.
Jul 6 23:43:26.323754 containerd[1497]: time="2025-07-06T23:43:26.323676512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:26.324316 containerd[1497]: time="2025-07-06T23:43:26.324273726Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 6 23:43:26.325008 containerd[1497]: time="2025-07-06T23:43:26.324972999Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:26.327347 containerd[1497]: time="2025-07-06T23:43:26.327308828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:26.328433 containerd[1497]: time="2025-07-06T23:43:26.328386913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.335755407s"
Jul 6 23:43:26.328522 containerd[1497]: time="2025-07-06T23:43:26.328507300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 6 23:43:26.331879 containerd[1497]: time="2025-07-06T23:43:26.331837248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:43:27.417878 containerd[1497]: time="2025-07-06T23:43:27.417825339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:27.418631 containerd[1497]: time="2025-07-06T23:43:27.418572865Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 6 23:43:27.420082 containerd[1497]: time="2025-07-06T23:43:27.419518779Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:27.422810 containerd[1497]: time="2025-07-06T23:43:27.422769305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:27.423810 containerd[1497]: time="2025-07-06T23:43:27.423763445Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.091888456s"
Jul 6 23:43:27.423810 containerd[1497]: time="2025-07-06T23:43:27.423804227Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 6 23:43:27.426194 containerd[1497]: time="2025-07-06T23:43:27.426076141Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:43:28.413925 containerd[1497]: time="2025-07-06T23:43:28.413874241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:28.414651 containerd[1497]: time="2025-07-06T23:43:28.414626557Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 6 23:43:28.415143 containerd[1497]: time="2025-07-06T23:43:28.415117935Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:28.417918 containerd[1497]: time="2025-07-06T23:43:28.417880149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:28.418843 containerd[1497]: time="2025-07-06T23:43:28.418809678Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 992.690274ms"
Jul 6 23:43:28.418891 containerd[1497]: time="2025-07-06T23:43:28.418854862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 6 23:43:28.419423 containerd[1497]: time="2025-07-06T23:43:28.419248949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:43:28.669202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:43:28.670607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:43:28.802237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:43:28.805786 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:43:28.844640 kubelet[2012]: E0706 23:43:28.844522 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:43:28.847839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:43:28.847978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:43:28.848531 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106.8M memory peak.
Jul 6 23:43:29.378688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769402414.mount: Deactivated successfully.
Jul 6 23:43:29.688823 containerd[1497]: time="2025-07-06T23:43:29.688772516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:29.689777 containerd[1497]: time="2025-07-06T23:43:29.689344567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 6 23:43:29.690378 containerd[1497]: time="2025-07-06T23:43:29.690329509Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:29.691730 containerd[1497]: time="2025-07-06T23:43:29.691689043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:29.692429 containerd[1497]: time="2025-07-06T23:43:29.692379635Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.273105232s"
Jul 6 23:43:29.692470 containerd[1497]: time="2025-07-06T23:43:29.692429500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 6 23:43:29.693178 containerd[1497]: time="2025-07-06T23:43:29.693096080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:43:30.179769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652105585.mount: Deactivated successfully.
Jul 6 23:43:30.875716 containerd[1497]: time="2025-07-06T23:43:30.875599766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:30.876880 containerd[1497]: time="2025-07-06T23:43:30.876853505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 6 23:43:30.878122 containerd[1497]: time="2025-07-06T23:43:30.878066945Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:30.880507 containerd[1497]: time="2025-07-06T23:43:30.880478456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:30.882228 containerd[1497]: time="2025-07-06T23:43:30.882191542Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.189063846s"
Jul 6 23:43:30.882275 containerd[1497]: time="2025-07-06T23:43:30.882229320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:43:30.882778 containerd[1497]: time="2025-07-06T23:43:30.882741974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:43:31.313061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825612205.mount: Deactivated successfully.
Jul 6 23:43:31.318184 containerd[1497]: time="2025-07-06T23:43:31.318134036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:43:31.318952 containerd[1497]: time="2025-07-06T23:43:31.318918772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:43:31.319656 containerd[1497]: time="2025-07-06T23:43:31.319608422Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:43:31.321818 containerd[1497]: time="2025-07-06T23:43:31.321777020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:43:31.322507 containerd[1497]: time="2025-07-06T23:43:31.322477835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 439.702605ms"
Jul 6 23:43:31.322606 containerd[1497]: time="2025-07-06T23:43:31.322591489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:43:31.323111 containerd[1497]: time="2025-07-06T23:43:31.323082444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:43:31.851818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737621666.mount: Deactivated successfully.
Jul 6 23:43:33.171652 containerd[1497]: time="2025-07-06T23:43:33.171593130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:33.172015 containerd[1497]: time="2025-07-06T23:43:33.171993029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 6 23:43:33.176363 containerd[1497]: time="2025-07-06T23:43:33.176110278Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:33.179237 containerd[1497]: time="2025-07-06T23:43:33.179200826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:43:33.181121 containerd[1497]: time="2025-07-06T23:43:33.181093836Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.857977096s"
Jul 6 23:43:33.181254 containerd[1497]: time="2025-07-06T23:43:33.181234579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 6 23:43:37.493543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:43:37.493710 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106.8M memory peak.
Jul 6 23:43:37.495867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:43:37.519305 systemd[1]: Reload requested from client PID 2170 ('systemctl') (unit session-7.scope)...
Jul 6 23:43:37.519321 systemd[1]: Reloading...
Jul 6 23:43:37.596427 zram_generator::config[2215]: No configuration found.
Jul 6 23:43:37.673482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:43:37.781648 systemd[1]: Reloading finished in 261 ms.
Jul 6 23:43:37.833976 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:43:37.834063 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:43:37.834538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:43:37.834596 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak.
Jul 6 23:43:37.836545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:43:37.953006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:43:37.956658 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:43:37.994673 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:43:37.994673 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:43:37.994673 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:43:37.995024 kubelet[2257]: I0706 23:43:37.994728 2257 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:43:38.533777 kubelet[2257]: I0706 23:43:38.533730 2257 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:43:38.534019 kubelet[2257]: I0706 23:43:38.533973 2257 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:43:38.534559 kubelet[2257]: I0706 23:43:38.534542 2257 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:43:38.585152 kubelet[2257]: E0706 23:43:38.585079 2257 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:43:38.586121 kubelet[2257]: I0706 23:43:38.586024 2257 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:43:38.593124 kubelet[2257]: I0706 23:43:38.593092 2257 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:43:38.596722 kubelet[2257]: I0706 23:43:38.596695 2257 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:43:38.597554 kubelet[2257]: I0706 23:43:38.597518 2257 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:43:38.597703 kubelet[2257]: I0706 23:43:38.597666 2257 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:43:38.597894 kubelet[2257]: I0706 23:43:38.597694 2257 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:43:38.597997 kubelet[2257]: I0706 23:43:38.597949 2257 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:43:38.597997 kubelet[2257]: I0706 23:43:38.597961 2257 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:43:38.598267 kubelet[2257]: I0706 23:43:38.598230 2257 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:43:38.601071 kubelet[2257]: I0706 23:43:38.600917 2257 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:43:38.601071 kubelet[2257]: I0706 23:43:38.600946 2257 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:43:38.601071 kubelet[2257]: I0706 23:43:38.600969 2257 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:43:38.601071 kubelet[2257]: I0706 23:43:38.601046 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:43:38.605044 kubelet[2257]: W0706 23:43:38.604966 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Jul 6 23:43:38.605128 kubelet[2257]: E0706 23:43:38.605054 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:43:38.605128 kubelet[2257]: W0706 23:43:38.604986 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Jul 6 23:43:38.605128 kubelet[2257]: E0706 23:43:38.605079 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:43:38.607221 kubelet[2257]: I0706 23:43:38.606239 2257 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:43:38.607221 kubelet[2257]: I0706 23:43:38.606994 2257 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:43:38.607221 kubelet[2257]: W0706 23:43:38.607154 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:43:38.609568 kubelet[2257]: I0706 23:43:38.609454 2257 server.go:1274] "Started kubelet"
Jul 6 23:43:38.609888 kubelet[2257]: I0706 23:43:38.609826 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:43:38.610206 kubelet[2257]: I0706 23:43:38.610175 2257 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:43:38.610452 kubelet[2257]: I0706 23:43:38.610314 2257 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:43:38.610762 kubelet[2257]: I0706 23:43:38.610700 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:43:38.612210 kubelet[2257]: I0706 23:43:38.611379 2257 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:43:38.612769 kubelet[2257]: I0706 23:43:38.612483 2257 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:43:38.613622 kubelet[2257]: I0706 23:43:38.613596 2257 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:43:38.613702 kubelet[2257]: I0706 23:43:38.613694 2257 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:43:38.613785 kubelet[2257]: I0706 23:43:38.613769 2257 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:43:38.614394 kubelet[2257]: E0706 23:43:38.614363 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:43:38.614907 kubelet[2257]: E0706 23:43:38.614853 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="200ms"
Jul 6 23:43:38.615062 kubelet[2257]: W0706 23:43:38.614977 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Jul 6 23:43:38.615062 kubelet[2257]: E0706 23:43:38.615026 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:43:38.615664 kubelet[2257]: E0706 23:43:38.614263 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.127:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.127:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fce20659bb266 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:43:38.609390182 +0000 UTC m=+0.649828143,LastTimestamp:2025-07-06 23:43:38.609390182 +0000 UTC m=+0.649828143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 6 23:43:38.616460 kubelet[2257]: I0706 23:43:38.616194 2257 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:43:38.616460 kubelet[2257]: I0706 23:43:38.616211 2257 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:43:38.616460 kubelet[2257]: I0706 23:43:38.616296 2257 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:43:38.616990 kubelet[2257]: E0706 23:43:38.616929 2257 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:43:38.626270 kubelet[2257]: I0706 23:43:38.626103 2257 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:43:38.626270 kubelet[2257]: I0706 23:43:38.626119 2257 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:43:38.626270 kubelet[2257]: I0706 23:43:38.626136 2257 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:43:38.627999 kubelet[2257]: I0706 23:43:38.627944 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:43:38.628907 kubelet[2257]: I0706 23:43:38.628881 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:43:38.628907 kubelet[2257]: I0706 23:43:38.628902 2257 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:43:38.628968 kubelet[2257]: I0706 23:43:38.628919 2257 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:43:38.628990 kubelet[2257]: E0706 23:43:38.628956 2257 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:43:38.636308 kubelet[2257]: W0706 23:43:38.636243 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused
Jul 6 23:43:38.636308 kubelet[2257]: E0706 23:43:38.636311 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443:
connect: connection refused" logger="UnhandledError" Jul 6 23:43:38.658990 kubelet[2257]: I0706 23:43:38.658950 2257 policy_none.go:49] "None policy: Start" Jul 6 23:43:38.659910 kubelet[2257]: I0706 23:43:38.659890 2257 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:43:38.660017 kubelet[2257]: I0706 23:43:38.659934 2257 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:43:38.673096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:43:38.692311 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:43:38.695327 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:43:38.710616 kubelet[2257]: I0706 23:43:38.710532 2257 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:43:38.710797 kubelet[2257]: I0706 23:43:38.710766 2257 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:43:38.710846 kubelet[2257]: I0706 23:43:38.710783 2257 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:43:38.711179 kubelet[2257]: I0706 23:43:38.711104 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:43:38.714680 kubelet[2257]: E0706 23:43:38.714621 2257 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:43:38.738240 systemd[1]: Created slice kubepods-burstable-pod13db430b4a1b0f36936d5b0d82854b9d.slice - libcontainer container kubepods-burstable-pod13db430b4a1b0f36936d5b0d82854b9d.slice. Jul 6 23:43:38.758711 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 6 23:43:38.775187 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 6 23:43:38.812738 kubelet[2257]: I0706 23:43:38.812077 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:43:38.812967 kubelet[2257]: E0706 23:43:38.812823 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 6 23:43:38.815080 kubelet[2257]: I0706 23:43:38.814919 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:38.815080 kubelet[2257]: I0706 23:43:38.814962 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:43:38.815080 kubelet[2257]: I0706 23:43:38.814983 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:38.815080 kubelet[2257]: I0706 23:43:38.815006 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:38.815080 kubelet[2257]: I0706 23:43:38.815023 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:38.815284 kubelet[2257]: I0706 23:43:38.815039 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:38.815284 kubelet[2257]: I0706 23:43:38.815054 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:38.815284 kubelet[2257]: I0706 23:43:38.815073 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:38.815284 kubelet[2257]: I0706 23:43:38.815087 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:38.815284 kubelet[2257]: E0706 23:43:38.815261 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="400ms" Jul 6 23:43:39.014560 kubelet[2257]: I0706 23:43:39.014527 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:43:39.014866 kubelet[2257]: E0706 23:43:39.014839 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 6 23:43:39.057046 containerd[1497]: time="2025-07-06T23:43:39.056993767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13db430b4a1b0f36936d5b0d82854b9d,Namespace:kube-system,Attempt:0,}" Jul 6 23:43:39.072649 containerd[1497]: time="2025-07-06T23:43:39.072506846Z" level=info msg="connecting to shim e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3" address="unix:///run/containerd/s/47b43d4606c5cfa06e4f5172d9ec13b8a1eafb03807269a78e3720b124760aa5" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:43:39.073469 containerd[1497]: time="2025-07-06T23:43:39.073421505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:43:39.079248 containerd[1497]: time="2025-07-06T23:43:39.079036109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:43:39.093501 
containerd[1497]: time="2025-07-06T23:43:39.093458023Z" level=info msg="connecting to shim 3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7" address="unix:///run/containerd/s/9f89cc2ba1eaaf5e8f31d0c53f3971fa32aaff7169cad452e3a2d484500b81e8" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:43:39.104103 containerd[1497]: time="2025-07-06T23:43:39.104061318Z" level=info msg="connecting to shim 1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220" address="unix:///run/containerd/s/dc8250008a84f8183a7cc9a15d6ed8a75ac39d73d0a67f26f82513d63c867348" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:43:39.106558 systemd[1]: Started cri-containerd-e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3.scope - libcontainer container e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3. Jul 6 23:43:39.129630 systemd[1]: Started cri-containerd-3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7.scope - libcontainer container 3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7. Jul 6 23:43:39.134302 systemd[1]: Started cri-containerd-1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220.scope - libcontainer container 1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220. 
Jul 6 23:43:39.150797 containerd[1497]: time="2025-07-06T23:43:39.150726280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13db430b4a1b0f36936d5b0d82854b9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3\"" Jul 6 23:43:39.155094 containerd[1497]: time="2025-07-06T23:43:39.155050165Z" level=info msg="CreateContainer within sandbox \"e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:43:39.163785 containerd[1497]: time="2025-07-06T23:43:39.163737669Z" level=info msg="Container 34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:43:39.171868 containerd[1497]: time="2025-07-06T23:43:39.171737239Z" level=info msg="CreateContainer within sandbox \"e835fbe6235747b542fa321f3382859ec7177dc471845fed792678148b6c8ed3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f\"" Jul 6 23:43:39.172339 containerd[1497]: time="2025-07-06T23:43:39.172314373Z" level=info msg="StartContainer for \"34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f\"" Jul 6 23:43:39.173601 containerd[1497]: time="2025-07-06T23:43:39.173546110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7\"" Jul 6 23:43:39.173976 containerd[1497]: time="2025-07-06T23:43:39.173844261Z" level=info msg="connecting to shim 34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f" address="unix:///run/containerd/s/47b43d4606c5cfa06e4f5172d9ec13b8a1eafb03807269a78e3720b124760aa5" protocol=ttrpc version=3 Jul 6 23:43:39.177737 containerd[1497]: 
time="2025-07-06T23:43:39.177704534Z" level=info msg="CreateContainer within sandbox \"3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:43:39.179341 containerd[1497]: time="2025-07-06T23:43:39.179308209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220\"" Jul 6 23:43:39.181722 containerd[1497]: time="2025-07-06T23:43:39.181681930Z" level=info msg="CreateContainer within sandbox \"1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:43:39.190113 containerd[1497]: time="2025-07-06T23:43:39.190006460Z" level=info msg="Container 5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:43:39.193601 systemd[1]: Started cri-containerd-34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f.scope - libcontainer container 34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f. 
Jul 6 23:43:39.196189 containerd[1497]: time="2025-07-06T23:43:39.196150741Z" level=info msg="Container f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:43:39.199582 containerd[1497]: time="2025-07-06T23:43:39.199471133Z" level=info msg="CreateContainer within sandbox \"3427faca1b71a8a8c168a222acfc5b0d72bcb96cea74bcbde37b566ff74d6cd7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b\"" Jul 6 23:43:39.200087 containerd[1497]: time="2025-07-06T23:43:39.200062753Z" level=info msg="StartContainer for \"5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b\"" Jul 6 23:43:39.201755 containerd[1497]: time="2025-07-06T23:43:39.201712125Z" level=info msg="connecting to shim 5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b" address="unix:///run/containerd/s/9f89cc2ba1eaaf5e8f31d0c53f3971fa32aaff7169cad452e3a2d484500b81e8" protocol=ttrpc version=3 Jul 6 23:43:39.202257 containerd[1497]: time="2025-07-06T23:43:39.202209910Z" level=info msg="CreateContainer within sandbox \"1727dd50e2939f5e180bb8c55529015b65572cb481cbf3b689de94ef82c1d220\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7\"" Jul 6 23:43:39.202838 containerd[1497]: time="2025-07-06T23:43:39.202812574Z" level=info msg="StartContainer for \"f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7\"" Jul 6 23:43:39.203977 containerd[1497]: time="2025-07-06T23:43:39.203946114Z" level=info msg="connecting to shim f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7" address="unix:///run/containerd/s/dc8250008a84f8183a7cc9a15d6ed8a75ac39d73d0a67f26f82513d63c867348" protocol=ttrpc version=3 Jul 6 23:43:39.216976 kubelet[2257]: E0706 23:43:39.216925 2257 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="800ms" Jul 6 23:43:39.228614 systemd[1]: Started cri-containerd-f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7.scope - libcontainer container f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7. Jul 6 23:43:39.235050 systemd[1]: Started cri-containerd-5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b.scope - libcontainer container 5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b. Jul 6 23:43:39.241591 containerd[1497]: time="2025-07-06T23:43:39.241537588Z" level=info msg="StartContainer for \"34f6b825cae693cdd27d6724380345e2eb044290d57f7d7e7f2bbae21c0cfc5f\" returns successfully" Jul 6 23:43:39.327375 containerd[1497]: time="2025-07-06T23:43:39.323881073Z" level=info msg="StartContainer for \"5ce47c4c00fa77051df1c36d39a228267ef48b36940323bcb2585de3fba9092b\" returns successfully" Jul 6 23:43:39.329288 containerd[1497]: time="2025-07-06T23:43:39.329056634Z" level=info msg="StartContainer for \"f4aa4ea0b865091aec5d7b2ef8462e2e509ca14ec9fafaa3d9a35857022bdfa7\" returns successfully" Jul 6 23:43:39.420599 kubelet[2257]: I0706 23:43:39.416284 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:43:39.420599 kubelet[2257]: E0706 23:43:39.416680 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 6 23:43:40.217981 kubelet[2257]: I0706 23:43:40.217938 2257 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:43:40.809120 kubelet[2257]: E0706 23:43:40.809076 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 
23:43:40.973990 kubelet[2257]: I0706 23:43:40.973933 2257 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:43:40.973990 kubelet[2257]: E0706 23:43:40.973978 2257 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:43:41.605746 kubelet[2257]: I0706 23:43:41.605707 2257 apiserver.go:52] "Watching apiserver" Jul 6 23:43:41.614561 kubelet[2257]: I0706 23:43:41.614528 2257 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:43:41.651453 kubelet[2257]: E0706 23:43:41.651414 2257 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:42.923419 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)... Jul 6 23:43:42.923435 systemd[1]: Reloading... Jul 6 23:43:43.000450 zram_generator::config[2581]: No configuration found. Jul 6 23:43:43.070546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:43:43.191107 systemd[1]: Reloading finished in 267 ms. Jul 6 23:43:43.219652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:43:43.238705 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:43:43.239010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:43:43.239082 systemd[1]: kubelet.service: Consumed 1.048s CPU time, 128.8M memory peak. Jul 6 23:43:43.241016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:43:43.406386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:43:43.415797 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:43:43.453672 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:43:43.453672 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:43:43.453672 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:43:43.453672 kubelet[2620]: I0706 23:43:43.453641 2620 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:43:43.460820 kubelet[2620]: I0706 23:43:43.460776 2620 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:43:43.460820 kubelet[2620]: I0706 23:43:43.460806 2620 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:43:43.461629 kubelet[2620]: I0706 23:43:43.461057 2620 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:43:43.462693 kubelet[2620]: I0706 23:43:43.462673 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 6 23:43:43.465452 kubelet[2620]: I0706 23:43:43.465359 2620 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:43:43.471248 kubelet[2620]: I0706 23:43:43.471228 2620 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:43:43.473941 kubelet[2620]: I0706 23:43:43.473911 2620 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:43:43.474077 kubelet[2620]: I0706 23:43:43.474054 2620 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:43:43.474215 kubelet[2620]: I0706 23:43:43.474180 2620 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:43:43.474384 kubelet[2620]: I0706 23:43:43.474211 2620 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inod
esFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:43:43.474504 kubelet[2620]: I0706 23:43:43.474388 2620 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:43:43.474504 kubelet[2620]: I0706 23:43:43.474413 2620 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:43:43.474504 kubelet[2620]: I0706 23:43:43.474449 2620 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:43:43.474592 kubelet[2620]: I0706 23:43:43.474558 2620 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:43:43.474592 kubelet[2620]: I0706 23:43:43.474570 2620 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:43:43.474592 kubelet[2620]: I0706 23:43:43.474590 2620 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:43:43.474792 kubelet[2620]: I0706 23:43:43.474603 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:43:43.475646 kubelet[2620]: I0706 23:43:43.475620 2620 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:43:43.476376 kubelet[2620]: I0706 23:43:43.476355 2620 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:43:43.477059 kubelet[2620]: I0706 23:43:43.477042 2620 server.go:1274] "Started kubelet" Jul 6 23:43:43.477686 kubelet[2620]: I0706 23:43:43.477622 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:43:43.478732 
kubelet[2620]: I0706 23:43:43.478702 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:43:43.479269 kubelet[2620]: I0706 23:43:43.479209 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:43:43.479739 kubelet[2620]: I0706 23:43:43.479679 2620 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:43:43.480325 kubelet[2620]: I0706 23:43:43.480307 2620 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:43:43.480951 kubelet[2620]: I0706 23:43:43.480916 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:43:43.481646 kubelet[2620]: I0706 23:43:43.481624 2620 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:43:43.482149 kubelet[2620]: E0706 23:43:43.482112 2620 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:43:43.482205 kubelet[2620]: I0706 23:43:43.482177 2620 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:43:43.482625 kubelet[2620]: I0706 23:43:43.482310 2620 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:43:43.484606 kubelet[2620]: I0706 23:43:43.484578 2620 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:43:43.484711 kubelet[2620]: I0706 23:43:43.484686 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:43:43.486097 kubelet[2620]: I0706 23:43:43.486074 2620 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:43:43.502891 kubelet[2620]: I0706 23:43:43.502850 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 6 23:43:43.504673 kubelet[2620]: I0706 23:43:43.504649 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:43:43.504791 kubelet[2620]: I0706 23:43:43.504780 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:43:43.504858 kubelet[2620]: I0706 23:43:43.504849 2620 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:43:43.504953 kubelet[2620]: E0706 23:43:43.504935 2620 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:43:43.536993 kubelet[2620]: I0706 23:43:43.536963 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:43:43.536993 kubelet[2620]: I0706 23:43:43.536985 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:43:43.537155 kubelet[2620]: I0706 23:43:43.537006 2620 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:43:43.537208 kubelet[2620]: I0706 23:43:43.537184 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:43:43.537238 kubelet[2620]: I0706 23:43:43.537201 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:43:43.537238 kubelet[2620]: I0706 23:43:43.537221 2620 policy_none.go:49] "None policy: Start" Jul 6 23:43:43.537812 kubelet[2620]: I0706 23:43:43.537798 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:43:43.537858 kubelet[2620]: I0706 23:43:43.537820 2620 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:43:43.537968 kubelet[2620]: I0706 23:43:43.537953 2620 state_mem.go:75] "Updated machine memory state" Jul 6 23:43:43.542385 kubelet[2620]: I0706 23:43:43.542354 2620 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:43:43.543171 kubelet[2620]: I0706 23:43:43.542945 2620 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 6 23:43:43.543171 kubelet[2620]: I0706 23:43:43.542961 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:43:43.543250 kubelet[2620]: I0706 23:43:43.543192 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:43:43.613238 kubelet[2620]: E0706 23:43:43.613176 2620 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:43:43.645737 kubelet[2620]: I0706 23:43:43.645624 2620 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:43:43.663280 kubelet[2620]: I0706 23:43:43.663091 2620 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:43:43.663280 kubelet[2620]: I0706 23:43:43.663208 2620 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:43:43.683539 kubelet[2620]: I0706 23:43:43.683484 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:43.683539 kubelet[2620]: I0706 23:43:43.683534 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:43.683703 kubelet[2620]: I0706 23:43:43.683558 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:43.683703 kubelet[2620]: I0706 23:43:43.683580 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:43.683703 kubelet[2620]: I0706 23:43:43.683616 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:43.683703 kubelet[2620]: I0706 23:43:43.683640 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:43.683703 kubelet[2620]: I0706 23:43:43.683661 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13db430b4a1b0f36936d5b0d82854b9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13db430b4a1b0f36936d5b0d82854b9d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:43.683806 kubelet[2620]: I0706 23:43:43.683680 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:43:43.683806 kubelet[2620]: I0706 23:43:43.683695 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:43:44.475649 kubelet[2620]: I0706 23:43:44.475552 2620 apiserver.go:52] "Watching apiserver" Jul 6 23:43:44.482671 kubelet[2620]: I0706 23:43:44.482599 2620 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:43:44.516783 kubelet[2620]: I0706 23:43:44.516342 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5163265849999998 podStartE2EDuration="1.516326585s" podCreationTimestamp="2025-07-06 23:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:43:44.516047096 +0000 UTC m=+1.096398834" watchObservedRunningTime="2025-07-06 23:43:44.516326585 +0000 UTC m=+1.096678283" Jul 6 23:43:44.527845 kubelet[2620]: E0706 23:43:44.527804 2620 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:43:44.536018 kubelet[2620]: I0706 23:43:44.535753 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.535722128 podStartE2EDuration="1.535722128s" podCreationTimestamp="2025-07-06 23:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:43:44.526723438 +0000 UTC m=+1.107075176" watchObservedRunningTime="2025-07-06 23:43:44.535722128 +0000 UTC m=+1.116073866" Jul 6 23:43:44.536018 kubelet[2620]: I0706 23:43:44.535884 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.535877897 podStartE2EDuration="1.535877897s" podCreationTimestamp="2025-07-06 23:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:43:44.535606091 +0000 UTC m=+1.115957829" watchObservedRunningTime="2025-07-06 23:43:44.535877897 +0000 UTC m=+1.116229635" Jul 6 23:43:49.030832 kubelet[2620]: I0706 23:43:49.030734 2620 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:43:49.031273 containerd[1497]: time="2025-07-06T23:43:49.031173350Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:43:49.031515 kubelet[2620]: I0706 23:43:49.031438 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:43:50.011434 systemd[1]: Created slice kubepods-besteffort-poded4c81a1_8ce8_4d56_b41f_77fbacaa0c09.slice - libcontainer container kubepods-besteffort-poded4c81a1_8ce8_4d56_b41f_77fbacaa0c09.slice. 
Jul 6 23:43:50.023070 kubelet[2620]: I0706 23:43:50.023004 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4td\" (UniqueName: \"kubernetes.io/projected/ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09-kube-api-access-tl4td\") pod \"kube-proxy-cqp9q\" (UID: \"ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09\") " pod="kube-system/kube-proxy-cqp9q" Jul 6 23:43:50.023070 kubelet[2620]: I0706 23:43:50.023066 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09-xtables-lock\") pod \"kube-proxy-cqp9q\" (UID: \"ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09\") " pod="kube-system/kube-proxy-cqp9q" Jul 6 23:43:50.023233 kubelet[2620]: I0706 23:43:50.023086 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09-kube-proxy\") pod \"kube-proxy-cqp9q\" (UID: \"ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09\") " pod="kube-system/kube-proxy-cqp9q" Jul 6 23:43:50.023233 kubelet[2620]: I0706 23:43:50.023118 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09-lib-modules\") pod \"kube-proxy-cqp9q\" (UID: \"ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09\") " pod="kube-system/kube-proxy-cqp9q" Jul 6 23:43:50.204638 systemd[1]: Created slice kubepods-besteffort-pod6c257564_29ea_4128_8bc2_b571d40fba53.slice - libcontainer container kubepods-besteffort-pod6c257564_29ea_4128_8bc2_b571d40fba53.slice. 
Jul 6 23:43:50.225073 kubelet[2620]: I0706 23:43:50.225023 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c257564-29ea-4128-8bc2-b571d40fba53-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-tgvrm\" (UID: \"6c257564-29ea-4128-8bc2-b571d40fba53\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tgvrm" Jul 6 23:43:50.225539 kubelet[2620]: I0706 23:43:50.225168 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r52g\" (UniqueName: \"kubernetes.io/projected/6c257564-29ea-4128-8bc2-b571d40fba53-kube-api-access-7r52g\") pod \"tigera-operator-5bf8dfcb4-tgvrm\" (UID: \"6c257564-29ea-4128-8bc2-b571d40fba53\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tgvrm" Jul 6 23:43:50.322609 containerd[1497]: time="2025-07-06T23:43:50.322488848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqp9q,Uid:ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09,Namespace:kube-system,Attempt:0,}" Jul 6 23:43:50.351705 containerd[1497]: time="2025-07-06T23:43:50.351637599Z" level=info msg="connecting to shim 6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97" address="unix:///run/containerd/s/a5c224de23d3488b5b97de2bb10478a23dd17f67cb0dad83fbcfc419d7a18c70" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:43:50.382178 systemd[1]: Started cri-containerd-6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97.scope - libcontainer container 6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97. 
Jul 6 23:43:50.408559 containerd[1497]: time="2025-07-06T23:43:50.408506687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqp9q,Uid:ed4c81a1-8ce8-4d56-b41f-77fbacaa0c09,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97\"" Jul 6 23:43:50.414034 containerd[1497]: time="2025-07-06T23:43:50.413972358Z" level=info msg="CreateContainer within sandbox \"6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:43:50.428353 containerd[1497]: time="2025-07-06T23:43:50.428292707Z" level=info msg="Container 020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:43:50.435650 containerd[1497]: time="2025-07-06T23:43:50.435601341Z" level=info msg="CreateContainer within sandbox \"6bccc386e8cfb5c32f4751c85881d471691cbc15629c9f9eeaa459e7fc2b3f97\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c\"" Jul 6 23:43:50.436573 containerd[1497]: time="2025-07-06T23:43:50.436503737Z" level=info msg="StartContainer for \"020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c\"" Jul 6 23:43:50.438055 containerd[1497]: time="2025-07-06T23:43:50.437978283Z" level=info msg="connecting to shim 020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c" address="unix:///run/containerd/s/a5c224de23d3488b5b97de2bb10478a23dd17f67cb0dad83fbcfc419d7a18c70" protocol=ttrpc version=3 Jul 6 23:43:50.461629 systemd[1]: Started cri-containerd-020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c.scope - libcontainer container 020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c. 
Jul 6 23:43:50.500574 containerd[1497]: time="2025-07-06T23:43:50.500512614Z" level=info msg="StartContainer for \"020cc8e96a60650fb3065c3f6c3a1d7aca615219ca7e5c846bad224edf89933c\" returns successfully" Jul 6 23:43:50.509289 containerd[1497]: time="2025-07-06T23:43:50.509252022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tgvrm,Uid:6c257564-29ea-4128-8bc2-b571d40fba53,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:43:50.530289 containerd[1497]: time="2025-07-06T23:43:50.530230955Z" level=info msg="connecting to shim 8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107" address="unix:///run/containerd/s/15a2287bbf2adacd7bb3667ee87f5b0b191591ea3f30486ee2b37015f4a112b4" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:43:50.564646 systemd[1]: Started cri-containerd-8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107.scope - libcontainer container 8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107. Jul 6 23:43:50.602150 containerd[1497]: time="2025-07-06T23:43:50.602031072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tgvrm,Uid:6c257564-29ea-4128-8bc2-b571d40fba53,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107\"" Jul 6 23:43:50.604800 containerd[1497]: time="2025-07-06T23:43:50.604748823Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:43:51.165299 kubelet[2620]: I0706 23:43:51.165184 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqp9q" podStartSLOduration=2.165164641 podStartE2EDuration="2.165164641s" podCreationTimestamp="2025-07-06 23:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:43:50.548605005 +0000 UTC m=+7.128956783" watchObservedRunningTime="2025-07-06 23:43:51.165164641 
+0000 UTC m=+7.745516379" Jul 6 23:43:51.167183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825511470.mount: Deactivated successfully. Jul 6 23:43:51.815986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203061455.mount: Deactivated successfully. Jul 6 23:43:52.293049 containerd[1497]: time="2025-07-06T23:43:52.292991724Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:43:52.293558 containerd[1497]: time="2025-07-06T23:43:52.293527295Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:43:52.294416 containerd[1497]: time="2025-07-06T23:43:52.294376344Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:43:52.296427 containerd[1497]: time="2025-07-06T23:43:52.296379116Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:43:52.297091 containerd[1497]: time="2025-07-06T23:43:52.297034117Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.69223296s" Jul 6 23:43:52.297091 containerd[1497]: time="2025-07-06T23:43:52.297071326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:43:52.302755 containerd[1497]: time="2025-07-06T23:43:52.302697108Z" level=info msg="CreateContainer within sandbox 
\"8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:43:52.310359 containerd[1497]: time="2025-07-06T23:43:52.310300136Z" level=info msg="Container 73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:43:52.317502 containerd[1497]: time="2025-07-06T23:43:52.317444652Z" level=info msg="CreateContainer within sandbox \"8bf71d12bf0f419310bc21d4f8b4dd7e9f9b50fba1f135c837fd205f179b3107\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35\"" Jul 6 23:43:52.318161 containerd[1497]: time="2025-07-06T23:43:52.318101453Z" level=info msg="StartContainer for \"73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35\"" Jul 6 23:43:52.320381 containerd[1497]: time="2025-07-06T23:43:52.320333721Z" level=info msg="connecting to shim 73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35" address="unix:///run/containerd/s/15a2287bbf2adacd7bb3667ee87f5b0b191591ea3f30486ee2b37015f4a112b4" protocol=ttrpc version=3 Jul 6 23:43:52.343632 systemd[1]: Started cri-containerd-73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35.scope - libcontainer container 73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35. 
Jul 6 23:43:52.378451 containerd[1497]: time="2025-07-06T23:43:52.377649004Z" level=info msg="StartContainer for \"73e7c8be31b16ee473986eaf7ceec021c39852818a61fadd456e2dbc171d1f35\" returns successfully" Jul 6 23:43:54.513441 kubelet[2620]: I0706 23:43:54.513152 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-tgvrm" podStartSLOduration=2.817119815 podStartE2EDuration="4.513122274s" podCreationTimestamp="2025-07-06 23:43:50 +0000 UTC" firstStartedPulling="2025-07-06 23:43:50.60401051 +0000 UTC m=+7.184362248" lastFinishedPulling="2025-07-06 23:43:52.300009408 +0000 UTC m=+8.880364707" observedRunningTime="2025-07-06 23:43:52.553553743 +0000 UTC m=+9.133905481" watchObservedRunningTime="2025-07-06 23:43:54.513122274 +0000 UTC m=+11.093474012" Jul 6 23:43:57.990525 sudo[1709]: pam_unix(sudo:session): session closed for user root Jul 6 23:43:58.010855 sshd[1708]: Connection closed by 10.0.0.1 port 38394 Jul 6 23:43:58.013088 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 6 23:43:58.022565 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:38394.service: Deactivated successfully. Jul 6 23:43:58.029338 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:43:58.034367 systemd[1]: session-7.scope: Consumed 6.536s CPU time, 228.6M memory peak. Jul 6 23:43:58.041751 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:43:58.047663 systemd-logind[1475]: Removed session 7. Jul 6 23:44:01.477950 update_engine[1476]: I20250706 23:44:01.477432 1476 update_attempter.cc:509] Updating boot flags... Jul 6 23:44:01.785924 systemd[1]: Created slice kubepods-besteffort-pod6bf057b3_4570_4b74_86aa_dd596cd1be40.slice - libcontainer container kubepods-besteffort-pod6bf057b3_4570_4b74_86aa_dd596cd1be40.slice. 
Jul 6 23:44:01.816757 kubelet[2620]: I0706 23:44:01.816710 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bf057b3-4570-4b74-86aa-dd596cd1be40-tigera-ca-bundle\") pod \"calico-typha-7f5b77b677-r95g4\" (UID: \"6bf057b3-4570-4b74-86aa-dd596cd1be40\") " pod="calico-system/calico-typha-7f5b77b677-r95g4" Jul 6 23:44:01.817120 kubelet[2620]: I0706 23:44:01.817090 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6bf057b3-4570-4b74-86aa-dd596cd1be40-typha-certs\") pod \"calico-typha-7f5b77b677-r95g4\" (UID: \"6bf057b3-4570-4b74-86aa-dd596cd1be40\") " pod="calico-system/calico-typha-7f5b77b677-r95g4" Jul 6 23:44:01.817172 kubelet[2620]: I0706 23:44:01.817145 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v8w\" (UniqueName: \"kubernetes.io/projected/6bf057b3-4570-4b74-86aa-dd596cd1be40-kube-api-access-x7v8w\") pod \"calico-typha-7f5b77b677-r95g4\" (UID: \"6bf057b3-4570-4b74-86aa-dd596cd1be40\") " pod="calico-system/calico-typha-7f5b77b677-r95g4" Jul 6 23:44:01.911222 systemd[1]: Created slice kubepods-besteffort-pod39323c88_14e2_43dc_aa3b_49f319df4c97.slice - libcontainer container kubepods-besteffort-pod39323c88_14e2_43dc_aa3b_49f319df4c97.slice. 
Jul 6 23:44:02.018824 kubelet[2620]: I0706 23:44:02.018775 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-flexvol-driver-host\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.018962 kubelet[2620]: I0706 23:44:02.018864 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-var-run-calico\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.018962 kubelet[2620]: I0706 23:44:02.018884 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-cni-log-dir\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.018962 kubelet[2620]: I0706 23:44:02.018905 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-policysync\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019055 kubelet[2620]: I0706 23:44:02.018968 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-cni-net-dir\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019055 kubelet[2620]: I0706 23:44:02.018985 2620 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-lib-modules\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019055 kubelet[2620]: I0706 23:44:02.019002 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39323c88-14e2-43dc-aa3b-49f319df4c97-node-certs\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019055 kubelet[2620]: I0706 23:44:02.019052 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-cni-bin-dir\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019152 kubelet[2620]: I0706 23:44:02.019077 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-xtables-lock\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019152 kubelet[2620]: I0706 23:44:02.019126 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39323c88-14e2-43dc-aa3b-49f319df4c97-tigera-ca-bundle\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019152 kubelet[2620]: I0706 23:44:02.019144 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39323c88-14e2-43dc-aa3b-49f319df4c97-var-lib-calico\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.019227 kubelet[2620]: I0706 23:44:02.019163 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqhb\" (UniqueName: \"kubernetes.io/projected/39323c88-14e2-43dc-aa3b-49f319df4c97-kube-api-access-gxqhb\") pod \"calico-node-2zffg\" (UID: \"39323c88-14e2-43dc-aa3b-49f319df4c97\") " pod="calico-system/calico-node-2zffg" Jul 6 23:44:02.124568 containerd[1497]: time="2025-07-06T23:44:02.123311108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5b77b677-r95g4,Uid:6bf057b3-4570-4b74-86aa-dd596cd1be40,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:02.132870 kubelet[2620]: E0706 23:44:02.131751 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.132870 kubelet[2620]: W0706 23:44:02.131795 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.132870 kubelet[2620]: E0706 23:44:02.131818 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.135738 kubelet[2620]: E0706 23:44:02.135695 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.135738 kubelet[2620]: W0706 23:44:02.135719 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.135738 kubelet[2620]: E0706 23:44:02.135739 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.150201 kubelet[2620]: E0706 23:44:02.150106 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.150201 kubelet[2620]: W0706 23:44:02.150129 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.150201 kubelet[2620]: E0706 23:44:02.150148 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.172075 kubelet[2620]: E0706 23:44:02.171922 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9sc4" podUID="4b809c8d-416a-43ae-b2d8-1c4702f886b2" Jul 6 23:44:02.200492 containerd[1497]: time="2025-07-06T23:44:02.199978342Z" level=info msg="connecting to shim c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0" address="unix:///run/containerd/s/666b4411bbc5a93ced8e63902a866a890709d8beac6f5fa159c7c434c669b5e1" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:02.223671 containerd[1497]: time="2025-07-06T23:44:02.223620091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zffg,Uid:39323c88-14e2-43dc-aa3b-49f319df4c97,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:02.234061 kubelet[2620]: E0706 23:44:02.234007 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.234061 kubelet[2620]: W0706 23:44:02.234049 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.234061 kubelet[2620]: E0706 23:44:02.234070 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.235352 kubelet[2620]: E0706 23:44:02.235325 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.235352 kubelet[2620]: W0706 23:44:02.235345 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.235477 kubelet[2620]: E0706 23:44:02.235362 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.235605 kubelet[2620]: E0706 23:44:02.235588 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.235605 kubelet[2620]: W0706 23:44:02.235600 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.235663 kubelet[2620]: E0706 23:44:02.235610 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.243116 kubelet[2620]: E0706 23:44:02.243071 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.243116 kubelet[2620]: W0706 23:44:02.243094 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.243116 kubelet[2620]: E0706 23:44:02.243106 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.291262 containerd[1497]: time="2025-07-06T23:44:02.290587069Z" level=info msg="connecting to shim b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5" address="unix:///run/containerd/s/a270fd1b7172d2617fdea856e56d4689669c0663f40c9728bfa949bcde80ce2f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:02.322265 kubelet[2620]: E0706 23:44:02.322231 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.322265 kubelet[2620]: W0706 23:44:02.322255 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.322462 kubelet[2620]: E0706 23:44:02.322282 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.322462 kubelet[2620]: I0706 23:44:02.322311 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvss\" (UniqueName: \"kubernetes.io/projected/4b809c8d-416a-43ae-b2d8-1c4702f886b2-kube-api-access-jrvss\") pod \"csi-node-driver-m9sc4\" (UID: \"4b809c8d-416a-43ae-b2d8-1c4702f886b2\") " pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:02.322780 kubelet[2620]: E0706 23:44:02.322721 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.322780 kubelet[2620]: W0706 23:44:02.322741 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.322780 kubelet[2620]: E0706 23:44:02.322754 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.322780 kubelet[2620]: I0706 23:44:02.322778 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b809c8d-416a-43ae-b2d8-1c4702f886b2-kubelet-dir\") pod \"csi-node-driver-m9sc4\" (UID: \"4b809c8d-416a-43ae-b2d8-1c4702f886b2\") " pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:02.323474 kubelet[2620]: E0706 23:44:02.323300 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.323474 kubelet[2620]: W0706 23:44:02.323316 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.323474 kubelet[2620]: E0706 23:44:02.323328 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.323474 kubelet[2620]: I0706 23:44:02.323345 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b809c8d-416a-43ae-b2d8-1c4702f886b2-registration-dir\") pod \"csi-node-driver-m9sc4\" (UID: \"4b809c8d-416a-43ae-b2d8-1c4702f886b2\") " pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:02.324416 kubelet[2620]: E0706 23:44:02.324379 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.324498 kubelet[2620]: W0706 23:44:02.324415 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.324498 kubelet[2620]: E0706 23:44:02.324452 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.324498 kubelet[2620]: I0706 23:44:02.324471 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b809c8d-416a-43ae-b2d8-1c4702f886b2-socket-dir\") pod \"csi-node-driver-m9sc4\" (UID: \"4b809c8d-416a-43ae-b2d8-1c4702f886b2\") " pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:02.325071 kubelet[2620]: E0706 23:44:02.324660 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.325071 kubelet[2620]: W0706 23:44:02.324678 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.325714 kubelet[2620]: E0706 23:44:02.325651 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.326024 kubelet[2620]: I0706 23:44:02.325837 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4b809c8d-416a-43ae-b2d8-1c4702f886b2-varrun\") pod \"csi-node-driver-m9sc4\" (UID: \"4b809c8d-416a-43ae-b2d8-1c4702f886b2\") " pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:02.326024 kubelet[2620]: E0706 23:44:02.325925 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.326024 kubelet[2620]: W0706 23:44:02.325934 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.326024 kubelet[2620]: E0706 23:44:02.325951 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.327136 kubelet[2620]: E0706 23:44:02.327042 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.327136 kubelet[2620]: W0706 23:44:02.327061 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.327808 kubelet[2620]: E0706 23:44:02.327200 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.329850 kubelet[2620]: E0706 23:44:02.328747 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.329850 kubelet[2620]: W0706 23:44:02.328754 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.329850 kubelet[2620]: E0706 23:44:02.328761 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.329850 kubelet[2620]: E0706 23:44:02.329070 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.330069 kubelet[2620]: W0706 23:44:02.329079 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.330069 kubelet[2620]: E0706 23:44:02.329088 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.330603 systemd[1]: Started cri-containerd-b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5.scope - libcontainer container b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5. Jul 6 23:44:02.335803 systemd[1]: Started cri-containerd-c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0.scope - libcontainer container c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0. 
Jul 6 23:44:02.383331 containerd[1497]: time="2025-07-06T23:44:02.382929347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zffg,Uid:39323c88-14e2-43dc-aa3b-49f319df4c97,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\"" Jul 6 23:44:02.391913 containerd[1497]: time="2025-07-06T23:44:02.391847582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:44:02.394251 containerd[1497]: time="2025-07-06T23:44:02.394204524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5b77b677-r95g4,Uid:6bf057b3-4570-4b74-86aa-dd596cd1be40,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0\"" Jul 6 23:44:02.427301 kubelet[2620]: E0706 23:44:02.427260 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.427301 kubelet[2620]: W0706 23:44:02.427293 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.427301 kubelet[2620]: E0706 23:44:02.427314 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.431268 kubelet[2620]: E0706 23:44:02.431252 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.431268 kubelet[2620]: W0706 23:44:02.431264 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.431318 kubelet[2620]: E0706 23:44:02.431277 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.431484 kubelet[2620]: E0706 23:44:02.431470 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.431518 kubelet[2620]: W0706 23:44:02.431484 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.431518 kubelet[2620]: E0706 23:44:02.431497 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.431891 kubelet[2620]: E0706 23:44:02.431869 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.431891 kubelet[2620]: W0706 23:44:02.431886 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.431947 kubelet[2620]: E0706 23:44:02.431912 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.432061 kubelet[2620]: E0706 23:44:02.432047 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432061 kubelet[2620]: W0706 23:44:02.432059 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432135 kubelet[2620]: E0706 23:44:02.432078 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.432233 kubelet[2620]: E0706 23:44:02.432219 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432233 kubelet[2620]: W0706 23:44:02.432230 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432279 kubelet[2620]: E0706 23:44:02.432245 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.432383 kubelet[2620]: E0706 23:44:02.432370 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432383 kubelet[2620]: W0706 23:44:02.432381 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432504 kubelet[2620]: E0706 23:44:02.432490 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.432553 kubelet[2620]: E0706 23:44:02.432544 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432574 kubelet[2620]: W0706 23:44:02.432553 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432574 kubelet[2620]: E0706 23:44:02.432561 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.432696 kubelet[2620]: E0706 23:44:02.432685 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432729 kubelet[2620]: W0706 23:44:02.432696 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432729 kubelet[2620]: E0706 23:44:02.432722 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.432935 kubelet[2620]: E0706 23:44:02.432921 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.432958 kubelet[2620]: W0706 23:44:02.432935 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.432958 kubelet[2620]: E0706 23:44:02.432949 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:02.433127 kubelet[2620]: E0706 23:44:02.433116 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.433149 kubelet[2620]: W0706 23:44:02.433127 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.433149 kubelet[2620]: E0706 23:44:02.433136 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:44:02.443659 kubelet[2620]: E0706 23:44:02.443623 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:44:02.443659 kubelet[2620]: W0706 23:44:02.443645 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:44:02.443790 kubelet[2620]: E0706 23:44:02.443675 2620 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:44:03.191176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913081457.mount: Deactivated successfully. Jul 6 23:44:03.301986 containerd[1497]: time="2025-07-06T23:44:03.301933007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:03.302454 containerd[1497]: time="2025-07-06T23:44:03.302417371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 6 23:44:03.303213 containerd[1497]: time="2025-07-06T23:44:03.303172302Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:03.305652 containerd[1497]: time="2025-07-06T23:44:03.305618726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:03.307259 containerd[1497]: time="2025-07-06T23:44:03.307124307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id 
\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 915.228636ms" Jul 6 23:44:03.307259 containerd[1497]: time="2025-07-06T23:44:03.307167354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:44:03.310120 containerd[1497]: time="2025-07-06T23:44:03.310017928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:44:03.311426 containerd[1497]: time="2025-07-06T23:44:03.311296070Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:44:03.334833 containerd[1497]: time="2025-07-06T23:44:03.334776378Z" level=info msg="Container 820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:03.349571 containerd[1497]: time="2025-07-06T23:44:03.349528455Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\"" Jul 6 23:44:03.351750 containerd[1497]: time="2025-07-06T23:44:03.351525521Z" level=info msg="StartContainer for \"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\"" Jul 6 23:44:03.354624 containerd[1497]: time="2025-07-06T23:44:03.353948621Z" level=info msg="connecting to shim 820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d" address="unix:///run/containerd/s/a270fd1b7172d2617fdea856e56d4689669c0663f40c9728bfa949bcde80ce2f" 
protocol=ttrpc version=3 Jul 6 23:44:03.384613 systemd[1]: Started cri-containerd-820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d.scope - libcontainer container 820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d. Jul 6 23:44:03.428527 containerd[1497]: time="2025-07-06T23:44:03.428468454Z" level=info msg="StartContainer for \"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\" returns successfully" Jul 6 23:44:03.465272 systemd[1]: cri-containerd-820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d.scope: Deactivated successfully. Jul 6 23:44:03.504658 containerd[1497]: time="2025-07-06T23:44:03.504598126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\" id:\"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\" pid:3244 exited_at:{seconds:1751845443 nanos:483307717}" Jul 6 23:44:03.504658 containerd[1497]: time="2025-07-06T23:44:03.504634932Z" level=info msg="received exit event container_id:\"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\" id:\"820c4d27769f11c4b0b26ad316fc7a8339f1533654535ce869f589f0d38f6f6d\" pid:3244 exited_at:{seconds:1751845443 nanos:483307717}" Jul 6 23:44:04.505960 kubelet[2620]: E0706 23:44:04.505899 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9sc4" podUID="4b809c8d-416a-43ae-b2d8-1c4702f886b2" Jul 6 23:44:05.481056 containerd[1497]: time="2025-07-06T23:44:05.481015207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:05.481788 containerd[1497]: time="2025-07-06T23:44:05.481764649Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 6 23:44:05.482767 containerd[1497]: time="2025-07-06T23:44:05.482720245Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:05.484869 containerd[1497]: time="2025-07-06T23:44:05.484824587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:05.486043 containerd[1497]: time="2025-07-06T23:44:05.486013460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.175894715s" Jul 6 23:44:05.486445 containerd[1497]: time="2025-07-06T23:44:05.486337953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:44:05.488777 containerd[1497]: time="2025-07-06T23:44:05.488737303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:44:05.503231 containerd[1497]: time="2025-07-06T23:44:05.503167210Z" level=info msg="CreateContainer within sandbox \"c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:44:05.516065 containerd[1497]: time="2025-07-06T23:44:05.516002977Z" level=info msg="Container 5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:05.522856 containerd[1497]: time="2025-07-06T23:44:05.522790241Z" 
level=info msg="CreateContainer within sandbox \"c0faa00de6f02e4fd95c622f1588e9b7726e00cde7e8e462820c94ddc4fc13d0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0\"" Jul 6 23:44:05.523597 containerd[1497]: time="2025-07-06T23:44:05.523520440Z" level=info msg="StartContainer for \"5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0\"" Jul 6 23:44:05.525019 containerd[1497]: time="2025-07-06T23:44:05.524988238Z" level=info msg="connecting to shim 5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0" address="unix:///run/containerd/s/666b4411bbc5a93ced8e63902a866a890709d8beac6f5fa159c7c434c669b5e1" protocol=ttrpc version=3 Jul 6 23:44:05.547597 systemd[1]: Started cri-containerd-5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0.scope - libcontainer container 5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0. Jul 6 23:44:05.646482 containerd[1497]: time="2025-07-06T23:44:05.646423227Z" level=info msg="StartContainer for \"5c81f0f7bccdfe217970953072ac70a4077975db4f4cf0ebfce2c029c234b9b0\" returns successfully" Jul 6 23:44:06.505171 kubelet[2620]: E0706 23:44:06.505130 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9sc4" podUID="4b809c8d-416a-43ae-b2d8-1c4702f886b2" Jul 6 23:44:07.614286 kubelet[2620]: I0706 23:44:07.614248 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:08.506095 kubelet[2620]: E0706 23:44:08.506009 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-m9sc4" podUID="4b809c8d-416a-43ae-b2d8-1c4702f886b2" Jul 6 23:44:09.639436 containerd[1497]: time="2025-07-06T23:44:09.639340981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:09.640517 containerd[1497]: time="2025-07-06T23:44:09.639649305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:44:09.640517 containerd[1497]: time="2025-07-06T23:44:09.640313240Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:09.642717 containerd[1497]: time="2025-07-06T23:44:09.642665817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:09.643422 containerd[1497]: time="2025-07-06T23:44:09.643199053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 4.15433641s" Jul 6 23:44:09.643422 containerd[1497]: time="2025-07-06T23:44:09.643234939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:44:09.645547 containerd[1497]: time="2025-07-06T23:44:09.645498743Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:44:09.654448 
containerd[1497]: time="2025-07-06T23:44:09.653535454Z" level=info msg="Container dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:09.661203 containerd[1497]: time="2025-07-06T23:44:09.661074374Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\"" Jul 6 23:44:09.662032 containerd[1497]: time="2025-07-06T23:44:09.661998506Z" level=info msg="StartContainer for \"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\"" Jul 6 23:44:09.663803 containerd[1497]: time="2025-07-06T23:44:09.663774520Z" level=info msg="connecting to shim dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc" address="unix:///run/containerd/s/a270fd1b7172d2617fdea856e56d4689669c0663f40c9728bfa949bcde80ce2f" protocol=ttrpc version=3 Jul 6 23:44:09.688647 systemd[1]: Started cri-containerd-dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc.scope - libcontainer container dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc. Jul 6 23:44:09.746808 containerd[1497]: time="2025-07-06T23:44:09.746771928Z" level=info msg="StartContainer for \"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\" returns successfully" Jul 6 23:44:10.378300 systemd[1]: cri-containerd-dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc.scope: Deactivated successfully. Jul 6 23:44:10.378771 systemd[1]: cri-containerd-dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc.scope: Consumed 539ms CPU time, 174.1M memory peak, 2.3M read from disk, 165.8M written to disk. 
Jul 6 23:44:10.380021 containerd[1497]: time="2025-07-06T23:44:10.379862674Z" level=info msg="received exit event container_id:\"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\" id:\"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\" pid:3347 exited_at:{seconds:1751845450 nanos:379283834}" Jul 6 23:44:10.380533 containerd[1497]: time="2025-07-06T23:44:10.380483040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\" id:\"dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc\" pid:3347 exited_at:{seconds:1751845450 nanos:379283834}" Jul 6 23:44:10.401228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dba1798cb02de462dc81eefe9025a62bf73b8046dcdd3b86507e9342b6154dcc-rootfs.mount: Deactivated successfully. Jul 6 23:44:10.412336 kubelet[2620]: I0706 23:44:10.412300 2620 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:44:10.510746 systemd[1]: Created slice kubepods-besteffort-pod4b809c8d_416a_43ae_b2d8_1c4702f886b2.slice - libcontainer container kubepods-besteffort-pod4b809c8d_416a_43ae_b2d8_1c4702f886b2.slice. 
Jul 6 23:44:10.530393 containerd[1497]: time="2025-07-06T23:44:10.529240122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9sc4,Uid:4b809c8d-416a-43ae-b2d8-1c4702f886b2,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:10.556854 kubelet[2620]: I0706 23:44:10.556788 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f5b77b677-r95g4" podStartSLOduration=6.4640741219999995 podStartE2EDuration="9.556770582s" podCreationTimestamp="2025-07-06 23:44:01 +0000 UTC" firstStartedPulling="2025-07-06 23:44:02.395476831 +0000 UTC m=+18.975828569" lastFinishedPulling="2025-07-06 23:44:05.488173291 +0000 UTC m=+22.068525029" observedRunningTime="2025-07-06 23:44:06.62480801 +0000 UTC m=+23.205159788" watchObservedRunningTime="2025-07-06 23:44:10.556770582 +0000 UTC m=+27.137122320" Jul 6 23:44:10.576380 systemd[1]: Created slice kubepods-burstable-pod288be6c7_a3f8_4852_8e60_18dd898545eb.slice - libcontainer container kubepods-burstable-pod288be6c7_a3f8_4852_8e60_18dd898545eb.slice. Jul 6 23:44:10.595726 systemd[1]: Created slice kubepods-burstable-pod8e0e80fe_54df_43e2_a2ba_b36484172017.slice - libcontainer container kubepods-burstable-pod8e0e80fe_54df_43e2_a2ba_b36484172017.slice. Jul 6 23:44:10.605337 systemd[1]: Created slice kubepods-besteffort-pode9b113d2_3320_44e5_b15f_258eb075b4ee.slice - libcontainer container kubepods-besteffort-pode9b113d2_3320_44e5_b15f_258eb075b4ee.slice. Jul 6 23:44:10.617789 systemd[1]: Created slice kubepods-besteffort-podfc8416c7_bd5b_40b0_95a8_3be1e242a65b.slice - libcontainer container kubepods-besteffort-podfc8416c7_bd5b_40b0_95a8_3be1e242a65b.slice. Jul 6 23:44:10.626481 systemd[1]: Created slice kubepods-besteffort-podc02b118e_f7bf_40f8_822a_eb78b9bda868.slice - libcontainer container kubepods-besteffort-podc02b118e_f7bf_40f8_822a_eb78b9bda868.slice. 
Jul 6 23:44:10.632664 systemd[1]: Created slice kubepods-besteffort-pode99f84f0_d61f_40f9_ac56_5ed80e1ac00f.slice - libcontainer container kubepods-besteffort-pode99f84f0_d61f_40f9_ac56_5ed80e1ac00f.slice. Jul 6 23:44:10.644027 systemd[1]: Created slice kubepods-besteffort-podd453398f_d6b2_423a_89de_9abb6888d53a.slice - libcontainer container kubepods-besteffort-podd453398f_d6b2_423a_89de_9abb6888d53a.slice. Jul 6 23:44:10.648453 containerd[1497]: time="2025-07-06T23:44:10.648416818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:44:10.689997 kubelet[2620]: I0706 23:44:10.689947 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/288be6c7-a3f8-4852-8e60-18dd898545eb-config-volume\") pod \"coredns-7c65d6cfc9-qfcd4\" (UID: \"288be6c7-a3f8-4852-8e60-18dd898545eb\") " pod="kube-system/coredns-7c65d6cfc9-qfcd4" Jul 6 23:44:10.690596 kubelet[2620]: I0706 23:44:10.690576 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-ca-bundle\") pod \"whisker-6d45f6c4cd-jznvb\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " pod="calico-system/whisker-6d45f6c4cd-jznvb" Jul 6 23:44:10.690732 kubelet[2620]: I0706 23:44:10.690717 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc8416c7-bd5b-40b0-95a8-3be1e242a65b-calico-apiserver-certs\") pod \"calico-apiserver-67b6cbff4d-w8zph\" (UID: \"fc8416c7-bd5b-40b0-95a8-3be1e242a65b\") " pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" Jul 6 23:44:10.690805 kubelet[2620]: I0706 23:44:10.690793 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jrk\" (UniqueName: 
\"kubernetes.io/projected/e9b113d2-3320-44e5-b15f-258eb075b4ee-kube-api-access-x2jrk\") pod \"goldmane-58fd7646b9-k5kn7\" (UID: \"e9b113d2-3320-44e5-b15f-258eb075b4ee\") " pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:10.690877 kubelet[2620]: I0706 23:44:10.690865 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drx6\" (UniqueName: \"kubernetes.io/projected/8e0e80fe-54df-43e2-a2ba-b36484172017-kube-api-access-5drx6\") pod \"coredns-7c65d6cfc9-bzjfx\" (UID: \"8e0e80fe-54df-43e2-a2ba-b36484172017\") " pod="kube-system/coredns-7c65d6cfc9-bzjfx" Jul 6 23:44:10.690954 kubelet[2620]: I0706 23:44:10.690943 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c02b118e-f7bf-40f8-822a-eb78b9bda868-tigera-ca-bundle\") pod \"calico-kube-controllers-6bf774cfd7-dkg74\" (UID: \"c02b118e-f7bf-40f8-822a-eb78b9bda868\") " pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" Jul 6 23:44:10.691039 kubelet[2620]: I0706 23:44:10.691026 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzkp\" (UniqueName: \"kubernetes.io/projected/d453398f-d6b2-423a-89de-9abb6888d53a-kube-api-access-gfzkp\") pod \"calico-apiserver-67b6cbff4d-5nxk9\" (UID: \"d453398f-d6b2-423a-89de-9abb6888d53a\") " pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" Jul 6 23:44:10.691169 kubelet[2620]: I0706 23:44:10.691124 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e9b113d2-3320-44e5-b15f-258eb075b4ee-goldmane-key-pair\") pod \"goldmane-58fd7646b9-k5kn7\" (UID: \"e9b113d2-3320-44e5-b15f-258eb075b4ee\") " pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:10.691169 kubelet[2620]: I0706 23:44:10.691143 2620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-backend-key-pair\") pod \"whisker-6d45f6c4cd-jznvb\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " pod="calico-system/whisker-6d45f6c4cd-jznvb" Jul 6 23:44:10.691375 kubelet[2620]: I0706 23:44:10.691341 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4cw\" (UniqueName: \"kubernetes.io/projected/c02b118e-f7bf-40f8-822a-eb78b9bda868-kube-api-access-xn4cw\") pod \"calico-kube-controllers-6bf774cfd7-dkg74\" (UID: \"c02b118e-f7bf-40f8-822a-eb78b9bda868\") " pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" Jul 6 23:44:10.691617 kubelet[2620]: I0706 23:44:10.691444 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf4qz\" (UniqueName: \"kubernetes.io/projected/fc8416c7-bd5b-40b0-95a8-3be1e242a65b-kube-api-access-cf4qz\") pod \"calico-apiserver-67b6cbff4d-w8zph\" (UID: \"fc8416c7-bd5b-40b0-95a8-3be1e242a65b\") " pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" Jul 6 23:44:10.691617 kubelet[2620]: I0706 23:44:10.691483 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b113d2-3320-44e5-b15f-258eb075b4ee-config\") pod \"goldmane-58fd7646b9-k5kn7\" (UID: \"e9b113d2-3320-44e5-b15f-258eb075b4ee\") " pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:10.691617 kubelet[2620]: I0706 23:44:10.691509 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bsvc\" (UniqueName: \"kubernetes.io/projected/288be6c7-a3f8-4852-8e60-18dd898545eb-kube-api-access-9bsvc\") pod \"coredns-7c65d6cfc9-qfcd4\" (UID: \"288be6c7-a3f8-4852-8e60-18dd898545eb\") " 
pod="kube-system/coredns-7c65d6cfc9-qfcd4" Jul 6 23:44:10.691617 kubelet[2620]: I0706 23:44:10.691525 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9k6n\" (UniqueName: \"kubernetes.io/projected/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-kube-api-access-q9k6n\") pod \"whisker-6d45f6c4cd-jznvb\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " pod="calico-system/whisker-6d45f6c4cd-jznvb" Jul 6 23:44:10.691617 kubelet[2620]: I0706 23:44:10.691541 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d453398f-d6b2-423a-89de-9abb6888d53a-calico-apiserver-certs\") pod \"calico-apiserver-67b6cbff4d-5nxk9\" (UID: \"d453398f-d6b2-423a-89de-9abb6888d53a\") " pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" Jul 6 23:44:10.691752 kubelet[2620]: I0706 23:44:10.691557 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b113d2-3320-44e5-b15f-258eb075b4ee-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-k5kn7\" (UID: \"e9b113d2-3320-44e5-b15f-258eb075b4ee\") " pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:10.692109 kubelet[2620]: I0706 23:44:10.692047 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e0e80fe-54df-43e2-a2ba-b36484172017-config-volume\") pod \"coredns-7c65d6cfc9-bzjfx\" (UID: \"8e0e80fe-54df-43e2-a2ba-b36484172017\") " pod="kube-system/coredns-7c65d6cfc9-bzjfx" Jul 6 23:44:10.790694 containerd[1497]: time="2025-07-06T23:44:10.790641993Z" level=error msg="Failed to destroy network for sandbox \"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:10.792597 systemd[1]: run-netns-cni\x2d24465192\x2dd5ca\x2de2bc\x2d853d\x2d9a02cc96e9aa.mount: Deactivated successfully. Jul 6 23:44:10.806318 containerd[1497]: time="2025-07-06T23:44:10.806191111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9sc4,Uid:4b809c8d-416a-43ae-b2d8-1c4702f886b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:10.824751 kubelet[2620]: E0706 23:44:10.824693 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:10.824872 kubelet[2620]: E0706 23:44:10.824775 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:10.824872 kubelet[2620]: E0706 23:44:10.824793 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9sc4" Jul 6 23:44:10.824872 kubelet[2620]: E0706 23:44:10.824830 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m9sc4_calico-system(4b809c8d-416a-43ae-b2d8-1c4702f886b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m9sc4_calico-system(4b809c8d-416a-43ae-b2d8-1c4702f886b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a450c0c50a891fb5fee05e34c06aac257d056a1a0b21c85e0b41dae92e7436b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m9sc4" podUID="4b809c8d-416a-43ae-b2d8-1c4702f886b2" Jul 6 23:44:10.881192 containerd[1497]: time="2025-07-06T23:44:10.881148632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qfcd4,Uid:288be6c7-a3f8-4852-8e60-18dd898545eb,Namespace:kube-system,Attempt:0,}" Jul 6 23:44:10.902360 containerd[1497]: time="2025-07-06T23:44:10.902248840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzjfx,Uid:8e0e80fe-54df-43e2-a2ba-b36484172017,Namespace:kube-system,Attempt:0,}" Jul 6 23:44:10.913986 containerd[1497]: time="2025-07-06T23:44:10.913949663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k5kn7,Uid:e9b113d2-3320-44e5-b15f-258eb075b4ee,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:10.921698 containerd[1497]: time="2025-07-06T23:44:10.921660773Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-w8zph,Uid:fc8416c7-bd5b-40b0-95a8-3be1e242a65b,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:44:10.930381 containerd[1497]: time="2025-07-06T23:44:10.930276569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf774cfd7-dkg74,Uid:c02b118e-f7bf-40f8-822a-eb78b9bda868,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:10.937872 containerd[1497]: time="2025-07-06T23:44:10.937820015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d45f6c4cd-jznvb,Uid:e99f84f0-d61f-40f9-ac56-5ed80e1ac00f,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:10.948559 containerd[1497]: time="2025-07-06T23:44:10.948528421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-5nxk9,Uid:d453398f-d6b2-423a-89de-9abb6888d53a,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:44:11.086122 containerd[1497]: time="2025-07-06T23:44:11.086023013Z" level=error msg="Failed to destroy network for sandbox \"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.087364 containerd[1497]: time="2025-07-06T23:44:11.087033229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k5kn7,Uid:e9b113d2-3320-44e5-b15f-258eb075b4ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.087502 kubelet[2620]: E0706 23:44:11.087294 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.087502 kubelet[2620]: E0706 23:44:11.087362 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:11.087502 kubelet[2620]: E0706 23:44:11.087381 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k5kn7" Jul 6 23:44:11.087598 kubelet[2620]: E0706 23:44:11.087436 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-k5kn7_calico-system(e9b113d2-3320-44e5-b15f-258eb075b4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-k5kn7_calico-system(e9b113d2-3320-44e5-b15f-258eb075b4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39df46ba1d22af5f539938b09b1f605209f42203b920871b487852291d2711fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-58fd7646b9-k5kn7" podUID="e9b113d2-3320-44e5-b15f-258eb075b4ee" Jul 6 23:44:11.092498 containerd[1497]: time="2025-07-06T23:44:11.092447316Z" level=error msg="Failed to destroy network for sandbox \"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.092951 containerd[1497]: time="2025-07-06T23:44:11.092920100Z" level=error msg="Failed to destroy network for sandbox \"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.093606 containerd[1497]: time="2025-07-06T23:44:11.093561426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzjfx,Uid:8e0e80fe-54df-43e2-a2ba-b36484172017,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.093846 kubelet[2620]: E0706 23:44:11.093796 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.093896 kubelet[2620]: E0706 23:44:11.093869 2620 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bzjfx" Jul 6 23:44:11.093968 kubelet[2620]: E0706 23:44:11.093945 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bzjfx" Jul 6 23:44:11.094054 kubelet[2620]: E0706 23:44:11.094018 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bzjfx_kube-system(8e0e80fe-54df-43e2-a2ba-b36484172017)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bzjfx_kube-system(8e0e80fe-54df-43e2-a2ba-b36484172017)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce978df512e24a995641833f51dd57d2ac186d6efefdb7dc1802d79554cec72e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bzjfx" podUID="8e0e80fe-54df-43e2-a2ba-b36484172017" Jul 6 23:44:11.094767 containerd[1497]: time="2025-07-06T23:44:11.094309167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qfcd4,Uid:288be6c7-a3f8-4852-8e60-18dd898545eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.094844 kubelet[2620]: E0706 23:44:11.094610 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.094844 kubelet[2620]: E0706 23:44:11.094646 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qfcd4" Jul 6 23:44:11.094844 kubelet[2620]: E0706 23:44:11.094661 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qfcd4" Jul 6 23:44:11.094929 kubelet[2620]: E0706 23:44:11.094687 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qfcd4_kube-system(288be6c7-a3f8-4852-8e60-18dd898545eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-qfcd4_kube-system(288be6c7-a3f8-4852-8e60-18dd898545eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db832a9a47516080fee06a3b24e41b63b67ee41c19058af47091176ef1f87f04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qfcd4" podUID="288be6c7-a3f8-4852-8e60-18dd898545eb" Jul 6 23:44:11.104529 containerd[1497]: time="2025-07-06T23:44:11.104440048Z" level=error msg="Failed to destroy network for sandbox \"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.105526 containerd[1497]: time="2025-07-06T23:44:11.105491190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf774cfd7-dkg74,Uid:c02b118e-f7bf-40f8-822a-eb78b9bda868,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.105919 kubelet[2620]: E0706 23:44:11.105858 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.105971 kubelet[2620]: E0706 23:44:11.105933 2620 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" Jul 6 23:44:11.106007 kubelet[2620]: E0706 23:44:11.105988 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" Jul 6 23:44:11.106142 kubelet[2620]: E0706 23:44:11.106032 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bf774cfd7-dkg74_calico-system(c02b118e-f7bf-40f8-822a-eb78b9bda868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bf774cfd7-dkg74_calico-system(c02b118e-f7bf-40f8-822a-eb78b9bda868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"219e7a32bff2c9141e40eade673a7ca4667d09036be22413e0174bcfc7a52fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" podUID="c02b118e-f7bf-40f8-822a-eb78b9bda868" Jul 6 23:44:11.107492 containerd[1497]: time="2025-07-06T23:44:11.107452533Z" level=error msg="Failed to destroy network for sandbox 
\"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.108288 containerd[1497]: time="2025-07-06T23:44:11.108259402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-5nxk9,Uid:d453398f-d6b2-423a-89de-9abb6888d53a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.108553 kubelet[2620]: E0706 23:44:11.108510 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.108625 kubelet[2620]: E0706 23:44:11.108568 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" Jul 6 23:44:11.108714 kubelet[2620]: E0706 23:44:11.108590 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" Jul 6 23:44:11.108754 kubelet[2620]: E0706 23:44:11.108732 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b6cbff4d-5nxk9_calico-apiserver(d453398f-d6b2-423a-89de-9abb6888d53a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b6cbff4d-5nxk9_calico-apiserver(d453398f-d6b2-423a-89de-9abb6888d53a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51130e1b7d217b60adb0249070e782116be0290ef9c1e5b1bdafabeecc6461c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" podUID="d453398f-d6b2-423a-89de-9abb6888d53a" Jul 6 23:44:11.112487 containerd[1497]: time="2025-07-06T23:44:11.112454446Z" level=error msg="Failed to destroy network for sandbox \"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.113963 containerd[1497]: time="2025-07-06T23:44:11.113922243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-w8zph,Uid:fc8416c7-bd5b-40b0-95a8-3be1e242a65b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.114626 kubelet[2620]: E0706 23:44:11.114590 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.114684 kubelet[2620]: E0706 23:44:11.114638 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" Jul 6 23:44:11.114684 kubelet[2620]: E0706 23:44:11.114671 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" Jul 6 23:44:11.114759 kubelet[2620]: E0706 23:44:11.114712 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b6cbff4d-w8zph_calico-apiserver(fc8416c7-bd5b-40b0-95a8-3be1e242a65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b6cbff4d-w8zph_calico-apiserver(fc8416c7-bd5b-40b0-95a8-3be1e242a65b)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"6ef3ccef5add4f4e9bd72b2a305c4e9f2f3b1a4387b7d5f0b8ca0eb1f78b0ad8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" podUID="fc8416c7-bd5b-40b0-95a8-3be1e242a65b" Jul 6 23:44:11.122469 containerd[1497]: time="2025-07-06T23:44:11.122422106Z" level=error msg="Failed to destroy network for sandbox \"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.123690 containerd[1497]: time="2025-07-06T23:44:11.123655631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d45f6c4cd-jznvb,Uid:e99f84f0-d61f-40f9-ac56-5ed80e1ac00f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.123911 kubelet[2620]: E0706 23:44:11.123873 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:44:11.123980 kubelet[2620]: E0706 23:44:11.123932 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d45f6c4cd-jznvb" Jul 6 23:44:11.123980 kubelet[2620]: E0706 23:44:11.123950 2620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d45f6c4cd-jznvb" Jul 6 23:44:11.124039 kubelet[2620]: E0706 23:44:11.123994 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d45f6c4cd-jznvb_calico-system(e99f84f0-d61f-40f9-ac56-5ed80e1ac00f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d45f6c4cd-jznvb_calico-system(e99f84f0-d61f-40f9-ac56-5ed80e1ac00f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c9c4241a39cbd641e31387cf420ff764df6eb66c248f87a1d0f3e650fb8eec3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d45f6c4cd-jznvb" podUID="e99f84f0-d61f-40f9-ac56-5ed80e1ac00f" Jul 6 23:44:14.069132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719123783.mount: Deactivated successfully. 
Jul 6 23:44:14.231851 containerd[1497]: time="2025-07-06T23:44:14.205539705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:44:14.231851 containerd[1497]: time="2025-07-06T23:44:14.219639188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:14.232611 containerd[1497]: time="2025-07-06T23:44:14.231006738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.582541553s" Jul 6 23:44:14.232611 containerd[1497]: time="2025-07-06T23:44:14.232456755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:44:14.232611 containerd[1497]: time="2025-07-06T23:44:14.232550606Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:14.233116 containerd[1497]: time="2025-07-06T23:44:14.233004902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:14.249276 containerd[1497]: time="2025-07-06T23:44:14.249232485Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:44:14.257434 containerd[1497]: time="2025-07-06T23:44:14.256690436Z" level=info msg="Container 
f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:14.273806 containerd[1497]: time="2025-07-06T23:44:14.273753282Z" level=info msg="CreateContainer within sandbox \"b1f6fa327ef10d5ad40fa72a013beb0ff13d7b6f3b89a368d46c015ca90a9bb5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\"" Jul 6 23:44:14.275436 containerd[1497]: time="2025-07-06T23:44:14.274280866Z" level=info msg="StartContainer for \"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\"" Jul 6 23:44:14.277502 containerd[1497]: time="2025-07-06T23:44:14.277469496Z" level=info msg="connecting to shim f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773" address="unix:///run/containerd/s/a270fd1b7172d2617fdea856e56d4689669c0663f40c9728bfa949bcde80ce2f" protocol=ttrpc version=3 Jul 6 23:44:14.297607 systemd[1]: Started cri-containerd-f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773.scope - libcontainer container f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773. 
Jul 6 23:44:14.462632 containerd[1497]: time="2025-07-06T23:44:14.462564317Z" level=info msg="StartContainer for \"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\" returns successfully" Jul 6 23:44:14.718999 kubelet[2620]: I0706 23:44:14.718862 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2zffg" podStartSLOduration=1.872558598 podStartE2EDuration="13.718843557s" podCreationTimestamp="2025-07-06 23:44:01 +0000 UTC" firstStartedPulling="2025-07-06 23:44:02.387378343 +0000 UTC m=+18.967730041" lastFinishedPulling="2025-07-06 23:44:14.233663262 +0000 UTC m=+30.814015000" observedRunningTime="2025-07-06 23:44:14.717663533 +0000 UTC m=+31.298015311" watchObservedRunningTime="2025-07-06 23:44:14.718843557 +0000 UTC m=+31.299195295" Jul 6 23:44:14.843435 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:44:14.843556 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jul 6 23:44:15.121246 kubelet[2620]: I0706 23:44:15.121127 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-ca-bundle\") pod \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " Jul 6 23:44:15.122305 kubelet[2620]: I0706 23:44:15.121672 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-backend-key-pair\") pod \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " Jul 6 23:44:15.122524 kubelet[2620]: I0706 23:44:15.121713 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9k6n\" (UniqueName: \"kubernetes.io/projected/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-kube-api-access-q9k6n\") pod \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\" (UID: \"e99f84f0-d61f-40f9-ac56-5ed80e1ac00f\") " Jul 6 23:44:15.137013 kubelet[2620]: I0706 23:44:15.136867 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f" (UID: "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:44:15.147555 systemd[1]: var-lib-kubelet-pods-e99f84f0\x2dd61f\x2d40f9\x2dac56\x2d5ed80e1ac00f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 6 23:44:15.148713 kubelet[2620]: I0706 23:44:15.147691 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f" (UID: "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:44:15.148942 kubelet[2620]: I0706 23:44:15.148883 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-kube-api-access-q9k6n" (OuterVolumeSpecName: "kube-api-access-q9k6n") pod "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f" (UID: "e99f84f0-d61f-40f9-ac56-5ed80e1ac00f"). InnerVolumeSpecName "kube-api-access-q9k6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:44:15.154189 systemd[1]: var-lib-kubelet-pods-e99f84f0\x2dd61f\x2d40f9\x2dac56\x2d5ed80e1ac00f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9k6n.mount: Deactivated successfully. 
Jul 6 23:44:15.223331 kubelet[2620]: I0706 23:44:15.223289 2620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9k6n\" (UniqueName: \"kubernetes.io/projected/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-kube-api-access-q9k6n\") on node \"localhost\" DevicePath \"\"" Jul 6 23:44:15.223331 kubelet[2620]: I0706 23:44:15.223329 2620 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:44:15.223331 kubelet[2620]: I0706 23:44:15.223339 2620 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:44:15.523604 systemd[1]: Removed slice kubepods-besteffort-pode99f84f0_d61f_40f9_ac56_5ed80e1ac00f.slice - libcontainer container kubepods-besteffort-pode99f84f0_d61f_40f9_ac56_5ed80e1ac00f.slice. Jul 6 23:44:15.661010 kubelet[2620]: I0706 23:44:15.660977 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:15.734183 systemd[1]: Created slice kubepods-besteffort-pod724f52e6_a7f1_45fb_bb0d_6651ed4bebf9.slice - libcontainer container kubepods-besteffort-pod724f52e6_a7f1_45fb_bb0d_6651ed4bebf9.slice. 
Jul 6 23:44:15.827344 kubelet[2620]: I0706 23:44:15.827195 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9r4\" (UniqueName: \"kubernetes.io/projected/724f52e6-a7f1-45fb-bb0d-6651ed4bebf9-kube-api-access-6w9r4\") pod \"whisker-86484ff98-knzxl\" (UID: \"724f52e6-a7f1-45fb-bb0d-6651ed4bebf9\") " pod="calico-system/whisker-86484ff98-knzxl" Jul 6 23:44:15.827344 kubelet[2620]: I0706 23:44:15.827241 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/724f52e6-a7f1-45fb-bb0d-6651ed4bebf9-whisker-ca-bundle\") pod \"whisker-86484ff98-knzxl\" (UID: \"724f52e6-a7f1-45fb-bb0d-6651ed4bebf9\") " pod="calico-system/whisker-86484ff98-knzxl" Jul 6 23:44:15.827344 kubelet[2620]: I0706 23:44:15.827266 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/724f52e6-a7f1-45fb-bb0d-6651ed4bebf9-whisker-backend-key-pair\") pod \"whisker-86484ff98-knzxl\" (UID: \"724f52e6-a7f1-45fb-bb0d-6651ed4bebf9\") " pod="calico-system/whisker-86484ff98-knzxl" Jul 6 23:44:16.038465 containerd[1497]: time="2025-07-06T23:44:16.038415560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86484ff98-knzxl,Uid:724f52e6-a7f1-45fb-bb0d-6651ed4bebf9,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:16.451212 systemd-networkd[1433]: cali35849594f0c: Link UP Jul 6 23:44:16.451761 systemd-networkd[1433]: cali35849594f0c: Gained carrier Jul 6 23:44:16.468347 containerd[1497]: 2025-07-06 23:44:16.060 [INFO][3721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:16.468347 containerd[1497]: 2025-07-06 23:44:16.134 [INFO][3721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86484ff98--knzxl-eth0 
whisker-86484ff98- calico-system 724f52e6-a7f1-45fb-bb0d-6651ed4bebf9 849 0 2025-07-06 23:44:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86484ff98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86484ff98-knzxl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35849594f0c [] [] }} ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-" Jul 6 23:44:16.468347 containerd[1497]: 2025-07-06 23:44:16.134 [INFO][3721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.468347 containerd[1497]: 2025-07-06 23:44:16.397 [INFO][3735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" HandleID="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Workload="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.397 [INFO][3735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" HandleID="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Workload="localhost-k8s-whisker--86484ff98--knzxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a6d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86484ff98-knzxl", "timestamp":"2025-07-06 23:44:16.397574993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.397 [INFO][3735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.397 [INFO][3735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.398 [INFO][3735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.411 [INFO][3735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" host="localhost" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.417 [INFO][3735] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.422 [INFO][3735] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.424 [INFO][3735] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.426 [INFO][3735] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:16.468660 containerd[1497]: 2025-07-06 23:44:16.426 [INFO][3735] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" host="localhost" Jul 6 23:44:16.468899 containerd[1497]: 2025-07-06 23:44:16.428 [INFO][3735] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72 Jul 6 23:44:16.468899 containerd[1497]: 
2025-07-06 23:44:16.432 [INFO][3735] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" host="localhost" Jul 6 23:44:16.468899 containerd[1497]: 2025-07-06 23:44:16.438 [INFO][3735] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" host="localhost" Jul 6 23:44:16.468899 containerd[1497]: 2025-07-06 23:44:16.438 [INFO][3735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" host="localhost" Jul 6 23:44:16.468899 containerd[1497]: 2025-07-06 23:44:16.438 [INFO][3735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:16.468899 containerd[1497]: 2025-07-06 23:44:16.438 [INFO][3735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" HandleID="k8s-pod-network.4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Workload="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.469026 containerd[1497]: 2025-07-06 23:44:16.441 [INFO][3721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86484ff98--knzxl-eth0", GenerateName:"whisker-86484ff98-", Namespace:"calico-system", SelfLink:"", UID:"724f52e6-a7f1-45fb-bb0d-6651ed4bebf9", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 
15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86484ff98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86484ff98-knzxl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35849594f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:16.469026 containerd[1497]: 2025-07-06 23:44:16.441 [INFO][3721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.469097 containerd[1497]: 2025-07-06 23:44:16.441 [INFO][3721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35849594f0c ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.469097 containerd[1497]: 2025-07-06 23:44:16.451 [INFO][3721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" 
WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.469137 containerd[1497]: 2025-07-06 23:44:16.452 [INFO][3721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86484ff98--knzxl-eth0", GenerateName:"whisker-86484ff98-", Namespace:"calico-system", SelfLink:"", UID:"724f52e6-a7f1-45fb-bb0d-6651ed4bebf9", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86484ff98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72", Pod:"whisker-86484ff98-knzxl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35849594f0c", MAC:"4a:4d:de:bf:6f:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:16.469183 containerd[1497]: 2025-07-06 23:44:16.461 [INFO][3721] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" Namespace="calico-system" Pod="whisker-86484ff98-knzxl" WorkloadEndpoint="localhost-k8s-whisker--86484ff98--knzxl-eth0" Jul 6 23:44:16.571665 containerd[1497]: time="2025-07-06T23:44:16.571607633Z" level=info msg="connecting to shim 4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72" address="unix:///run/containerd/s/806c71de87ecc014756b02512782f65af5520834ef26bdf59ed6dfdb43a7f5ce" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:16.599613 systemd[1]: Started cri-containerd-4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72.scope - libcontainer container 4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72. Jul 6 23:44:16.612446 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:16.633176 containerd[1497]: time="2025-07-06T23:44:16.633132890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86484ff98-knzxl,Uid:724f52e6-a7f1-45fb-bb0d-6651ed4bebf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72\"" Jul 6 23:44:16.634977 containerd[1497]: time="2025-07-06T23:44:16.634889971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:44:17.411590 containerd[1497]: time="2025-07-06T23:44:17.411521393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:17.412112 containerd[1497]: time="2025-07-06T23:44:17.412075655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:44:17.413767 containerd[1497]: time="2025-07-06T23:44:17.413719118Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:17.416318 containerd[1497]: time="2025-07-06T23:44:17.416174430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:17.416772 containerd[1497]: time="2025-07-06T23:44:17.416744454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 781.805637ms" Jul 6 23:44:17.416848 containerd[1497]: time="2025-07-06T23:44:17.416777657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:44:17.422087 containerd[1497]: time="2025-07-06T23:44:17.422036002Z" level=info msg="CreateContainer within sandbox \"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:44:17.437429 containerd[1497]: time="2025-07-06T23:44:17.436422080Z" level=info msg="Container 4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:17.451672 containerd[1497]: time="2025-07-06T23:44:17.451436228Z" level=info msg="CreateContainer within sandbox \"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f\"" Jul 6 23:44:17.461449 containerd[1497]: time="2025-07-06T23:44:17.460631290Z" level=info msg="StartContainer for 
\"4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f\"" Jul 6 23:44:17.462129 containerd[1497]: time="2025-07-06T23:44:17.462102533Z" level=info msg="connecting to shim 4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f" address="unix:///run/containerd/s/806c71de87ecc014756b02512782f65af5520834ef26bdf59ed6dfdb43a7f5ce" protocol=ttrpc version=3 Jul 6 23:44:17.493605 systemd[1]: Started cri-containerd-4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f.scope - libcontainer container 4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f. Jul 6 23:44:17.511040 kubelet[2620]: I0706 23:44:17.510988 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99f84f0-d61f-40f9-ac56-5ed80e1ac00f" path="/var/lib/kubelet/pods/e99f84f0-d61f-40f9-ac56-5ed80e1ac00f/volumes" Jul 6 23:44:17.573781 containerd[1497]: time="2025-07-06T23:44:17.573730256Z" level=info msg="StartContainer for \"4d41a495cea8eb04887a734dd935d59a9e45526aa167d613c4c76abc212b624f\" returns successfully" Jul 6 23:44:17.577347 containerd[1497]: time="2025-07-06T23:44:17.577290572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:44:18.377539 systemd-networkd[1433]: cali35849594f0c: Gained IPv6LL Jul 6 23:44:18.781244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060703068.mount: Deactivated successfully. 
Jul 6 23:44:18.801023 containerd[1497]: time="2025-07-06T23:44:18.800972018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:18.801509 containerd[1497]: time="2025-07-06T23:44:18.801477272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:44:18.802253 containerd[1497]: time="2025-07-06T23:44:18.802210751Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:18.804817 containerd[1497]: time="2025-07-06T23:44:18.804774827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.227446731s" Jul 6 23:44:18.804817 containerd[1497]: time="2025-07-06T23:44:18.804814431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:44:18.810101 containerd[1497]: time="2025-07-06T23:44:18.810046074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:18.817142 containerd[1497]: time="2025-07-06T23:44:18.817100554Z" level=info msg="CreateContainer within sandbox \"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:44:18.833938 
containerd[1497]: time="2025-07-06T23:44:18.833495238Z" level=info msg="Container bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:18.840304 containerd[1497]: time="2025-07-06T23:44:18.840247045Z" level=info msg="CreateContainer within sandbox \"4c0315175a49524bf5b7fd78127e6c060cd55455861fab104f8e0092555c3b72\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f\"" Jul 6 23:44:18.841432 containerd[1497]: time="2025-07-06T23:44:18.841000926Z" level=info msg="StartContainer for \"bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f\"" Jul 6 23:44:18.842376 containerd[1497]: time="2025-07-06T23:44:18.842330189Z" level=info msg="connecting to shim bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f" address="unix:///run/containerd/s/806c71de87ecc014756b02512782f65af5520834ef26bdf59ed6dfdb43a7f5ce" protocol=ttrpc version=3 Jul 6 23:44:18.861676 systemd[1]: Started cri-containerd-bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f.scope - libcontainer container bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f. 
Jul 6 23:44:18.896709 containerd[1497]: time="2025-07-06T23:44:18.896662678Z" level=info msg="StartContainer for \"bc81df54d8f3db339d7f6fe2c8f328aa3c4946cfcb0872796b56862307bfc64f\" returns successfully" Jul 6 23:44:19.700128 kubelet[2620]: I0706 23:44:19.700057 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86484ff98-knzxl" podStartSLOduration=2.529074748 podStartE2EDuration="4.70002284s" podCreationTimestamp="2025-07-06 23:44:15 +0000 UTC" firstStartedPulling="2025-07-06 23:44:16.634568215 +0000 UTC m=+33.214919953" lastFinishedPulling="2025-07-06 23:44:18.805516347 +0000 UTC m=+35.385868045" observedRunningTime="2025-07-06 23:44:19.699624718 +0000 UTC m=+36.279976456" watchObservedRunningTime="2025-07-06 23:44:19.70002284 +0000 UTC m=+36.280374578" Jul 6 23:44:22.529248 containerd[1497]: time="2025-07-06T23:44:22.529207386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-5nxk9,Uid:d453398f-d6b2-423a-89de-9abb6888d53a,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:44:22.529670 containerd[1497]: time="2025-07-06T23:44:22.529438288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k5kn7,Uid:e9b113d2-3320-44e5-b15f-258eb075b4ee,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:22.804312 systemd-networkd[1433]: calif34311717c3: Link UP Jul 6 23:44:22.806226 systemd-networkd[1433]: calif34311717c3: Gained carrier Jul 6 23:44:22.823257 containerd[1497]: 2025-07-06 23:44:22.656 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:22.823257 containerd[1497]: 2025-07-06 23:44:22.706 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0 calico-apiserver-67b6cbff4d- calico-apiserver d453398f-d6b2-423a-89de-9abb6888d53a 790 0 2025-07-06 23:43:58 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b6cbff4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b6cbff4d-5nxk9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif34311717c3 [] [] }} ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-" Jul 6 23:44:22.823257 containerd[1497]: 2025-07-06 23:44:22.707 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.823257 containerd[1497]: 2025-07-06 23:44:22.747 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" HandleID="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Workload="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.747 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" HandleID="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Workload="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050cf20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b6cbff4d-5nxk9", "timestamp":"2025-07-06 23:44:22.747345426 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.747 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.747 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.747 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.758 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" host="localhost" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.766 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.773 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.777 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.780 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:22.823928 containerd[1497]: 2025-07-06 23:44:22.780 [INFO][4135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" host="localhost" Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.782 [INFO][4135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295 
Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.787 [INFO][4135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" host="localhost" Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.793 [INFO][4135] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" host="localhost" Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.793 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" host="localhost" Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.793 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:22.824145 containerd[1497]: 2025-07-06 23:44:22.793 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" HandleID="k8s-pod-network.eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Workload="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.824258 containerd[1497]: 2025-07-06 23:44:22.800 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0", GenerateName:"calico-apiserver-67b6cbff4d-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"d453398f-d6b2-423a-89de-9abb6888d53a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b6cbff4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b6cbff4d-5nxk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif34311717c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:22.824316 containerd[1497]: 2025-07-06 23:44:22.800 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.824316 containerd[1497]: 2025-07-06 23:44:22.800 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif34311717c3 ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.824316 containerd[1497]: 
2025-07-06 23:44:22.805 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.824434 containerd[1497]: 2025-07-06 23:44:22.806 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0", GenerateName:"calico-apiserver-67b6cbff4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d453398f-d6b2-423a-89de-9abb6888d53a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b6cbff4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295", Pod:"calico-apiserver-67b6cbff4d-5nxk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif34311717c3", MAC:"2e:83:02:9d:fb:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:22.824492 containerd[1497]: 2025-07-06 23:44:22.820 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-5nxk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--5nxk9-eth0" Jul 6 23:44:22.873223 containerd[1497]: time="2025-07-06T23:44:22.873161873Z" level=info msg="connecting to shim eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295" address="unix:///run/containerd/s/99f8f84a02af3603ccae2c62ce1862b3c4a0552333b488f7fa9d19031f09b49f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:22.917588 systemd-networkd[1433]: cali2bf07b22690: Link UP Jul 6 23:44:22.920456 systemd-networkd[1433]: cali2bf07b22690: Gained carrier Jul 6 23:44:22.920883 systemd[1]: Started cri-containerd-eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295.scope - libcontainer container eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295. 
Jul 6 23:44:22.947642 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:22.948962 containerd[1497]: 2025-07-06 23:44:22.658 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:22.948962 containerd[1497]: 2025-07-06 23:44:22.707 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0 goldmane-58fd7646b9- calico-system e9b113d2-3320-44e5-b15f-258eb075b4ee 788 0 2025-07-06 23:44:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-k5kn7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2bf07b22690 [] [] }} ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-" Jul 6 23:44:22.948962 containerd[1497]: 2025-07-06 23:44:22.707 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.948962 containerd[1497]: 2025-07-06 23:44:22.763 [INFO][4137] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" HandleID="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Workload="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.763 [INFO][4137] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" HandleID="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Workload="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-k5kn7", "timestamp":"2025-07-06 23:44:22.763309099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.763 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.794 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.795 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.860 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" host="localhost" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.868 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.878 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.881 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.887 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 6 23:44:22.949183 containerd[1497]: 2025-07-06 23:44:22.887 [INFO][4137] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" host="localhost" Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.892 [INFO][4137] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.897 [INFO][4137] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" host="localhost" Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.904 [INFO][4137] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" host="localhost" Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.904 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" host="localhost" Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.905 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:44:22.949654 containerd[1497]: 2025-07-06 23:44:22.905 [INFO][4137] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" HandleID="k8s-pod-network.2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Workload="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.949791 containerd[1497]: 2025-07-06 23:44:22.911 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e9b113d2-3320-44e5-b15f-258eb075b4ee", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-k5kn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2bf07b22690", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:22.949791 containerd[1497]: 2025-07-06 23:44:22.911 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.949873 containerd[1497]: 2025-07-06 23:44:22.911 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bf07b22690 ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.949873 containerd[1497]: 2025-07-06 23:44:22.928 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.949916 containerd[1497]: 2025-07-06 23:44:22.932 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e9b113d2-3320-44e5-b15f-258eb075b4ee", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 1, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f", Pod:"goldmane-58fd7646b9-k5kn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2bf07b22690", MAC:"c2:fd:4f:3f:57:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:22.949970 containerd[1497]: 2025-07-06 23:44:22.944 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" Namespace="calico-system" Pod="goldmane-58fd7646b9-k5kn7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--k5kn7-eth0" Jul 6 23:44:22.987998 containerd[1497]: time="2025-07-06T23:44:22.987952196Z" level=info msg="connecting to shim 2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f" address="unix:///run/containerd/s/6febe6075cee98e963877ad4fc7d1ab5c7d95089f5e694fb28099ec146562b83" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:22.990887 containerd[1497]: time="2025-07-06T23:44:22.990694096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-5nxk9,Uid:d453398f-d6b2-423a-89de-9abb6888d53a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295\"" Jul 6 23:44:22.996362 containerd[1497]: time="2025-07-06T23:44:22.994797285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:44:23.022620 systemd[1]: Started cri-containerd-2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f.scope - libcontainer container 2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f. Jul 6 23:44:23.034688 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:23.057427 containerd[1497]: time="2025-07-06T23:44:23.057067023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k5kn7,Uid:e9b113d2-3320-44e5-b15f-258eb075b4ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f\"" Jul 6 23:44:24.356133 systemd[1]: Started sshd@7-10.0.0.127:22-10.0.0.1:60322.service - OpenSSH per-connection server daemon (10.0.0.1:60322). Jul 6 23:44:24.422877 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 60322 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:24.425124 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:24.432046 systemd-logind[1475]: New session 8 of user core. Jul 6 23:44:24.441654 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:44:24.506785 containerd[1497]: time="2025-07-06T23:44:24.506741267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf774cfd7-dkg74,Uid:c02b118e-f7bf-40f8-822a-eb78b9bda868,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:24.507834 containerd[1497]: time="2025-07-06T23:44:24.507803082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-w8zph,Uid:fc8416c7-bd5b-40b0-95a8-3be1e242a65b,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:44:24.579616 containerd[1497]: time="2025-07-06T23:44:24.579558226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:24.579807 containerd[1497]: time="2025-07-06T23:44:24.579781526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:44:24.580348 containerd[1497]: time="2025-07-06T23:44:24.580191642Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:24.586784 containerd[1497]: time="2025-07-06T23:44:24.586727784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:24.587429 containerd[1497]: time="2025-07-06T23:44:24.587379602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.591009648s" Jul 6 23:44:24.587794 containerd[1497]: time="2025-07-06T23:44:24.587717152Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:44:24.605040 containerd[1497]: time="2025-07-06T23:44:24.604997769Z" level=info msg="CreateContainer within sandbox \"eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:44:24.614643 containerd[1497]: time="2025-07-06T23:44:24.612470834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:44:24.621671 containerd[1497]: time="2025-07-06T23:44:24.621601567Z" level=info msg="Container de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:24.633657 containerd[1497]: time="2025-07-06T23:44:24.633609475Z" level=info msg="CreateContainer within sandbox \"eb934cff6563163c719d6d73bdf405b56593b79b4c1aefaaa6f7ba692f11e295\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386\"" Jul 6 23:44:24.635201 containerd[1497]: time="2025-07-06T23:44:24.635147852Z" level=info msg="StartContainer for \"de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386\"" Jul 6 23:44:24.636314 containerd[1497]: time="2025-07-06T23:44:24.636279032Z" level=info msg="connecting to shim de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386" address="unix:///run/containerd/s/99f8f84a02af3603ccae2c62ce1862b3c4a0552333b488f7fa9d19031f09b49f" protocol=ttrpc version=3 Jul 6 23:44:24.682635 systemd[1]: Started cri-containerd-de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386.scope - libcontainer container de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386. 
Jul 6 23:44:24.720536 systemd-networkd[1433]: calie54daba75aa: Link UP Jul 6 23:44:24.721524 systemd-networkd[1433]: calie54daba75aa: Gained carrier Jul 6 23:44:24.742702 containerd[1497]: 2025-07-06 23:44:24.589 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:24.742702 containerd[1497]: 2025-07-06 23:44:24.616 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0 calico-apiserver-67b6cbff4d- calico-apiserver fc8416c7-bd5b-40b0-95a8-3be1e242a65b 785 0 2025-07-06 23:43:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b6cbff4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b6cbff4d-w8zph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie54daba75aa [] [] }} ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-" Jul 6 23:44:24.742702 containerd[1497]: 2025-07-06 23:44:24.616 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.742702 containerd[1497]: 2025-07-06 23:44:24.656 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" HandleID="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" 
Workload="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.657 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" HandleID="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Workload="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b6cbff4d-w8zph", "timestamp":"2025-07-06 23:44:24.656958352 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.657 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.657 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.657 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.671 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" host="localhost" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.677 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.692 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.694 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.696 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:24.743031 containerd[1497]: 2025-07-06 23:44:24.696 [INFO][4357] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" host="localhost" Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.698 [INFO][4357] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58 Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.702 [INFO][4357] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" host="localhost" Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.712 [INFO][4357] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" host="localhost" Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.712 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" host="localhost" Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.712 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:24.743239 containerd[1497]: 2025-07-06 23:44:24.713 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" HandleID="k8s-pod-network.d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Workload="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.743438 containerd[1497]: 2025-07-06 23:44:24.716 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0", GenerateName:"calico-apiserver-67b6cbff4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc8416c7-bd5b-40b0-95a8-3be1e242a65b", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b6cbff4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b6cbff4d-w8zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54daba75aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:24.743561 containerd[1497]: 2025-07-06 23:44:24.716 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.743561 containerd[1497]: 2025-07-06 23:44:24.716 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie54daba75aa ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.743561 containerd[1497]: 2025-07-06 23:44:24.722 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.743647 containerd[1497]: 2025-07-06 23:44:24.722 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0", GenerateName:"calico-apiserver-67b6cbff4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc8416c7-bd5b-40b0-95a8-3be1e242a65b", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b6cbff4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58", Pod:"calico-apiserver-67b6cbff4d-w8zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54daba75aa", MAC:"7a:0c:a0:9f:d3:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:24.743696 containerd[1497]: 2025-07-06 23:44:24.739 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" Namespace="calico-apiserver" Pod="calico-apiserver-67b6cbff4d-w8zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b6cbff4d--w8zph-eth0" Jul 6 23:44:24.757800 containerd[1497]: time="2025-07-06T23:44:24.757760721Z" level=info msg="StartContainer for \"de2a7d48794ddaf3635d4e18996db73494f4965ac1996babed873420f3355386\" returns successfully" Jul 6 23:44:24.773422 containerd[1497]: time="2025-07-06T23:44:24.773317745Z" level=info msg="connecting to shim d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58" address="unix:///run/containerd/s/85faccea42fb4fe3497748b17f4386a49362f57bd2a6128eac05a35c2b6184ab" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:24.789301 sshd[4313]: Connection closed by 10.0.0.1 port 60322 Jul 6 23:44:24.790230 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:24.795568 systemd[1]: sshd@7-10.0.0.127:22-10.0.0.1:60322.service: Deactivated successfully. Jul 6 23:44:24.798068 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:44:24.800037 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:44:24.802451 systemd-logind[1475]: Removed session 8. Jul 6 23:44:24.821827 systemd[1]: Started cri-containerd-d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58.scope - libcontainer container d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58. 
Jul 6 23:44:24.840690 systemd-networkd[1433]: calicf96d73f1ab: Link UP Jul 6 23:44:24.840973 systemd-networkd[1433]: calicf96d73f1ab: Gained carrier Jul 6 23:44:24.843387 systemd-networkd[1433]: calif34311717c3: Gained IPv6LL Jul 6 23:44:24.850925 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:24.863125 containerd[1497]: 2025-07-06 23:44:24.599 [INFO][4322] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:24.863125 containerd[1497]: 2025-07-06 23:44:24.623 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0 calico-kube-controllers-6bf774cfd7- calico-system c02b118e-f7bf-40f8-822a-eb78b9bda868 789 0 2025-07-06 23:44:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bf774cfd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bf774cfd7-dkg74 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf96d73f1ab [] [] }} ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-" Jul 6 23:44:24.863125 containerd[1497]: 2025-07-06 23:44:24.623 [INFO][4322] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.863125 containerd[1497]: 2025-07-06 23:44:24.671 [INFO][4364] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" HandleID="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Workload="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.671 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" HandleID="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Workload="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bf774cfd7-dkg74", "timestamp":"2025-07-06 23:44:24.671246743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.672 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.713 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.713 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.772 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" host="localhost" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.780 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.795 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.802 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.805 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:24.863405 containerd[1497]: 2025-07-06 23:44:24.805 [INFO][4364] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" host="localhost" Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.808 [INFO][4364] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83 Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.816 [INFO][4364] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" host="localhost" Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.829 [INFO][4364] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" host="localhost" Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.829 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" host="localhost" Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.829 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:24.863799 containerd[1497]: 2025-07-06 23:44:24.829 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" HandleID="k8s-pod-network.a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Workload="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.863966 containerd[1497]: 2025-07-06 23:44:24.835 [INFO][4322] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0", GenerateName:"calico-kube-controllers-6bf774cfd7-", Namespace:"calico-system", SelfLink:"", UID:"c02b118e-f7bf-40f8-822a-eb78b9bda868", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf774cfd7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bf774cfd7-dkg74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf96d73f1ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:24.864043 containerd[1497]: 2025-07-06 23:44:24.836 [INFO][4322] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.864043 containerd[1497]: 2025-07-06 23:44:24.836 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf96d73f1ab ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.864043 containerd[1497]: 2025-07-06 23:44:24.841 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.864133 containerd[1497]: 2025-07-06 
23:44:24.841 [INFO][4322] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0", GenerateName:"calico-kube-controllers-6bf774cfd7-", Namespace:"calico-system", SelfLink:"", UID:"c02b118e-f7bf-40f8-822a-eb78b9bda868", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf774cfd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83", Pod:"calico-kube-controllers-6bf774cfd7-dkg74", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf96d73f1ab", MAC:"56:5e:c7:3e:b7:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:24.864200 containerd[1497]: 2025-07-06 
23:44:24.860 [INFO][4322] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" Namespace="calico-system" Pod="calico-kube-controllers-6bf774cfd7-dkg74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf774cfd7--dkg74-eth0" Jul 6 23:44:24.894784 containerd[1497]: time="2025-07-06T23:44:24.894444121Z" level=info msg="connecting to shim a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83" address="unix:///run/containerd/s/679f6079b614daa77b6271df64580fdfcdb533c6a6badf4dc4df6984d3572a46" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:24.899068 containerd[1497]: time="2025-07-06T23:44:24.899007847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b6cbff4d-w8zph,Uid:fc8416c7-bd5b-40b0-95a8-3be1e242a65b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58\"" Jul 6 23:44:24.903545 containerd[1497]: time="2025-07-06T23:44:24.903373156Z" level=info msg="CreateContainer within sandbox \"d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:44:24.905974 systemd-networkd[1433]: cali2bf07b22690: Gained IPv6LL Jul 6 23:44:24.927426 containerd[1497]: time="2025-07-06T23:44:24.926450049Z" level=info msg="Container 0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:24.935951 containerd[1497]: time="2025-07-06T23:44:24.935905730Z" level=info msg="CreateContainer within sandbox \"d183ee9720fcc6ae3eb1d19ab8e174b776cdcacf94b3a29638ea6a0d77d92d58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa\"" Jul 6 23:44:24.936775 containerd[1497]: time="2025-07-06T23:44:24.936753006Z" level=info msg="StartContainer for 
\"0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa\"" Jul 6 23:44:24.938095 containerd[1497]: time="2025-07-06T23:44:24.938067482Z" level=info msg="connecting to shim 0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa" address="unix:///run/containerd/s/85faccea42fb4fe3497748b17f4386a49362f57bd2a6128eac05a35c2b6184ab" protocol=ttrpc version=3 Jul 6 23:44:24.944679 systemd[1]: Started cri-containerd-a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83.scope - libcontainer container a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83. Jul 6 23:44:24.956474 systemd[1]: Started cri-containerd-0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa.scope - libcontainer container 0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa. Jul 6 23:44:24.970466 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:25.009249 containerd[1497]: time="2025-07-06T23:44:25.009206387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf774cfd7-dkg74,Uid:c02b118e-f7bf-40f8-822a-eb78b9bda868,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83\"" Jul 6 23:44:25.023982 containerd[1497]: time="2025-07-06T23:44:25.023939577Z" level=info msg="StartContainer for \"0e68b03dc18dafa126b4243d61256b2c1e4b6fe27a4102692c6f9ddbeb6989fa\" returns successfully" Jul 6 23:44:25.510408 containerd[1497]: time="2025-07-06T23:44:25.510184926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzjfx,Uid:8e0e80fe-54df-43e2-a2ba-b36484172017,Namespace:kube-system,Attempt:0,}" Jul 6 23:44:25.510857 containerd[1497]: time="2025-07-06T23:44:25.510568760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qfcd4,Uid:288be6c7-a3f8-4852-8e60-18dd898545eb,Namespace:kube-system,Attempt:0,}" Jul 6 
23:44:25.943053 systemd-networkd[1433]: cali609528db4e4: Link UP Jul 6 23:44:25.944965 systemd-networkd[1433]: cali609528db4e4: Gained carrier Jul 6 23:44:25.989659 kubelet[2620]: I0706 23:44:25.989433 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b6cbff4d-5nxk9" podStartSLOduration=26.394336525 podStartE2EDuration="27.989410311s" podCreationTimestamp="2025-07-06 23:43:58 +0000 UTC" firstStartedPulling="2025-07-06 23:44:22.993558607 +0000 UTC m=+39.573910345" lastFinishedPulling="2025-07-06 23:44:24.588632393 +0000 UTC m=+41.168984131" observedRunningTime="2025-07-06 23:44:25.987594674 +0000 UTC m=+42.567946372" watchObservedRunningTime="2025-07-06 23:44:25.989410311 +0000 UTC m=+42.569762049" Jul 6 23:44:25.990094 kubelet[2620]: I0706 23:44:25.989887 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b6cbff4d-w8zph" podStartSLOduration=27.989874151 podStartE2EDuration="27.989874151s" podCreationTimestamp="2025-07-06 23:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:25.845047068 +0000 UTC m=+42.425398806" watchObservedRunningTime="2025-07-06 23:44:25.989874151 +0000 UTC m=+42.570225889" Jul 6 23:44:26.007987 containerd[1497]: 2025-07-06 23:44:25.652 [INFO][4580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:26.007987 containerd[1497]: 2025-07-06 23:44:25.667 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0 coredns-7c65d6cfc9- kube-system 8e0e80fe-54df-43e2-a2ba-b36484172017 786 0 2025-07-06 23:43:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] 
[] []} {k8s localhost coredns-7c65d6cfc9-bzjfx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali609528db4e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-" Jul 6 23:44:26.007987 containerd[1497]: 2025-07-06 23:44:25.667 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.007987 containerd[1497]: 2025-07-06 23:44:25.720 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" HandleID="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Workload="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.721 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" HandleID="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Workload="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001af3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bzjfx", "timestamp":"2025-07-06 23:44:25.720729473 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.721 [INFO][4612] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.721 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.721 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.733 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" host="localhost" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.738 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.745 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.748 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.753 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.008313 containerd[1497]: 2025-07-06 23:44:25.753 [INFO][4612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" host="localhost" Jul 6 23:44:26.008675 containerd[1497]: 2025-07-06 23:44:25.756 [INFO][4612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7 Jul 6 23:44:26.008675 containerd[1497]: 2025-07-06 23:44:25.820 [INFO][4612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" host="localhost" Jul 6 23:44:26.008675 
containerd[1497]: 2025-07-06 23:44:25.925 [INFO][4612] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" host="localhost" Jul 6 23:44:26.008675 containerd[1497]: 2025-07-06 23:44:25.925 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" host="localhost" Jul 6 23:44:26.008675 containerd[1497]: 2025-07-06 23:44:25.925 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:26.008675 containerd[1497]: 2025-07-06 23:44:25.925 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" HandleID="k8s-pod-network.41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Workload="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.008801 containerd[1497]: 2025-07-06 23:44:25.934 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e0e80fe-54df-43e2-a2ba-b36484172017", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bzjfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609528db4e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.009278 containerd[1497]: 2025-07-06 23:44:25.935 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.009278 containerd[1497]: 2025-07-06 23:44:25.935 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali609528db4e4 ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.009278 containerd[1497]: 2025-07-06 23:44:25.946 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.009375 containerd[1497]: 2025-07-06 23:44:25.946 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e0e80fe-54df-43e2-a2ba-b36484172017", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7", Pod:"coredns-7c65d6cfc9-bzjfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609528db4e4", MAC:"12:37:b0:da:db:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.009375 containerd[1497]: 2025-07-06 23:44:26.003 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzjfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bzjfx-eth0" Jul 6 23:44:26.121927 systemd-networkd[1433]: calicf96d73f1ab: Gained IPv6LL Jul 6 23:44:26.236190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116874890.mount: Deactivated successfully. Jul 6 23:44:26.320003 systemd-networkd[1433]: cali3423c998aaf: Link UP Jul 6 23:44:26.320850 systemd-networkd[1433]: cali3423c998aaf: Gained carrier Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.666 [INFO][4591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.686 [INFO][4591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0 coredns-7c65d6cfc9- kube-system 288be6c7-a3f8-4852-8e60-18dd898545eb 778 0 2025-07-06 23:43:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qfcd4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3423c998aaf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.687 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.751 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" HandleID="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Workload="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.752 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" HandleID="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Workload="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255b10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qfcd4", "timestamp":"2025-07-06 23:44:25.751610095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.753 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.925 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.926 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:25.990 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.143 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.152 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.189 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.277 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.277 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.282 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.299 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.310 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.311 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" host="localhost" Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.311 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:26.358501 containerd[1497]: 2025-07-06 23:44:26.311 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" HandleID="k8s-pod-network.efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Workload="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.316 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"288be6c7-a3f8-4852-8e60-18dd898545eb", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qfcd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3423c998aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.316 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.316 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3423c998aaf ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.321 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.322 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"288be6c7-a3f8-4852-8e60-18dd898545eb", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa", Pod:"coredns-7c65d6cfc9-qfcd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3423c998aaf", MAC:"d2:fa:32:4c:03:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.359181 containerd[1497]: 2025-07-06 23:44:26.337 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qfcd4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qfcd4-eth0" Jul 6 23:44:26.441533 systemd-networkd[1433]: calie54daba75aa: Gained IPv6LL Jul 6 23:44:26.507637 containerd[1497]: time="2025-07-06T23:44:26.507381829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9sc4,Uid:4b809c8d-416a-43ae-b2d8-1c4702f886b2,Namespace:calico-system,Attempt:0,}" Jul 6 23:44:26.512426 containerd[1497]: time="2025-07-06T23:44:26.512303640Z" level=info msg="connecting to shim efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa" address="unix:///run/containerd/s/b60eadc26bbe03e39b41cbd69a5c51499f19d03044c483f6f79e03c7d3dc300d" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:26.518616 containerd[1497]: time="2025-07-06T23:44:26.518003156Z" level=info msg="connecting to shim 41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7" address="unix:///run/containerd/s/b58b179a066ff86df8ec43badccdf8188314378e670b58200fc1864646b49633" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:26.576638 systemd[1]: Started cri-containerd-efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa.scope - libcontainer container efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa. 
Jul 6 23:44:26.581791 systemd[1]: Started cri-containerd-41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7.scope - libcontainer container 41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7. Jul 6 23:44:26.613240 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:26.697984 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:26.721727 containerd[1497]: time="2025-07-06T23:44:26.721375217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzjfx,Uid:8e0e80fe-54df-43e2-a2ba-b36484172017,Namespace:kube-system,Attempt:0,} returns sandbox id \"41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7\"" Jul 6 23:44:26.728890 containerd[1497]: time="2025-07-06T23:44:26.728787356Z" level=info msg="CreateContainer within sandbox \"41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:44:26.749738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004863123.mount: Deactivated successfully. 
Jul 6 23:44:26.751529 containerd[1497]: time="2025-07-06T23:44:26.751476010Z" level=info msg="Container 7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:26.767601 containerd[1497]: time="2025-07-06T23:44:26.767214404Z" level=info msg="CreateContainer within sandbox \"41be2736d03b850036be94770d022278a8702a4976f7df5a9cae11c78753ccb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89\"" Jul 6 23:44:26.768210 containerd[1497]: time="2025-07-06T23:44:26.768158523Z" level=info msg="StartContainer for \"7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89\"" Jul 6 23:44:26.769644 containerd[1497]: time="2025-07-06T23:44:26.769600364Z" level=info msg="connecting to shim 7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89" address="unix:///run/containerd/s/b58b179a066ff86df8ec43badccdf8188314378e670b58200fc1864646b49633" protocol=ttrpc version=3 Jul 6 23:44:26.770903 containerd[1497]: time="2025-07-06T23:44:26.770841427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qfcd4,Uid:288be6c7-a3f8-4852-8e60-18dd898545eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa\"" Jul 6 23:44:26.779895 kubelet[2620]: I0706 23:44:26.779852 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:26.780192 kubelet[2620]: I0706 23:44:26.780173 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:26.796794 containerd[1497]: time="2025-07-06T23:44:26.796643662Z" level=info msg="CreateContainer within sandbox \"efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:44:26.803477 systemd-networkd[1433]: cali596c01b30d4: Link UP Jul 6 23:44:26.810198 
systemd-networkd[1433]: cali596c01b30d4: Gained carrier Jul 6 23:44:26.833783 containerd[1497]: time="2025-07-06T23:44:26.833657272Z" level=info msg="Container fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:26.836763 systemd[1]: Started cri-containerd-7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89.scope - libcontainer container 7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89. Jul 6 23:44:26.843976 containerd[1497]: time="2025-07-06T23:44:26.843930490Z" level=info msg="CreateContainer within sandbox \"efb798e11691f76832474099075de68ca28bdcd5395c1904d5c1e814d554e1aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43\"" Jul 6 23:44:26.845511 containerd[1497]: time="2025-07-06T23:44:26.845381171Z" level=info msg="StartContainer for \"fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43\"" Jul 6 23:44:26.846949 containerd[1497]: time="2025-07-06T23:44:26.846918099Z" level=info msg="connecting to shim fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43" address="unix:///run/containerd/s/b60eadc26bbe03e39b41cbd69a5c51499f19d03044c483f6f79e03c7d3dc300d" protocol=ttrpc version=3 Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.564 [INFO][4701] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.600 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m9sc4-eth0 csi-node-driver- calico-system 4b809c8d-416a-43ae-b2d8-1c4702f886b2 682 0 2025-07-06 23:44:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m9sc4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali596c01b30d4 [] [] }} ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.600 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.684 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" HandleID="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Workload="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.684 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" HandleID="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Workload="localhost-k8s-csi--node--driver--m9sc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m9sc4", "timestamp":"2025-07-06 23:44:26.684217594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 
23:44:26.684 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.685 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.685 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.716 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.729 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.740 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.745 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.752 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.755 [INFO][4768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.762 [INFO][4768] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.770 [INFO][4768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" host="localhost" Jul 6 
23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.785 [INFO][4768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.785 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" host="localhost" Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.785 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:44:26.848717 containerd[1497]: 2025-07-06 23:44:26.785 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" HandleID="k8s-pod-network.ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Workload="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.794 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m9sc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b809c8d-416a-43ae-b2d8-1c4702f886b2", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m9sc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali596c01b30d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.794 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.795 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali596c01b30d4 ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.809 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.811 [INFO][4701] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m9sc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b809c8d-416a-43ae-b2d8-1c4702f886b2", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee", Pod:"csi-node-driver-m9sc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali596c01b30d4", MAC:"82:bd:bd:41:09:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:44:26.849539 containerd[1497]: 2025-07-06 23:44:26.839 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" Namespace="calico-system" Pod="csi-node-driver-m9sc4" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9sc4-eth0" Jul 6 23:44:26.887437 containerd[1497]: time="2025-07-06T23:44:26.887336354Z" level=info msg="connecting to shim ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee" address="unix:///run/containerd/s/265c021937c74ec989394ba520b88a2b18c139f1ce470dab7105db55d5d84cf0" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:44:26.914699 containerd[1497]: time="2025-07-06T23:44:26.914620032Z" level=info msg="StartContainer for \"7764727b61be2d07eb9aa39bfcfc45f0ac2b25b34177651d3f83c464dbc41b89\" returns successfully" Jul 6 23:44:26.927619 systemd[1]: Started cri-containerd-ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee.scope - libcontainer container ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee. Jul 6 23:44:26.949656 systemd[1]: Started cri-containerd-fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43.scope - libcontainer container fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43. 
Jul 6 23:44:26.967677 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:44:27.010674 containerd[1497]: time="2025-07-06T23:44:27.010625902Z" level=info msg="StartContainer for \"fb388f6bf1122005e3e10c9a60ec19b088b8678664829ace5f4d2571cb3d4e43\" returns successfully" Jul 6 23:44:27.011765 containerd[1497]: time="2025-07-06T23:44:27.011633103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9sc4,Uid:4b809c8d-416a-43ae-b2d8-1c4702f886b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee\"" Jul 6 23:44:27.359994 containerd[1497]: time="2025-07-06T23:44:27.358392752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:27.360564 containerd[1497]: time="2025-07-06T23:44:27.360481881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:44:27.361897 containerd[1497]: time="2025-07-06T23:44:27.361832990Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:27.364531 containerd[1497]: time="2025-07-06T23:44:27.364112094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:27.365324 containerd[1497]: time="2025-07-06T23:44:27.365286989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.752764591s" Jul 6 23:44:27.365419 containerd[1497]: time="2025-07-06T23:44:27.365332073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:44:27.367654 containerd[1497]: time="2025-07-06T23:44:27.366949884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:44:27.378970 containerd[1497]: time="2025-07-06T23:44:27.378670312Z" level=info msg="CreateContainer within sandbox \"2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:44:27.394051 containerd[1497]: time="2025-07-06T23:44:27.393900944Z" level=info msg="Container db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:27.409953 containerd[1497]: time="2025-07-06T23:44:27.409889877Z" level=info msg="CreateContainer within sandbox \"2eed2edeb9ba7845050268f4980d06992ac50e828ff9a08fd396bdd70f26426f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\"" Jul 6 23:44:27.412372 containerd[1497]: time="2025-07-06T23:44:27.412331395Z" level=info msg="StartContainer for \"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\"" Jul 6 23:44:27.413583 containerd[1497]: time="2025-07-06T23:44:27.413550813Z" level=info msg="connecting to shim db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a" address="unix:///run/containerd/s/6febe6075cee98e963877ad4fc7d1ab5c7d95089f5e694fb28099ec146562b83" protocol=ttrpc version=3 Jul 6 23:44:27.450370 systemd[1]: Started cri-containerd-db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a.scope - libcontainer container 
db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a. Jul 6 23:44:27.515811 containerd[1497]: time="2025-07-06T23:44:27.515683995Z" level=info msg="StartContainer for \"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\" returns successfully" Jul 6 23:44:27.634153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182993356.mount: Deactivated successfully. Jul 6 23:44:27.657544 systemd-networkd[1433]: cali609528db4e4: Gained IPv6LL Jul 6 23:44:27.797945 kubelet[2620]: I0706 23:44:27.797627 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:27.917502 kubelet[2620]: I0706 23:44:27.917245 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qfcd4" podStartSLOduration=37.917225834 podStartE2EDuration="37.917225834s" podCreationTimestamp="2025-07-06 23:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:27.916854764 +0000 UTC m=+44.497206502" watchObservedRunningTime="2025-07-06 23:44:27.917225834 +0000 UTC m=+44.497577572" Jul 6 23:44:27.917502 kubelet[2620]: I0706 23:44:27.917338 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-k5kn7" podStartSLOduration=22.609872797 podStartE2EDuration="26.917333643s" podCreationTimestamp="2025-07-06 23:44:01 +0000 UTC" firstStartedPulling="2025-07-06 23:44:23.058848026 +0000 UTC m=+39.639199764" lastFinishedPulling="2025-07-06 23:44:27.366308872 +0000 UTC m=+43.946660610" observedRunningTime="2025-07-06 23:44:27.90245904 +0000 UTC m=+44.482810778" watchObservedRunningTime="2025-07-06 23:44:27.917333643 +0000 UTC m=+44.497685381" Jul 6 23:44:27.947670 kubelet[2620]: I0706 23:44:27.947614 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bzjfx" podStartSLOduration=37.947594971 
podStartE2EDuration="37.947594971s" podCreationTimestamp="2025-07-06 23:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:27.93460824 +0000 UTC m=+44.514959978" watchObservedRunningTime="2025-07-06 23:44:27.947594971 +0000 UTC m=+44.527946709" Jul 6 23:44:28.297524 systemd-networkd[1433]: cali3423c998aaf: Gained IPv6LL Jul 6 23:44:28.425587 systemd-networkd[1433]: cali596c01b30d4: Gained IPv6LL Jul 6 23:44:28.749691 kubelet[2620]: I0706 23:44:28.749657 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:28.799782 kubelet[2620]: I0706 23:44:28.799745 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:28.931486 containerd[1497]: time="2025-07-06T23:44:28.931425117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\" id:\"a9b71a36fa70631d4adb9da5da57c4fd567faf7ad86d6e4cedcd44754d2fab03\" pid:5022 exit_status:1 exited_at:{seconds:1751845468 nanos:928496968}" Jul 6 23:44:29.037012 containerd[1497]: time="2025-07-06T23:44:29.036643156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\" id:\"ee917a49984218e8cf3214c8a6d304393710a290e370fe60623fa278796e5022\" pid:5049 exit_status:1 exited_at:{seconds:1751845469 nanos:35346538}" Jul 6 23:44:29.168482 containerd[1497]: time="2025-07-06T23:44:29.168283749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\" id:\"706f47b526991dc37b62818f32f4c40d0de16765a6c2e556338bfe958b880c07\" pid:5077 exited_at:{seconds:1751845469 nanos:159315468}" Jul 6 23:44:29.605448 containerd[1497]: time="2025-07-06T23:44:29.604071030Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:29.605448 containerd[1497]: time="2025-07-06T23:44:29.604872971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:44:29.605699 containerd[1497]: time="2025-07-06T23:44:29.605668071Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:29.608552 containerd[1497]: time="2025-07-06T23:44:29.608501687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:29.609173 containerd[1497]: time="2025-07-06T23:44:29.609127054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.242138167s" Jul 6 23:44:29.609173 containerd[1497]: time="2025-07-06T23:44:29.609161537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:44:29.613233 containerd[1497]: time="2025-07-06T23:44:29.613185402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:44:29.622899 containerd[1497]: time="2025-07-06T23:44:29.622847256Z" level=info msg="CreateContainer within sandbox \"a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:44:29.631674 containerd[1497]: time="2025-07-06T23:44:29.631619321Z" level=info msg="Container 94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:29.642111 containerd[1497]: time="2025-07-06T23:44:29.642049913Z" level=info msg="CreateContainer within sandbox \"a9eb70721343cfd0ea0cc3aebc5951ca88eb9ab3b56cf500824080e054de1c83\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb\"" Jul 6 23:44:29.647347 containerd[1497]: time="2025-07-06T23:44:29.647266269Z" level=info msg="StartContainer for \"94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb\"" Jul 6 23:44:29.650306 containerd[1497]: time="2025-07-06T23:44:29.650252816Z" level=info msg="connecting to shim 94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb" address="unix:///run/containerd/s/679f6079b614daa77b6271df64580fdfcdb533c6a6badf4dc4df6984d3572a46" protocol=ttrpc version=3 Jul 6 23:44:29.683655 systemd[1]: Started cri-containerd-94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb.scope - libcontainer container 94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb. Jul 6 23:44:29.778609 containerd[1497]: time="2025-07-06T23:44:29.778568997Z" level=info msg="StartContainer for \"94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb\" returns successfully" Jul 6 23:44:29.811103 systemd[1]: Started sshd@8-10.0.0.127:22-10.0.0.1:60448.service - OpenSSH per-connection server daemon (10.0.0.1:60448). 
Jul 6 23:44:29.823199 kubelet[2620]: I0706 23:44:29.823140 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bf774cfd7-dkg74" podStartSLOduration=23.221480524 podStartE2EDuration="27.823124219s" podCreationTimestamp="2025-07-06 23:44:02 +0000 UTC" firstStartedPulling="2025-07-06 23:44:25.011356373 +0000 UTC m=+41.591708071" lastFinishedPulling="2025-07-06 23:44:29.613000028 +0000 UTC m=+46.193351766" observedRunningTime="2025-07-06 23:44:29.821524257 +0000 UTC m=+46.401875995" watchObservedRunningTime="2025-07-06 23:44:29.823124219 +0000 UTC m=+46.403476037" Jul 6 23:44:29.903732 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 60448 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:29.908188 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:29.914530 systemd-logind[1475]: New session 9 of user core. Jul 6 23:44:29.922634 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:44:30.112506 sshd[5158]: Connection closed by 10.0.0.1 port 60448 Jul 6 23:44:30.113188 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:30.116717 systemd[1]: sshd@8-10.0.0.127:22-10.0.0.1:60448.service: Deactivated successfully. Jul 6 23:44:30.122138 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:44:30.126242 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:44:30.127452 systemd-logind[1475]: Removed session 9. 
Jul 6 23:44:30.207251 kubelet[2620]: I0706 23:44:30.207208 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:30.662001 containerd[1497]: time="2025-07-06T23:44:30.661881563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:30.662593 containerd[1497]: time="2025-07-06T23:44:30.662545372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:44:30.663297 containerd[1497]: time="2025-07-06T23:44:30.663243104Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:30.665487 containerd[1497]: time="2025-07-06T23:44:30.665425904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:30.666109 containerd[1497]: time="2025-07-06T23:44:30.665938142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.052713177s" Jul 6 23:44:30.666109 containerd[1497]: time="2025-07-06T23:44:30.665979745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:44:30.668261 containerd[1497]: time="2025-07-06T23:44:30.668150745Z" level=info msg="CreateContainer within sandbox \"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:44:30.677515 containerd[1497]: time="2025-07-06T23:44:30.675960879Z" level=info msg="Container 6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:30.685177 containerd[1497]: time="2025-07-06T23:44:30.685114552Z" level=info msg="CreateContainer within sandbox \"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de\"" Jul 6 23:44:30.686421 containerd[1497]: time="2025-07-06T23:44:30.685895849Z" level=info msg="StartContainer for \"6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de\"" Jul 6 23:44:30.687643 containerd[1497]: time="2025-07-06T23:44:30.687613376Z" level=info msg="connecting to shim 6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de" address="unix:///run/containerd/s/265c021937c74ec989394ba520b88a2b18c139f1ce470dab7105db55d5d84cf0" protocol=ttrpc version=3 Jul 6 23:44:30.715647 systemd[1]: Started cri-containerd-6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de.scope - libcontainer container 6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de. 
Jul 6 23:44:30.765343 containerd[1497]: time="2025-07-06T23:44:30.765302209Z" level=info msg="StartContainer for \"6b1a23c38ab678eb00528b70841d55e199bc86953ec21b6b4d5a46cc317c86de\" returns successfully" Jul 6 23:44:30.767641 containerd[1497]: time="2025-07-06T23:44:30.767606138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:44:30.884424 containerd[1497]: time="2025-07-06T23:44:30.884190632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb\" id:\"be0f25e583ef14a85b802f5cd3877e52e66cf6c7261de88cd1f6a2b18b56e253\" pid:5252 exited_at:{seconds:1751845470 nanos:883169877}" Jul 6 23:44:31.254317 systemd-networkd[1433]: vxlan.calico: Link UP Jul 6 23:44:31.254324 systemd-networkd[1433]: vxlan.calico: Gained carrier Jul 6 23:44:32.400696 containerd[1497]: time="2025-07-06T23:44:32.400634206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:32.401302 containerd[1497]: time="2025-07-06T23:44:32.401262490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:44:32.402581 containerd[1497]: time="2025-07-06T23:44:32.402549178Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:32.405090 containerd[1497]: time="2025-07-06T23:44:32.405026349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:44:32.405656 containerd[1497]: time="2025-07-06T23:44:32.405519983Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.637050142s" Jul 6 23:44:32.405656 containerd[1497]: time="2025-07-06T23:44:32.405560186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:44:32.421363 containerd[1497]: time="2025-07-06T23:44:32.421295592Z" level=info msg="CreateContainer within sandbox \"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:44:32.444756 containerd[1497]: time="2025-07-06T23:44:32.443597131Z" level=info msg="Container dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:44:32.462663 containerd[1497]: time="2025-07-06T23:44:32.462607803Z" level=info msg="CreateContainer within sandbox \"ccd153aeb018952b241e094e1ec3698c946d3bbd24e416b0c472d02c8099a7ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169\"" Jul 6 23:44:32.463326 containerd[1497]: time="2025-07-06T23:44:32.463145080Z" level=info msg="StartContainer for \"dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169\"" Jul 6 23:44:32.466045 containerd[1497]: time="2025-07-06T23:44:32.466005798Z" level=info msg="connecting to shim dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169" address="unix:///run/containerd/s/265c021937c74ec989394ba520b88a2b18c139f1ce470dab7105db55d5d84cf0" protocol=ttrpc version=3 Jul 6 23:44:32.489633 
systemd[1]: Started cri-containerd-dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169.scope - libcontainer container dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169. Jul 6 23:44:32.535634 containerd[1497]: time="2025-07-06T23:44:32.535585480Z" level=info msg="StartContainer for \"dd0effb66d4d17560b6077354cf19d6867c7133a26f3fc5903e4e004e15d5169\" returns successfully" Jul 6 23:44:32.604451 kubelet[2620]: I0706 23:44:32.604373 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:44:32.604451 kubelet[2620]: I0706 23:44:32.604450 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:44:32.846454 kubelet[2620]: I0706 23:44:32.846269 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m9sc4" podStartSLOduration=25.456302167 podStartE2EDuration="30.846251001s" podCreationTimestamp="2025-07-06 23:44:02 +0000 UTC" firstStartedPulling="2025-07-06 23:44:27.016215354 +0000 UTC m=+43.596567092" lastFinishedPulling="2025-07-06 23:44:32.406164188 +0000 UTC m=+48.986515926" observedRunningTime="2025-07-06 23:44:32.833734577 +0000 UTC m=+49.414086315" watchObservedRunningTime="2025-07-06 23:44:32.846251001 +0000 UTC m=+49.426602739" Jul 6 23:44:33.133009 kubelet[2620]: I0706 23:44:33.132897 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:33.161552 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jul 6 23:44:33.321650 containerd[1497]: time="2025-07-06T23:44:33.321602157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\" id:\"8f6f3e1ed309c26449ece0d4d5277619f955845a346f5c5af3c2176505bdb4c0\" pid:5435 exited_at:{seconds:1751845473 
nanos:321261134}" Jul 6 23:44:33.399569 containerd[1497]: time="2025-07-06T23:44:33.399273270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\" id:\"3b374fa9d6ee5a528417d665546de781660aa8dfa0a798f70b615dfde4852f6d\" pid:5458 exited_at:{seconds:1751845473 nanos:398549141}" Jul 6 23:44:35.130226 systemd[1]: Started sshd@9-10.0.0.127:22-10.0.0.1:45744.service - OpenSSH per-connection server daemon (10.0.0.1:45744). Jul 6 23:44:35.197840 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 45744 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:35.199321 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:35.203170 systemd-logind[1475]: New session 10 of user core. Jul 6 23:44:35.208548 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:44:35.456163 sshd[5479]: Connection closed by 10.0.0.1 port 45744 Jul 6 23:44:35.456625 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:35.470351 systemd[1]: sshd@9-10.0.0.127:22-10.0.0.1:45744.service: Deactivated successfully. Jul 6 23:44:35.472130 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:44:35.473020 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:44:35.476358 systemd[1]: Started sshd@10-10.0.0.127:22-10.0.0.1:45758.service - OpenSSH per-connection server daemon (10.0.0.1:45758). Jul 6 23:44:35.478291 systemd-logind[1475]: Removed session 10. Jul 6 23:44:35.540926 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 45758 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:35.543269 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:35.552247 systemd-logind[1475]: New session 11 of user core. 
Jul 6 23:44:35.562654 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:44:35.798445 sshd[5495]: Connection closed by 10.0.0.1 port 45758 Jul 6 23:44:35.798541 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:35.812279 systemd[1]: sshd@10-10.0.0.127:22-10.0.0.1:45758.service: Deactivated successfully. Jul 6 23:44:35.816487 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:44:35.817852 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:44:35.823303 systemd[1]: Started sshd@11-10.0.0.127:22-10.0.0.1:45766.service - OpenSSH per-connection server daemon (10.0.0.1:45766). Jul 6 23:44:35.828694 systemd-logind[1475]: Removed session 11. Jul 6 23:44:35.882493 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 45766 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:35.883802 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:35.888120 systemd-logind[1475]: New session 12 of user core. Jul 6 23:44:35.905639 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:44:36.083359 sshd[5508]: Connection closed by 10.0.0.1 port 45766 Jul 6 23:44:36.083642 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:36.087853 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:44:36.087974 systemd[1]: sshd@11-10.0.0.127:22-10.0.0.1:45766.service: Deactivated successfully. Jul 6 23:44:36.090133 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:44:36.091944 systemd-logind[1475]: Removed session 12. Jul 6 23:44:41.099818 systemd[1]: Started sshd@12-10.0.0.127:22-10.0.0.1:45912.service - OpenSSH per-connection server daemon (10.0.0.1:45912). 
Jul 6 23:44:41.163794 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 45912 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:41.165345 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:41.175870 systemd-logind[1475]: New session 13 of user core. Jul 6 23:44:41.186663 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:44:41.359758 sshd[5535]: Connection closed by 10.0.0.1 port 45912 Jul 6 23:44:41.360926 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:41.364375 systemd[1]: sshd@12-10.0.0.127:22-10.0.0.1:45912.service: Deactivated successfully. Jul 6 23:44:41.366454 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:44:41.367421 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:44:41.369182 systemd-logind[1475]: Removed session 13. Jul 6 23:44:45.819043 kubelet[2620]: I0706 23:44:45.818931 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:44:46.374545 systemd[1]: Started sshd@13-10.0.0.127:22-10.0.0.1:48938.service - OpenSSH per-connection server daemon (10.0.0.1:48938). Jul 6 23:44:46.449006 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 48938 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:46.450748 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:46.458980 systemd-logind[1475]: New session 14 of user core. Jul 6 23:44:46.473601 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:44:46.662239 sshd[5563]: Connection closed by 10.0.0.1 port 48938 Jul 6 23:44:46.662536 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:46.666142 systemd[1]: sshd@13-10.0.0.127:22-10.0.0.1:48938.service: Deactivated successfully. 
Jul 6 23:44:46.668534 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:44:46.671089 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:44:46.672949 systemd-logind[1475]: Removed session 14. Jul 6 23:44:51.675545 systemd[1]: Started sshd@14-10.0.0.127:22-10.0.0.1:48946.service - OpenSSH per-connection server daemon (10.0.0.1:48946). Jul 6 23:44:51.742465 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 48946 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:51.743642 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:51.748469 systemd-logind[1475]: New session 15 of user core. Jul 6 23:44:51.761605 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:44:51.946703 sshd[5590]: Connection closed by 10.0.0.1 port 48946 Jul 6 23:44:51.946971 sshd-session[5588]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:51.950917 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:44:51.951194 systemd[1]: sshd@14-10.0.0.127:22-10.0.0.1:48946.service: Deactivated successfully. Jul 6 23:44:51.953699 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:44:51.956282 systemd-logind[1475]: Removed session 15. Jul 6 23:44:56.967457 systemd[1]: Started sshd@15-10.0.0.127:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Jul 6 23:44:57.047860 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:57.049635 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:57.057716 systemd-logind[1475]: New session 16 of user core. Jul 6 23:44:57.073523 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 6 23:44:57.279272 sshd[5608]: Connection closed by 10.0.0.1 port 53982 Jul 6 23:44:57.280063 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:57.292918 systemd[1]: sshd@15-10.0.0.127:22-10.0.0.1:53982.service: Deactivated successfully. Jul 6 23:44:57.295541 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:44:57.296699 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:44:57.300933 systemd[1]: Started sshd@16-10.0.0.127:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990). Jul 6 23:44:57.302611 systemd-logind[1475]: Removed session 16. Jul 6 23:44:57.373996 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:57.375733 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:57.381510 systemd-logind[1475]: New session 17 of user core. Jul 6 23:44:57.387629 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:44:57.622791 sshd[5624]: Connection closed by 10.0.0.1 port 53990 Jul 6 23:44:57.623649 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:57.639443 systemd[1]: sshd@16-10.0.0.127:22-10.0.0.1:53990.service: Deactivated successfully. Jul 6 23:44:57.642368 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:44:57.643333 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:44:57.647369 systemd[1]: Started sshd@17-10.0.0.127:22-10.0.0.1:54000.service - OpenSSH per-connection server daemon (10.0.0.1:54000). Jul 6 23:44:57.648303 systemd-logind[1475]: Removed session 17. 
Jul 6 23:44:57.710731 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 54000 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:57.712341 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:57.717430 systemd-logind[1475]: New session 18 of user core. Jul 6 23:44:57.724614 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:44:58.569326 containerd[1497]: time="2025-07-06T23:44:58.569277718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94e27e2dd5a8745e37334a20826557efc1f89de572a3dcd1c9580642a560f2fb\" id:\"51ef070905cfba92f34f2e6ba45fb0a17c76768d4bef9220139ea1ae3b8bc7f8\" pid:5662 exited_at:{seconds:1751845498 nanos:568712701}" Jul 6 23:44:58.848765 containerd[1497]: time="2025-07-06T23:44:58.848636764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1ebb5d107a896b6e60d8e643a1eb50da8020b7a85a73fdfeabe89332adc2773\" id:\"e5ec3899449f0558c9fcb482bfcae6ab4c3721f6816697544c4b44ee4aa844c4\" pid:5685 exited_at:{seconds:1751845498 nanos:848204951}" Jul 6 23:44:59.731094 sshd[5637]: Connection closed by 10.0.0.1 port 54000 Jul 6 23:44:59.732818 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:59.751828 systemd[1]: sshd@17-10.0.0.127:22-10.0.0.1:54000.service: Deactivated successfully. Jul 6 23:44:59.758325 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:44:59.758863 systemd[1]: session-18.scope: Consumed 607ms CPU time, 71.3M memory peak. Jul 6 23:44:59.760015 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:44:59.761869 systemd-logind[1475]: Removed session 18. Jul 6 23:44:59.763757 systemd[1]: Started sshd@18-10.0.0.127:22-10.0.0.1:54002.service - OpenSSH per-connection server daemon (10.0.0.1:54002). 
Jul 6 23:44:59.836586 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 54002 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:44:59.838644 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:44:59.847438 systemd-logind[1475]: New session 19 of user core. Jul 6 23:44:59.861611 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:45:00.245764 sshd[5707]: Connection closed by 10.0.0.1 port 54002 Jul 6 23:45:00.246539 sshd-session[5705]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:00.257169 systemd[1]: sshd@18-10.0.0.127:22-10.0.0.1:54002.service: Deactivated successfully. Jul 6 23:45:00.259153 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:45:00.260548 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:45:00.265971 systemd[1]: Started sshd@19-10.0.0.127:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). Jul 6 23:45:00.266859 systemd-logind[1475]: Removed session 19. Jul 6 23:45:00.327168 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:45:00.329012 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:00.334483 systemd-logind[1475]: New session 20 of user core. Jul 6 23:45:00.341614 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:45:00.594747 sshd[5721]: Connection closed by 10.0.0.1 port 54008 Jul 6 23:45:00.595538 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:00.603755 systemd[1]: sshd@19-10.0.0.127:22-10.0.0.1:54008.service: Deactivated successfully. Jul 6 23:45:00.605894 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:45:00.606770 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. 
Jul 6 23:45:00.608231 systemd-logind[1475]: Removed session 20. Jul 6 23:45:03.266582 containerd[1497]: time="2025-07-06T23:45:03.266426188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db10b66867c2985d26fb93227189f5479d463973bf1ba2c5c71012073a9a2e5a\" id:\"b345f4a65e405630ad84c2503631d42ccf4e1c06c88be0f56e070e99081099cc\" pid:5747 exited_at:{seconds:1751845503 nanos:266069299}" Jul 6 23:45:05.616027 systemd[1]: Started sshd@20-10.0.0.127:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650). Jul 6 23:45:05.708896 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:45:05.710751 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:05.715987 systemd-logind[1475]: New session 21 of user core. Jul 6 23:45:05.720616 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:45:05.901566 sshd[5765]: Connection closed by 10.0.0.1 port 50650 Jul 6 23:45:05.901896 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:05.906231 systemd[1]: sshd@20-10.0.0.127:22-10.0.0.1:50650.service: Deactivated successfully. Jul 6 23:45:05.908185 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:45:05.909240 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:45:05.912568 systemd-logind[1475]: Removed session 21. Jul 6 23:45:10.920875 systemd[1]: Started sshd@21-10.0.0.127:22-10.0.0.1:50798.service - OpenSSH per-connection server daemon (10.0.0.1:50798). Jul 6 23:45:10.985422 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 50798 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:45:10.988240 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:10.993654 systemd-logind[1475]: New session 22 of user core. 
Jul 6 23:45:11.007662 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:45:11.153941 sshd[5780]: Connection closed by 10.0.0.1 port 50798 Jul 6 23:45:11.154506 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:11.159542 systemd[1]: sshd@21-10.0.0.127:22-10.0.0.1:50798.service: Deactivated successfully. Jul 6 23:45:11.161772 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:45:11.162758 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:45:11.164009 systemd-logind[1475]: Removed session 22. Jul 6 23:45:15.509256 kubelet[2620]: E0706 23:45:15.509204 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:16.170012 systemd[1]: Started sshd@22-10.0.0.127:22-10.0.0.1:51452.service - OpenSSH per-connection server daemon (10.0.0.1:51452). Jul 6 23:45:16.230634 sshd[5804]: Accepted publickey for core from 10.0.0.1 port 51452 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:45:16.235181 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:16.241774 systemd-logind[1475]: New session 23 of user core. Jul 6 23:45:16.248598 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:45:16.397094 sshd[5806]: Connection closed by 10.0.0.1 port 51452 Jul 6 23:45:16.398021 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:16.402471 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:45:16.402574 systemd[1]: sshd@22-10.0.0.127:22-10.0.0.1:51452.service: Deactivated successfully. Jul 6 23:45:16.404644 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:45:16.407788 systemd-logind[1475]: Removed session 23. 
Jul 6 23:45:16.506093 kubelet[2620]: E0706 23:45:16.505990 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"