Jul 12 00:14:02.783104 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:14:02.783124 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jul 11 22:25:21 -00 2025
Jul 12 00:14:02.783134 kernel: KASLR enabled
Jul 12 00:14:02.783139 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:14:02.783145 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 12 00:14:02.783150 kernel: random: crng init done
Jul 12 00:14:02.783157 kernel: secureboot: Secure boot disabled
Jul 12 00:14:02.783163 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:14:02.783169 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 12 00:14:02.783176 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:14:02.783187 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783193 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783199 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783205 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783212 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783219 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783226 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783232 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783238 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:02.783244 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:14:02.783250 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 12 00:14:02.783256 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:14:02.783264 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 12 00:14:02.783270 kernel: Zone ranges:
Jul 12 00:14:02.783276 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:14:02.783283 kernel: DMA32 empty
Jul 12 00:14:02.783289 kernel: Normal empty
Jul 12 00:14:02.783295 kernel: Device empty
Jul 12 00:14:02.783301 kernel: Movable zone start for each node
Jul 12 00:14:02.783307 kernel: Early memory node ranges
Jul 12 00:14:02.783313 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 12 00:14:02.783319 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 12 00:14:02.783325 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 12 00:14:02.783331 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 12 00:14:02.783337 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 12 00:14:02.783342 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 12 00:14:02.783349 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 12 00:14:02.783356 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 12 00:14:02.783362 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 12 00:14:02.783368 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:14:02.783377 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:14:02.783383 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:14:02.783389 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:14:02.783397 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:14:02.783404 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:14:02.783410 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 12 00:14:02.783417 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:14:02.783423 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:14:02.783429 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:14:02.783435 kernel: psci: Trusted OS migration not required
Jul 12 00:14:02.783442 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:14:02.783449 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:14:02.783455 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 12 00:14:02.783463 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 12 00:14:02.783469 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:14:02.783476 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:14:02.783482 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:14:02.783489 kernel: CPU features: detected: Spectre-v4
Jul 12 00:14:02.783495 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:14:02.783501 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:14:02.783508 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:14:02.783527 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:14:02.783534 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:14:02.783540 kernel: alternatives: applying boot alternatives
Jul 12 00:14:02.783548 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1c509b75c415e22efa5694a6e5c4fcdf8d966f80945a2324c5fe054f1c97cb94
Jul 12 00:14:02.783557 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:14:02.783563 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:14:02.783570 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:14:02.783576 kernel: Fallback order for Node 0: 0
Jul 12 00:14:02.783583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 12 00:14:02.783589 kernel: Policy zone: DMA
Jul 12 00:14:02.783596 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:14:02.783602 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 12 00:14:02.783609 kernel: software IO TLB: area num 4.
Jul 12 00:14:02.783622 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 12 00:14:02.783630 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 12 00:14:02.783638 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:14:02.783645 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:14:02.783652 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:14:02.783658 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:14:02.783665 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:14:02.783672 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:14:02.783678 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:14:02.783685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:14:02.783691 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:14:02.783698 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:14:02.783704 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:14:02.783712 kernel: GICv3: 256 SPIs implemented
Jul 12 00:14:02.783718 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:14:02.783724 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:14:02.783731 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:14:02.783737 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 12 00:14:02.783743 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:14:02.783750 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:14:02.783756 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:14:02.783763 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:14:02.783769 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 12 00:14:02.783775 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 12 00:14:02.783782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:14:02.783790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:14:02.783796 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:14:02.783803 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:14:02.783809 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:14:02.783816 kernel: arm-pv: using stolen time PV
Jul 12 00:14:02.783822 kernel: Console: colour dummy device 80x25
Jul 12 00:14:02.783829 kernel: ACPI: Core revision 20240827
Jul 12 00:14:02.783836 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:14:02.783842 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:14:02.783849 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 00:14:02.783857 kernel: landlock: Up and running.
Jul 12 00:14:02.783863 kernel: SELinux: Initializing.
Jul 12 00:14:02.783869 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:14:02.783876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:14:02.783882 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:14:02.783889 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:14:02.783896 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 00:14:02.783902 kernel: Remapping and enabling EFI services.
Jul 12 00:14:02.783909 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:14:02.783921 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:14:02.783928 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:14:02.783935 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 12 00:14:02.783944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:14:02.783951 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:14:02.783957 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:14:02.783965 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:14:02.783972 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 12 00:14:02.783980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:14:02.783987 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:14:02.783994 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:14:02.784000 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:14:02.784007 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 12 00:14:02.784014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:14:02.784021 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:14:02.784028 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:14:02.784035 kernel: SMP: Total of 4 processors activated.
Jul 12 00:14:02.784043 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:14:02.784051 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:14:02.784058 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:14:02.784065 kernel: CPU features: detected: Common not Private translations
Jul 12 00:14:02.784071 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:14:02.784078 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:14:02.784085 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:14:02.784100 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:14:02.784107 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:14:02.784114 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:14:02.784122 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:14:02.784129 kernel: alternatives: applying system-wide alternatives
Jul 12 00:14:02.784136 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 12 00:14:02.784143 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Jul 12 00:14:02.784150 kernel: devtmpfs: initialized
Jul 12 00:14:02.784157 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:14:02.784164 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:14:02.784171 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:14:02.784179 kernel: 0 pages in range for non-PLT usage
Jul 12 00:14:02.784186 kernel: 508432 pages in range for PLT usage
Jul 12 00:14:02.784193 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:14:02.784200 kernel: SMBIOS 3.0.0 present.
Jul 12 00:14:02.784207 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 12 00:14:02.784214 kernel: DMI: Memory slots populated: 1/1
Jul 12 00:14:02.784220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:14:02.784227 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:14:02.784235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:14:02.784312 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:14:02.784321 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:14:02.784328 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 12 00:14:02.784335 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:14:02.784342 kernel: cpuidle: using governor menu
Jul 12 00:14:02.784349 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:14:02.784356 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:14:02.784363 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:14:02.784370 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:14:02.784380 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:14:02.784387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:14:02.784394 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:14:02.784401 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:14:02.784408 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:14:02.784415 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:14:02.784422 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:14:02.784428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:14:02.784435 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:14:02.784442 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:14:02.784451 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:14:02.784457 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:14:02.784464 kernel: ACPI: Interpreter enabled
Jul 12 00:14:02.784472 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:14:02.784479 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:14:02.784487 kernel: ACPI: CPU0 has been hot-added
Jul 12 00:14:02.784494 kernel: ACPI: CPU1 has been hot-added
Jul 12 00:14:02.784501 kernel: ACPI: CPU2 has been hot-added
Jul 12 00:14:02.784508 kernel: ACPI: CPU3 has been hot-added
Jul 12 00:14:02.784525 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:14:02.784547 kernel: printk: legacy console [ttyAMA0] enabled
Jul 12 00:14:02.784554 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:14:02.784706 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:14:02.784775 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:14:02.784834 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:14:02.784895 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:14:02.784954 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:14:02.784963 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:14:02.784970 kernel: PCI host bridge to bus 0000:00
Jul 12 00:14:02.785035 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:14:02.785089 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:14:02.785142 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:14:02.785730 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:14:02.785835 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 12 00:14:02.785911 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 00:14:02.785972 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 12 00:14:02.786031 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 12 00:14:02.786089 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:14:02.786147 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 12 00:14:02.786206 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 12 00:14:02.786268 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 12 00:14:02.786323 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:14:02.786376 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:14:02.786429 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:14:02.786438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:14:02.786445 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:14:02.786452 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:14:02.786461 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:14:02.786468 kernel: iommu: Default domain type: Translated
Jul 12 00:14:02.786475 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:14:02.786481 kernel: efivars: Registered efivars operations
Jul 12 00:14:02.786488 kernel: vgaarb: loaded
Jul 12 00:14:02.786495 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:14:02.786502 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:14:02.786532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:14:02.786541 kernel: pnp: PnP ACPI init
Jul 12 00:14:02.786628 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:14:02.786639 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:14:02.786646 kernel: NET: Registered PF_INET protocol family
Jul 12 00:14:02.786653 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:14:02.786660 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:14:02.786668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:14:02.786675 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:14:02.786683 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:14:02.786692 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:14:02.786700 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:14:02.786707 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:14:02.786713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:14:02.786720 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:14:02.786727 kernel: kvm [1]: HYP mode not available
Jul 12 00:14:02.786734 kernel: Initialise system trusted keyrings
Jul 12 00:14:02.786741 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:14:02.786748 kernel: Key type asymmetric registered
Jul 12 00:14:02.786754 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:14:02.786763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:14:02.786770 kernel: io scheduler mq-deadline registered
Jul 12 00:14:02.786777 kernel: io scheduler kyber registered
Jul 12 00:14:02.786784 kernel: io scheduler bfq registered
Jul 12 00:14:02.786791 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:14:02.786797 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:14:02.786805 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:14:02.786869 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:14:02.786878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:14:02.786887 kernel: thunder_xcv, ver 1.0
Jul 12 00:14:02.786894 kernel: thunder_bgx, ver 1.0
Jul 12 00:14:02.786901 kernel: nicpf, ver 1.0
Jul 12 00:14:02.786907 kernel: nicvf, ver 1.0
Jul 12 00:14:02.786993 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:14:02.787053 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:14:02 UTC (1752279242)
Jul 12 00:14:02.787062 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:14:02.787070 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 12 00:14:02.787079 kernel: watchdog: NMI not fully supported
Jul 12 00:14:02.787086 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:14:02.787093 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:14:02.787100 kernel: Segment Routing with IPv6
Jul 12 00:14:02.787107 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:14:02.787114 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:14:02.787121 kernel: Key type dns_resolver registered
Jul 12 00:14:02.787128 kernel: registered taskstats version 1
Jul 12 00:14:02.787134 kernel: Loading compiled-in X.509 certificates
Jul 12 00:14:02.787143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: c7f0ead7774c9f3ce89138919383855e2b9ff735'
Jul 12 00:14:02.787150 kernel: Demotion targets for Node 0: null
Jul 12 00:14:02.787157 kernel: Key type .fscrypt registered
Jul 12 00:14:02.787164 kernel: Key type fscrypt-provisioning registered
Jul 12 00:14:02.787171 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:14:02.787177 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:14:02.787184 kernel: ima: No architecture policies found
Jul 12 00:14:02.787191 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:14:02.787198 kernel: clk: Disabling unused clocks
Jul 12 00:14:02.787207 kernel: PM: genpd: Disabling unused power domains
Jul 12 00:14:02.787214 kernel: Warning: unable to open an initial console.
Jul 12 00:14:02.787221 kernel: Freeing unused kernel memory: 39488K
Jul 12 00:14:02.787228 kernel: Run /init as init process
Jul 12 00:14:02.787235 kernel: with arguments:
Jul 12 00:14:02.787242 kernel: /init
Jul 12 00:14:02.787249 kernel: with environment:
Jul 12 00:14:02.787256 kernel: HOME=/
Jul 12 00:14:02.787263 kernel: TERM=linux
Jul 12 00:14:02.787271 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:14:02.787278 systemd[1]: Successfully made /usr/ read-only.
Jul 12 00:14:02.787288 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:14:02.787297 systemd[1]: Detected virtualization kvm.
Jul 12 00:14:02.787304 systemd[1]: Detected architecture arm64.
Jul 12 00:14:02.787312 systemd[1]: Running in initrd.
Jul 12 00:14:02.787319 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:14:02.787328 systemd[1]: Hostname set to .
Jul 12 00:14:02.787336 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:14:02.787344 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:14:02.787351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:14:02.787359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:14:02.787370 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:14:02.787384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:14:02.787397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:14:02.787414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:14:02.787428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:14:02.787442 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:14:02.787452 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:14:02.787460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:14:02.787467 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:14:02.787475 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:14:02.787489 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:14:02.787497 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:14:02.787505 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:14:02.787522 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:14:02.787530 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:14:02.787538 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 00:14:02.787546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:14:02.787554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:14:02.787563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:14:02.787571 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:14:02.787581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:14:02.787589 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:14:02.787597 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:14:02.787605 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 12 00:14:02.787618 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:14:02.787626 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:14:02.787633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:14:02.787643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:14:02.787651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:14:02.787659 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:14:02.787666 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:14:02.787675 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:14:02.787703 systemd-journald[245]: Collecting audit messages is disabled.
Jul 12 00:14:02.787723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:14:02.787731 systemd-journald[245]: Journal started
Jul 12 00:14:02.787751 systemd-journald[245]: Runtime Journal (/run/log/journal/9df9d7736a454c7697fe055f5d9f917b) is 6M, max 48.5M, 42.4M free.
Jul 12 00:14:02.777364 systemd-modules-load[246]: Inserted module 'overlay'
Jul 12 00:14:02.793029 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:14:02.793401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:14:02.796533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:14:02.797967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:14:02.799404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:14:02.802545 kernel: Bridge firewalling registered
Jul 12 00:14:02.800198 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 12 00:14:02.810257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:14:02.811370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:14:02.813624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:14:02.818543 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 12 00:14:02.822112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:14:02.823366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:14:02.828109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:14:02.831168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:14:02.832142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:14:02.834304 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:14:02.858529 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1c509b75c415e22efa5694a6e5c4fcdf8d966f80945a2324c5fe054f1c97cb94
Jul 12 00:14:02.872168 systemd-resolved[288]: Positive Trust Anchors:
Jul 12 00:14:02.872188 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:14:02.872220 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:14:02.876998 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 12 00:14:02.877983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:14:02.880024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:14:02.935560 kernel: SCSI subsystem initialized
Jul 12 00:14:02.939548 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:14:02.947558 kernel: iscsi: registered transport (tcp)
Jul 12 00:14:02.959552 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:14:02.959585 kernel: QLogic iSCSI HBA Driver
Jul 12 00:14:02.977732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:14:02.994375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:14:02.995738 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:14:03.042160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:14:03.044305 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:14:03.104547 kernel: raid6: neonx8 gen() 15745 MB/s Jul 12 00:14:03.121524 kernel: raid6: neonx4 gen() 15796 MB/s Jul 12 00:14:03.138525 kernel: raid6: neonx2 gen() 13155 MB/s Jul 12 00:14:03.155526 kernel: raid6: neonx1 gen() 10422 MB/s Jul 12 00:14:03.172533 kernel: raid6: int64x8 gen() 6878 MB/s Jul 12 00:14:03.189532 kernel: raid6: int64x4 gen() 7302 MB/s Jul 12 00:14:03.206539 kernel: raid6: int64x2 gen() 6092 MB/s Jul 12 00:14:03.223529 kernel: raid6: int64x1 gen() 5027 MB/s Jul 12 00:14:03.223550 kernel: raid6: using algorithm neonx4 gen() 15796 MB/s Jul 12 00:14:03.240530 kernel: raid6: .... xor() 12313 MB/s, rmw enabled Jul 12 00:14:03.240546 kernel: raid6: using neon recovery algorithm Jul 12 00:14:03.247818 kernel: xor: measuring software checksum speed Jul 12 00:14:03.247840 kernel: 8regs : 21613 MB/sec Jul 12 00:14:03.247849 kernel: 32regs : 21710 MB/sec Jul 12 00:14:03.248738 kernel: arm64_neon : 28099 MB/sec Jul 12 00:14:03.248751 kernel: xor: using function: arm64_neon (28099 MB/sec) Jul 12 00:14:03.306349 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:14:03.315441 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:14:03.318666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:14:03.353199 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jul 12 00:14:03.357449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:14:03.359234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:14:03.388330 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jul 12 00:14:03.411873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:14:03.413871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:14:03.468907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 12 00:14:03.472695 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:14:03.517284 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 12 00:14:03.517530 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:14:03.527536 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:14:03.527586 kernel: GPT:9289727 != 19775487 Jul 12 00:14:03.527603 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:14:03.527622 kernel: GPT:9289727 != 19775487 Jul 12 00:14:03.527633 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:14:03.528816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:14:03.537650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:14:03.537777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:14:03.540461 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:14:03.542473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:14:03.571454 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 12 00:14:03.572683 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:14:03.574236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:14:03.582567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 12 00:14:03.590353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:14:03.596649 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 00:14:03.597504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 12 00:14:03.599263 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:14:03.601329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:14:03.602946 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:14:03.605074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:14:03.606580 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:14:03.628381 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:14:03.699849 disk-uuid[593]: Primary Header is updated. Jul 12 00:14:03.699849 disk-uuid[593]: Secondary Entries is updated. Jul 12 00:14:03.699849 disk-uuid[593]: Secondary Header is updated. Jul 12 00:14:03.702529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:14:04.725566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:14:04.726196 disk-uuid[601]: The operation has completed successfully. Jul 12 00:14:04.751478 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:14:04.752506 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:14:04.778013 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:14:04.799483 sh[613]: Success Jul 12 00:14:04.813813 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:14:04.813863 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:14:04.813874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 00:14:04.822612 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 12 00:14:04.848118 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:14:04.850641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 12 00:14:04.871235 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:14:04.878243 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 00:14:04.879529 kernel: BTRFS: device fsid 4023d6e9-59be-468e-b77b-f288583bd7a2 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (625) Jul 12 00:14:04.884997 kernel: BTRFS info (device dm-0): first mount of filesystem 4023d6e9-59be-468e-b77b-f288583bd7a2 Jul 12 00:14:04.885010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:14:04.885020 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 00:14:04.892342 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:14:04.893416 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 00:14:04.894534 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:14:04.895317 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:14:04.896661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:14:04.925528 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (656) Jul 12 00:14:04.927985 kernel: BTRFS info (device vda6): first mount of filesystem 071a7f34-e2fb-4b74-8190-8a0f2c41c1d5 Jul 12 00:14:04.928022 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:14:04.928032 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 00:14:04.933534 kernel: BTRFS info (device vda6): last unmount of filesystem 071a7f34-e2fb-4b74-8190-8a0f2c41c1d5 Jul 12 00:14:04.934576 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:14:04.936281 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 12 00:14:05.006175 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:14:05.009027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:14:05.047916 systemd-networkd[797]: lo: Link UP Jul 12 00:14:05.047926 systemd-networkd[797]: lo: Gained carrier Jul 12 00:14:05.048701 systemd-networkd[797]: Enumeration completed Jul 12 00:14:05.049503 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:14:05.049506 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:14:05.050389 systemd-networkd[797]: eth0: Link UP Jul 12 00:14:05.050392 systemd-networkd[797]: eth0: Gained carrier Jul 12 00:14:05.050400 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:14:05.052015 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:14:05.053028 systemd[1]: Reached target network.target - Network. 
Jul 12 00:14:05.077582 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:14:05.093053 ignition[697]: Ignition 2.21.0 Jul 12 00:14:05.093360 ignition[697]: Stage: fetch-offline Jul 12 00:14:05.093717 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:05.093727 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:05.094007 ignition[697]: parsed url from cmdline: "" Jul 12 00:14:05.094011 ignition[697]: no config URL provided Jul 12 00:14:05.094015 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:14:05.094023 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:14:05.094047 ignition[697]: op(1): [started] loading QEMU firmware config module Jul 12 00:14:05.094051 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:14:05.104336 ignition[697]: op(1): [finished] loading QEMU firmware config module Jul 12 00:14:05.142432 ignition[697]: parsing config with SHA512: 8be25c07d11e8b677de97a64a7bf6837c113e1a9320194090562cb2ca558fedb9051202f004b7715176f38207f385710d6d86ef777049df21f84f03eecbb5871 Jul 12 00:14:05.148344 unknown[697]: fetched base config from "system" Jul 12 00:14:05.148356 unknown[697]: fetched user config from "qemu" Jul 12 00:14:05.148762 ignition[697]: fetch-offline: fetch-offline passed Jul 12 00:14:05.148891 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.143 Jul 12 00:14:05.148816 ignition[697]: Ignition finished successfully Jul 12 00:14:05.148898 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Jul 12 00:14:05.151913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:14:05.153851 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jul 12 00:14:05.154789 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 00:14:05.185802 ignition[812]: Ignition 2.21.0 Jul 12 00:14:05.185821 ignition[812]: Stage: kargs Jul 12 00:14:05.185958 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:05.185967 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:05.188092 ignition[812]: kargs: kargs passed Jul 12 00:14:05.188146 ignition[812]: Ignition finished successfully Jul 12 00:14:05.191233 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:14:05.193021 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:14:05.212012 ignition[820]: Ignition 2.21.0 Jul 12 00:14:05.212030 ignition[820]: Stage: disks Jul 12 00:14:05.212234 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:05.212245 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:05.213699 ignition[820]: disks: disks passed Jul 12 00:14:05.213751 ignition[820]: Ignition finished successfully Jul 12 00:14:05.216237 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:14:05.217435 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:14:05.218616 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:14:05.220052 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:14:05.221429 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:14:05.222704 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:14:05.224889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:14:05.242829 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 12 00:14:05.271633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jul 12 00:14:05.273630 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:14:05.350532 kernel: EXT4-fs (vda9): mounted filesystem 756da7b6-6dd1-4c13-8d18-4b4d7a20148d r/w with ordered data mode. Quota mode: none. Jul 12 00:14:05.350551 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:14:05.351597 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:14:05.353555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:14:05.354920 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:14:05.355704 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 00:14:05.355742 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:14:05.355766 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:14:05.368154 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:14:05.370474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 00:14:05.373915 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 12 00:14:05.373946 kernel: BTRFS info (device vda6): first mount of filesystem 071a7f34-e2fb-4b74-8190-8a0f2c41c1d5 Jul 12 00:14:05.373956 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:14:05.373966 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 00:14:05.377599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 12 00:14:05.416857 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:14:05.420278 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:14:05.424417 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:14:05.428324 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:14:05.588264 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:14:05.592741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:14:05.594487 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:14:05.633573 kernel: BTRFS info (device vda6): last unmount of filesystem 071a7f34-e2fb-4b74-8190-8a0f2c41c1d5 Jul 12 00:14:05.652782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:14:05.670440 ignition[954]: INFO : Ignition 2.21.0 Jul 12 00:14:05.670440 ignition[954]: INFO : Stage: mount Jul 12 00:14:05.672552 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:05.672552 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:05.674045 ignition[954]: INFO : mount: mount passed Jul 12 00:14:05.674045 ignition[954]: INFO : Ignition finished successfully Jul 12 00:14:05.675869 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:14:05.681719 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:14:05.877665 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:14:05.883966 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 00:14:05.916478 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966) Jul 12 00:14:05.918813 kernel: BTRFS info (device vda6): first mount of filesystem 071a7f34-e2fb-4b74-8190-8a0f2c41c1d5 Jul 12 00:14:05.918854 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:14:05.918865 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 00:14:05.925211 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:14:05.979845 ignition[983]: INFO : Ignition 2.21.0 Jul 12 00:14:05.979845 ignition[983]: INFO : Stage: files Jul 12 00:14:05.983329 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:05.983329 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:05.985500 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:14:05.986327 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:14:05.986327 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:14:05.989375 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:14:05.990413 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:14:05.990413 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:14:05.990281 unknown[983]: wrote ssh authorized keys file for user: core Jul 12 00:14:05.993842 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:14:05.993842 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:14:06.050278 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Jul 12 00:14:06.255627 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:14:06.255627 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:14:06.258466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:14:06.269406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:14:06.269406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:14:06.269406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:14:06.269406 ignition[983]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:14:06.269406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:14:06.269406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:14:06.664583 systemd-networkd[797]: eth0: Gained IPv6LL Jul 12 00:14:06.787722 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 12 00:14:07.177864 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:14:07.177864 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 12 00:14:07.184101 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:14:07.190366 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:14:07.190366 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 12 00:14:07.190366 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 12 00:14:07.190366 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:14:07.195843 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:14:07.195843 ignition[983]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jul 12 00:14:07.195843 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 00:14:07.228412 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:14:07.232221 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:14:07.234912 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 00:14:07.234912 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:14:07.234912 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:14:07.234912 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:14:07.234912 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:14:07.234912 ignition[983]: INFO : files: files passed Jul 12 00:14:07.234912 ignition[983]: INFO : Ignition finished successfully Jul 12 00:14:07.235565 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:14:07.240364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:14:07.259014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:14:07.267660 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:14:07.269559 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 12 00:14:07.272300 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Jul 12 00:14:07.276063 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:14:07.276063 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:14:07.278776 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:14:07.278555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:14:07.280184 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 00:14:07.284137 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 00:14:07.342731 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:14:07.342881 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:14:07.344709 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:14:07.346177 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:14:07.347605 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:14:07.348572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:14:07.382837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:14:07.385040 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:14:07.407108 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:14:07.408073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:14:07.409502 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 12 00:14:07.410909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:14:07.411041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:14:07.412978 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:14:07.414388 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:14:07.415579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:14:07.416818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:14:07.418201 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:14:07.419586 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 12 00:14:07.420997 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:14:07.422290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:14:07.423708 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:14:07.425125 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:14:07.426353 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:14:07.427442 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:14:07.427580 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:14:07.429541 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:14:07.431025 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:14:07.432411 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:14:07.435765 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:14:07.436730 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:14:07.436860 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 12 00:14:07.438929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:14:07.439040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:14:07.440441 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:14:07.441622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:14:07.441744 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:14:07.443178 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:14:07.444326 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:14:07.445630 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:14:07.445726 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:14:07.447354 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:14:07.447429 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:14:07.448662 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:14:07.448781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:14:07.450057 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:14:07.450155 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:14:07.452167 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:14:07.453032 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:14:07.453148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:14:07.455403 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:14:07.456818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:14:07.456940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 12 00:14:07.458323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:14:07.458432 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:14:07.463233 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:14:07.464700 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:14:07.478267 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:14:07.483007 ignition[1039]: INFO : Ignition 2.21.0 Jul 12 00:14:07.483007 ignition[1039]: INFO : Stage: umount Jul 12 00:14:07.484334 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:14:07.484334 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:14:07.484334 ignition[1039]: INFO : umount: umount passed Jul 12 00:14:07.484334 ignition[1039]: INFO : Ignition finished successfully Jul 12 00:14:07.486078 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:14:07.486183 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:14:07.489980 systemd[1]: Stopped target network.target - Network. Jul 12 00:14:07.491148 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:14:07.491265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:14:07.492394 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:14:07.492444 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:14:07.493787 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:14:07.493844 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:14:07.495079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:14:07.495121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:14:07.496663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jul 12 00:14:07.497951 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:14:07.502733 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:14:07.502859 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:14:07.506947 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 12 00:14:07.507507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:14:07.507618 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:14:07.511638 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 12 00:14:07.511911 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:14:07.512038 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:14:07.515644 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 12 00:14:07.516134 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 12 00:14:07.517180 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:14:07.517223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:14:07.520099 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:14:07.521079 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:14:07.521141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:14:07.522920 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:14:07.522969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:14:07.524582 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:14:07.524639 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 12 00:14:07.526052 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:14:07.532454 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:14:07.536093 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:14:07.536203 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:14:07.539745 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:14:07.539843 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:14:07.544235 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:14:07.556863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:14:07.558134 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:14:07.558173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:14:07.559937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:14:07.559969 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:14:07.561292 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:14:07.561334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:14:07.563495 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:14:07.563557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:14:07.565465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:14:07.565571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:14:07.568621 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:14:07.569424 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jul 12 00:14:07.569479 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:14:07.572166 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:14:07.572207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:14:07.574965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:14:07.575011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:14:07.578325 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:14:07.586683 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:14:07.592167 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:14:07.592275 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:14:07.594272 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:14:07.596641 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:14:07.614995 systemd[1]: Switching root. Jul 12 00:14:07.640856 systemd-journald[245]: Journal stopped Jul 12 00:14:08.445468 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jul 12 00:14:08.445547 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:14:08.445564 kernel: SELinux: policy capability open_perms=1 Jul 12 00:14:08.445573 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:14:08.445586 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:14:08.445610 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:14:08.445625 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:14:08.445634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:14:08.445644 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:14:08.445652 kernel: SELinux: policy capability userspace_initial_context=0 Jul 12 00:14:08.445661 kernel: audit: type=1403 audit(1752279247.811:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:14:08.445672 systemd[1]: Successfully loaded SELinux policy in 52.697ms. Jul 12 00:14:08.445688 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.171ms. Jul 12 00:14:08.445706 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 00:14:08.445733 systemd[1]: Detected virtualization kvm. Jul 12 00:14:08.445743 systemd[1]: Detected architecture arm64. Jul 12 00:14:08.445752 systemd[1]: Detected first boot. Jul 12 00:14:08.445762 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:14:08.445773 zram_generator::config[1083]: No configuration found. Jul 12 00:14:08.445784 kernel: NET: Registered PF_VSOCK protocol family Jul 12 00:14:08.445793 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:14:08.445804 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 12 00:14:08.445816 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:14:08.445826 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:14:08.445836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:14:08.445847 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:14:08.445856 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:14:08.445868 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:14:08.445878 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:14:08.445888 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:14:08.445900 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:14:08.445910 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:14:08.445920 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:14:08.445929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:14:08.445939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:14:08.445949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:14:08.445959 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:14:08.445969 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:14:08.445978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:14:08.445992 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 12 00:14:08.446002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:14:08.446012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:14:08.446021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:14:08.446031 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:14:08.446040 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:14:08.446051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:14:08.446062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:14:08.446072 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:14:08.446083 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:14:08.446093 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:14:08.446103 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:14:08.446113 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:14:08.446123 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 12 00:14:08.446132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:14:08.446142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:14:08.446152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:14:08.446163 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:14:08.446174 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:14:08.446183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:14:08.446193 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 12 00:14:08.446203 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:14:08.446212 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:14:08.446222 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:14:08.446232 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:14:08.446243 systemd[1]: Reached target machines.target - Containers. Jul 12 00:14:08.446264 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:14:08.446273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:08.446287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:14:08.446297 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:14:08.446307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:14:08.446317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:14:08.446327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:14:08.446337 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:14:08.446349 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:14:08.446359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:14:08.446369 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:14:08.446379 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:14:08.446389 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 12 00:14:08.446399 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:14:08.446409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:08.446419 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:14:08.446430 kernel: loop: module loaded Jul 12 00:14:08.446440 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:14:08.446449 kernel: ACPI: bus type drm_connector registered Jul 12 00:14:08.446458 kernel: fuse: init (API version 7.41) Jul 12 00:14:08.446468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:14:08.446478 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:14:08.446487 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 12 00:14:08.446497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:14:08.446587 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:14:08.446608 systemd[1]: Stopped verity-setup.service. Jul 12 00:14:08.446619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:14:08.446629 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:14:08.446639 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:14:08.446649 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:14:08.446661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:14:08.446670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:14:08.446705 systemd-journald[1158]: Collecting audit messages is disabled. 
Jul 12 00:14:08.446729 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:14:08.446740 systemd-journald[1158]: Journal started Jul 12 00:14:08.446761 systemd-journald[1158]: Runtime Journal (/run/log/journal/9df9d7736a454c7697fe055f5d9f917b) is 6M, max 48.5M, 42.4M free. Jul 12 00:14:08.221712 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:14:08.242097 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:14:08.242532 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:14:08.449539 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:14:08.450146 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:14:08.451338 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:14:08.451576 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:14:08.452685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:14:08.452852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:14:08.454001 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:14:08.454161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:14:08.455223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:14:08.455389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:14:08.456723 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:14:08.456889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:14:08.457913 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:14:08.458069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:14:08.459301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 12 00:14:08.460423 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:14:08.461805 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:14:08.462961 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 12 00:14:08.474232 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:14:08.476535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:14:08.478367 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:14:08.479246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:14:08.479275 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:14:08.481019 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 12 00:14:08.489824 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:14:08.490938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:08.492631 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:14:08.495689 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:14:08.496678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:14:08.498689 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:14:08.499568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 12 00:14:08.500638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:14:08.504700 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:14:08.508465 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:14:08.511679 systemd-journald[1158]: Time spent on flushing to /var/log/journal/9df9d7736a454c7697fe055f5d9f917b is 20.326ms for 884 entries. Jul 12 00:14:08.511679 systemd-journald[1158]: System Journal (/var/log/journal/9df9d7736a454c7697fe055f5d9f917b) is 8M, max 195.6M, 187.6M free. Jul 12 00:14:08.551713 systemd-journald[1158]: Received client request to flush runtime journal. Jul 12 00:14:08.551812 kernel: loop0: detected capacity change from 0 to 107312 Jul 12 00:14:08.551841 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:14:08.513629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:14:08.516621 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:14:08.517952 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:14:08.521459 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:14:08.525272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:14:08.531769 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 12 00:14:08.554729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:14:08.557819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:14:08.563616 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 12 00:14:08.567075 kernel: loop1: detected capacity change from 0 to 138376 Jul 12 00:14:08.577015 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 12 00:14:08.580592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:14:08.595607 kernel: loop2: detected capacity change from 0 to 203944 Jul 12 00:14:08.611472 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 12 00:14:08.611492 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 12 00:14:08.615733 kernel: loop3: detected capacity change from 0 to 107312 Jul 12 00:14:08.616181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:14:08.625559 kernel: loop4: detected capacity change from 0 to 138376 Jul 12 00:14:08.634982 kernel: loop5: detected capacity change from 0 to 203944 Jul 12 00:14:08.637133 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:14:08.637550 (sd-merge)[1221]: Merged extensions into '/usr'. Jul 12 00:14:08.642483 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:14:08.642498 systemd[1]: Reloading... Jul 12 00:14:08.699542 zram_generator::config[1252]: No configuration found. Jul 12 00:14:08.769806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:14:08.817815 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:14:08.834856 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:14:08.835236 systemd[1]: Reloading finished in 192 ms. Jul 12 00:14:08.859065 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:14:08.860241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:14:08.877886 systemd[1]: Starting ensure-sysext.service... 
Jul 12 00:14:08.879506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:14:08.894535 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 12 00:14:08.894570 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 12 00:14:08.894798 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:14:08.894987 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:14:08.895499 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:14:08.895539 systemd[1]: Reloading... Jul 12 00:14:08.895624 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:14:08.895830 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 12 00:14:08.895881 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 12 00:14:08.898911 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:14:08.898927 systemd-tmpfiles[1284]: Skipping /boot Jul 12 00:14:08.908153 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:14:08.908171 systemd-tmpfiles[1284]: Skipping /boot Jul 12 00:14:08.942601 zram_generator::config[1312]: No configuration found. Jul 12 00:14:09.008222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:14:09.070572 systemd[1]: Reloading finished in 174 ms. Jul 12 00:14:09.090939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jul 12 00:14:09.096214 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:14:09.104504 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:14:09.106602 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:14:09.108783 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:14:09.111677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:14:09.113397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:14:09.118113 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:14:09.127278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:09.129997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:14:09.136140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:14:09.140698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:14:09.141578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:09.141713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:09.144561 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:14:09.145982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:14:09.146136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:14:09.148293 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 12 00:14:09.148455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:14:09.150359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:14:09.150506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:14:09.154444 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jul 12 00:14:09.159364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:09.160968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:14:09.163081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:14:09.165111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:14:09.167751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:09.168434 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:09.171066 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:14:09.173664 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:14:09.176015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:14:09.180023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:14:09.180636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:14:09.182567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:14:09.182762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 12 00:14:09.184601 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:14:09.184849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:14:09.194704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:09.196443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:14:09.201484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:14:09.204768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:14:09.207966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:14:09.208939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:09.209064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:09.214554 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:14:09.216204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:14:09.217186 augenrules[1392]: No rules Jul 12 00:14:09.219188 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:14:09.220706 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:14:09.220902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:14:09.222336 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:14:09.222486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:14:09.223879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 12 00:14:09.224028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:14:09.225928 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:14:09.226102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:14:09.230105 systemd[1]: Finished ensure-sysext.service. Jul 12 00:14:09.232489 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:14:09.234997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:14:09.235646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:14:09.261765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:14:09.262635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:14:09.262718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:14:09.264632 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:14:09.265455 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:14:09.343040 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:14:09.377833 systemd-resolved[1351]: Positive Trust Anchors: Jul 12 00:14:09.377852 systemd-resolved[1351]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:14:09.377885 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:14:09.379552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:14:09.382631 systemd-networkd[1437]: lo: Link UP Jul 12 00:14:09.382642 systemd-networkd[1437]: lo: Gained carrier Jul 12 00:14:09.383403 systemd-networkd[1437]: Enumeration completed Jul 12 00:14:09.383741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:14:09.384652 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:14:09.385347 systemd-resolved[1351]: Defaulting to hostname 'linux'. Jul 12 00:14:09.391211 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:14:09.391223 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:14:09.391717 systemd-networkd[1437]: eth0: Link UP Jul 12 00:14:09.391836 systemd-networkd[1437]: eth0: Gained carrier Jul 12 00:14:09.391849 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:14:09.393960 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 12 00:14:09.396726 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:14:09.397690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:14:09.399132 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:14:09.400475 systemd[1]: Reached target network.target - Network. Jul 12 00:14:09.401188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:14:09.402244 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:14:09.403100 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:14:09.403988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:14:09.404964 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:14:09.405847 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:14:09.405874 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:14:09.406481 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:14:09.407369 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:14:09.408239 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:14:09.409353 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:14:09.411012 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:14:09.413061 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 12 00:14:09.415925 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:14:09.416053 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 00:14:09.417145 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 12 00:14:09.417204 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection. Jul 12 00:14:09.418126 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 00:14:09.419930 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:14:09.420076 systemd-timesyncd[1438]: Initial clock synchronization to Sat 2025-07-12 00:14:09.763697 UTC. Jul 12 00:14:09.420849 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:14:09.422044 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 12 00:14:09.423835 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:14:09.424918 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:14:09.426422 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:14:09.427269 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:14:09.428050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:14:09.428083 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:14:09.437769 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:14:09.440101 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:14:09.443177 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 12 00:14:09.449417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:14:09.451701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:14:09.452434 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:14:09.454718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:14:09.465711 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:14:09.467709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:14:09.471777 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:14:09.478774 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:14:09.481441 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:14:09.481877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:14:09.485219 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:14:09.488684 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:14:09.491529 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 00:14:09.497522 extend-filesystems[1468]: Found /dev/vda6 Jul 12 00:14:09.499389 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:14:09.500970 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:14:09.501576 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:14:09.502871 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 12 00:14:09.503050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:14:09.503283 jq[1489]: true Jul 12 00:14:09.517300 extend-filesystems[1468]: Found /dev/vda9 Jul 12 00:14:09.520239 jq[1467]: false Jul 12 00:14:09.522390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:14:09.522813 extend-filesystems[1468]: Checking size of /dev/vda9 Jul 12 00:14:09.524139 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:14:09.528099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:14:09.530208 jq[1493]: true Jul 12 00:14:09.562679 extend-filesystems[1468]: Resized partition /dev/vda9 Jul 12 00:14:09.577153 dbus-daemon[1461]: [system] SELinux support is enabled Jul 12 00:14:09.577807 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:14:09.580403 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) Jul 12 00:14:09.581315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:14:09.584720 update_engine[1484]: I20250712 00:14:09.584246 1484 main.cc:92] Flatcar Update Engine starting Jul 12 00:14:09.581344 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:14:09.582488 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:14:09.582505 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 12 00:14:09.589294 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:14:09.590961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:14:09.592318 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:14:09.640540 tar[1499]: linux-arm64/helm Jul 12 00:14:09.640866 update_engine[1484]: I20250712 00:14:09.602641 1484 update_check_scheduler.cc:74] Next update check in 7m24s Jul 12 00:14:09.597850 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:14:09.642117 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:14:09.642311 systemd-logind[1478]: New seat seat0. Jul 12 00:14:09.642977 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:14:09.669534 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:14:09.670657 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:14:09.672219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:14:09.675818 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:14:09.692029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:14:09.692949 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:14:09.692949 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:14:09.692949 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:14:09.699348 extend-filesystems[1468]: Resized filesystem in /dev/vda9 Jul 12 00:14:09.696899 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:14:09.697265 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 12 00:14:09.714066 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:14:09.777459 containerd[1494]: time="2025-07-12T00:14:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 00:14:09.778212 containerd[1494]: time="2025-07-12T00:14:09.778173680Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 12 00:14:09.788226 containerd[1494]: time="2025-07-12T00:14:09.788175360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.96µs" Jul 12 00:14:09.788226 containerd[1494]: time="2025-07-12T00:14:09.788216360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 00:14:09.788322 containerd[1494]: time="2025-07-12T00:14:09.788234720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 00:14:09.788418 containerd[1494]: time="2025-07-12T00:14:09.788396200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 00:14:09.788444 containerd[1494]: time="2025-07-12T00:14:09.788417040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 00:14:09.788462 containerd[1494]: time="2025-07-12T00:14:09.788442040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788518 containerd[1494]: time="2025-07-12T00:14:09.788490720Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788550 containerd[1494]: time="2025-07-12T00:14:09.788507320Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788778 containerd[1494]: time="2025-07-12T00:14:09.788749760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788778 containerd[1494]: time="2025-07-12T00:14:09.788770920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788823 containerd[1494]: time="2025-07-12T00:14:09.788782640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:14:09.788823 containerd[1494]: time="2025-07-12T00:14:09.788790600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 00:14:09.789361 containerd[1494]: time="2025-07-12T00:14:09.788863240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 00:14:09.789361 containerd[1494]: time="2025-07-12T00:14:09.789054480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:14:09.789361 containerd[1494]: time="2025-07-12T00:14:09.789081600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:14:09.789361 containerd[1494]: time="2025-07-12T00:14:09.789091080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 00:14:09.789766 containerd[1494]: time="2025-07-12T00:14:09.789732720Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 00:14:09.791218 containerd[1494]: time="2025-07-12T00:14:09.791173960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 00:14:09.791318 containerd[1494]: time="2025-07-12T00:14:09.791292120Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:14:09.795391 containerd[1494]: time="2025-07-12T00:14:09.795354000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 00:14:09.795446 containerd[1494]: time="2025-07-12T00:14:09.795411480Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 00:14:09.795446 containerd[1494]: time="2025-07-12T00:14:09.795427080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 00:14:09.795446 containerd[1494]: time="2025-07-12T00:14:09.795439920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 00:14:09.795528 containerd[1494]: time="2025-07-12T00:14:09.795452400Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 00:14:09.795528 containerd[1494]: time="2025-07-12T00:14:09.795463600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 00:14:09.795528 containerd[1494]: time="2025-07-12T00:14:09.795474600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 00:14:09.795528 containerd[1494]: time="2025-07-12T00:14:09.795486760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 00:14:09.795610 containerd[1494]: time="2025-07-12T00:14:09.795532040Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 00:14:09.795610 containerd[1494]: time="2025-07-12T00:14:09.795549640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 00:14:09.795610 containerd[1494]: time="2025-07-12T00:14:09.795560200Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 00:14:09.795610 containerd[1494]: time="2025-07-12T00:14:09.795572880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 00:14:09.795725 containerd[1494]: time="2025-07-12T00:14:09.795701320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 00:14:09.795749 containerd[1494]: time="2025-07-12T00:14:09.795729840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 00:14:09.795749 containerd[1494]: time="2025-07-12T00:14:09.795745760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 00:14:09.795786 containerd[1494]: time="2025-07-12T00:14:09.795757080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 00:14:09.795786 containerd[1494]: time="2025-07-12T00:14:09.795767600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 00:14:09.795819 containerd[1494]: time="2025-07-12T00:14:09.795784560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 00:14:09.795819 containerd[1494]: time="2025-07-12T00:14:09.795797440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 00:14:09.795819 containerd[1494]: time="2025-07-12T00:14:09.795810160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 
00:14:09.795874 containerd[1494]: time="2025-07-12T00:14:09.795824080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 00:14:09.795874 containerd[1494]: time="2025-07-12T00:14:09.795835240Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 00:14:09.795874 containerd[1494]: time="2025-07-12T00:14:09.795847200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 00:14:09.796422 containerd[1494]: time="2025-07-12T00:14:09.796024880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 00:14:09.796422 containerd[1494]: time="2025-07-12T00:14:09.796043720Z" level=info msg="Start snapshots syncer" Jul 12 00:14:09.796422 containerd[1494]: time="2025-07-12T00:14:09.796071680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 00:14:09.796487 containerd[1494]: time="2025-07-12T00:14:09.796267280Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 00:14:09.796487 containerd[1494]: time="2025-07-12T00:14:09.796317240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 00:14:09.796487 containerd[1494]: time="2025-07-12T00:14:09.796390800Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796497000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796544040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796563760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796574600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796597600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796611880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 00:14:09.796627 containerd[1494]: time="2025-07-12T00:14:09.796622080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796647760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796660520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796672680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796714760Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796730440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796739040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796747880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:14:09.796763 containerd[1494]: time="2025-07-12T00:14:09.796756560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796781760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796793760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796871520Z" level=info msg="runtime interface created" Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796876880Z" level=info msg="created NRI interface" Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796886880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796897800Z" level=info msg="Connect containerd service" Jul 12 00:14:09.799695 containerd[1494]: time="2025-07-12T00:14:09.796924040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:14:09.799695 
containerd[1494]: time="2025-07-12T00:14:09.797629200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:14:09.912035 containerd[1494]: time="2025-07-12T00:14:09.911913280Z" level=info msg="Start subscribing containerd event" Jul 12 00:14:09.912035 containerd[1494]: time="2025-07-12T00:14:09.912030200Z" level=info msg="Start recovering state" Jul 12 00:14:09.912139 containerd[1494]: time="2025-07-12T00:14:09.912129320Z" level=info msg="Start event monitor" Jul 12 00:14:09.912178 containerd[1494]: time="2025-07-12T00:14:09.912157920Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:14:09.912203 containerd[1494]: time="2025-07-12T00:14:09.912172600Z" level=info msg="Start streaming server" Jul 12 00:14:09.912236 containerd[1494]: time="2025-07-12T00:14:09.912202160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 00:14:09.912236 containerd[1494]: time="2025-07-12T00:14:09.912210320Z" level=info msg="runtime interface starting up..." Jul 12 00:14:09.912236 containerd[1494]: time="2025-07-12T00:14:09.912216160Z" level=info msg="starting plugins..." Jul 12 00:14:09.912236 containerd[1494]: time="2025-07-12T00:14:09.912231720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 00:14:09.912310 containerd[1494]: time="2025-07-12T00:14:09.912268240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:14:09.912435 containerd[1494]: time="2025-07-12T00:14:09.912415120Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:14:09.912611 containerd[1494]: time="2025-07-12T00:14:09.912588200Z" level=info msg="containerd successfully booted in 0.135531s" Jul 12 00:14:09.912689 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 12 00:14:10.028872 tar[1499]: linux-arm64/LICENSE Jul 12 00:14:10.028971 tar[1499]: linux-arm64/README.md Jul 12 00:14:10.051604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:14:10.886702 systemd-networkd[1437]: eth0: Gained IPv6LL Jul 12 00:14:10.890658 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:14:10.893186 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:14:10.896052 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:14:10.898889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:10.910619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:14:10.925866 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:14:10.926098 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:14:10.927475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:14:10.936422 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:14:11.061109 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:14:11.080984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:14:11.083721 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:14:11.102643 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:14:11.102864 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:14:11.105484 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:14:11.136819 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:14:11.139500 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:14:11.141747 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jul 12 00:14:11.142761 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:14:11.488399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:11.489774 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:14:11.490724 systemd[1]: Startup finished in 2.090s (kernel) + 5.176s (initrd) + 3.736s (userspace) = 11.003s. Jul 12 00:14:11.492508 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:12.171801 kubelet[1613]: E0712 00:14:12.171707 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:12.174205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:12.174346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:12.174697 systemd[1]: kubelet.service: Consumed 842ms CPU time, 257.6M memory peak. Jul 12 00:14:16.194663 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:14:16.196298 systemd[1]: Started sshd@0-10.0.0.143:22-10.0.0.1:39740.service - OpenSSH per-connection server daemon (10.0.0.1:39740). Jul 12 00:14:16.264553 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 39740 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:16.266736 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:16.272923 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:14:16.273981 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 12 00:14:16.280715 systemd-logind[1478]: New session 1 of user core. Jul 12 00:14:16.298634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:14:16.301692 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:14:16.322754 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:14:16.324929 systemd-logind[1478]: New session c1 of user core. Jul 12 00:14:16.445277 systemd[1630]: Queued start job for default target default.target. Jul 12 00:14:16.463518 systemd[1630]: Created slice app.slice - User Application Slice. Jul 12 00:14:16.463574 systemd[1630]: Reached target paths.target - Paths. Jul 12 00:14:16.463613 systemd[1630]: Reached target timers.target - Timers. Jul 12 00:14:16.464874 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:14:16.474033 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:14:16.474092 systemd[1630]: Reached target sockets.target - Sockets. Jul 12 00:14:16.474130 systemd[1630]: Reached target basic.target - Basic System. Jul 12 00:14:16.474166 systemd[1630]: Reached target default.target - Main User Target. Jul 12 00:14:16.474200 systemd[1630]: Startup finished in 143ms. Jul 12 00:14:16.474350 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:14:16.476223 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:14:16.535165 systemd[1]: Started sshd@1-10.0.0.143:22-10.0.0.1:39744.service - OpenSSH per-connection server daemon (10.0.0.1:39744). Jul 12 00:14:16.577612 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 39744 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:16.578953 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:16.583895 systemd-logind[1478]: New session 2 of user core. 
Jul 12 00:14:16.593718 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:14:16.648733 sshd[1643]: Connection closed by 10.0.0.1 port 39744 Jul 12 00:14:16.649595 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:16.662527 systemd[1]: sshd@1-10.0.0.143:22-10.0.0.1:39744.service: Deactivated successfully. Jul 12 00:14:16.667311 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:14:16.668403 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:14:16.673725 systemd[1]: Started sshd@2-10.0.0.143:22-10.0.0.1:39756.service - OpenSSH per-connection server daemon (10.0.0.1:39756). Jul 12 00:14:16.674293 systemd-logind[1478]: Removed session 2. Jul 12 00:14:16.732658 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 39756 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:16.734350 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:16.739077 systemd-logind[1478]: New session 3 of user core. Jul 12 00:14:16.754766 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:14:16.805200 sshd[1651]: Connection closed by 10.0.0.1 port 39756 Jul 12 00:14:16.807207 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:16.816507 systemd[1]: sshd@2-10.0.0.143:22-10.0.0.1:39756.service: Deactivated successfully. Jul 12 00:14:16.818355 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:14:16.819099 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:14:16.822773 systemd[1]: Started sshd@3-10.0.0.143:22-10.0.0.1:39768.service - OpenSSH per-connection server daemon (10.0.0.1:39768). Jul 12 00:14:16.823339 systemd-logind[1478]: Removed session 3. 
Jul 12 00:14:16.874099 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 39768 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:16.875384 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:16.879599 systemd-logind[1478]: New session 4 of user core. Jul 12 00:14:16.893732 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:14:16.945978 sshd[1659]: Connection closed by 10.0.0.1 port 39768 Jul 12 00:14:16.946326 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:16.958795 systemd[1]: sshd@3-10.0.0.143:22-10.0.0.1:39768.service: Deactivated successfully. Jul 12 00:14:16.960315 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:14:16.961621 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:14:16.963517 systemd[1]: Started sshd@4-10.0.0.143:22-10.0.0.1:39776.service - OpenSSH per-connection server daemon (10.0.0.1:39776). Jul 12 00:14:16.964391 systemd-logind[1478]: Removed session 4. Jul 12 00:14:17.010147 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 39776 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:17.011520 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:17.016597 systemd-logind[1478]: New session 5 of user core. Jul 12 00:14:17.025766 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 12 00:14:17.088022 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:14:17.088295 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:17.102109 sudo[1668]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:17.103524 sshd[1667]: Connection closed by 10.0.0.1 port 39776 Jul 12 00:14:17.104230 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:17.119867 systemd[1]: sshd@4-10.0.0.143:22-10.0.0.1:39776.service: Deactivated successfully. Jul 12 00:14:17.122587 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:14:17.123716 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:14:17.129838 systemd[1]: Started sshd@5-10.0.0.143:22-10.0.0.1:39782.service - OpenSSH per-connection server daemon (10.0.0.1:39782). Jul 12 00:14:17.130653 systemd-logind[1478]: Removed session 5. Jul 12 00:14:17.193009 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 39782 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:17.194128 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:17.198284 systemd-logind[1478]: New session 6 of user core. Jul 12 00:14:17.209754 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 12 00:14:17.260689 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:14:17.261485 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:17.434584 sudo[1678]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:17.439645 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 00:14:17.439899 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:17.447812 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:14:17.479381 augenrules[1700]: No rules Jul 12 00:14:17.480460 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:14:17.480763 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:14:17.482708 sudo[1677]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:17.484288 sshd[1676]: Connection closed by 10.0.0.1 port 39782 Jul 12 00:14:17.484175 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:17.493471 systemd[1]: sshd@5-10.0.0.143:22-10.0.0.1:39782.service: Deactivated successfully. Jul 12 00:14:17.494924 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:14:17.497001 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:14:17.499295 systemd[1]: Started sshd@6-10.0.0.143:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792). Jul 12 00:14:17.499850 systemd-logind[1478]: Removed session 6. Jul 12 00:14:17.546893 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:14:17.547978 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:17.552155 systemd-logind[1478]: New session 7 of user core. 
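Annotator's note: the two sudo commands above delete the shipped audit rule files and restart `audit-rules`, which is why `augenrules` then logs `No rules`. A small non-destructive sketch for inspecting such a directory afterwards (the function name is hypothetical; the default path is the one in the log):

```shell
# Count the audit rule files left in a rules.d-style directory.
# With 80-selinux.rules and 99-default.rules removed as in the log,
# an otherwise-empty /etc/audit/rules.d yields 0, matching the
# "augenrules[1700]: No rules" line.
count_rule_files() {
  # $1: directory to inspect (defaults to the path used in the log)
  ls "${1:-/etc/audit/rules.d}"/*.rules 2>/dev/null | wc -l
}
```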
Jul 12 00:14:17.563671 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:14:17.614093 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:14:17.614672 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:17.999602 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:14:18.023901 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:14:18.318804 dockerd[1733]: time="2025-07-12T00:14:18.318655056Z" level=info msg="Starting up" Jul 12 00:14:18.320176 dockerd[1733]: time="2025-07-12T00:14:18.320131106Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 00:14:18.521874 systemd[1]: var-lib-docker-metacopy\x2dcheck4283975775-merged.mount: Deactivated successfully. Jul 12 00:14:18.625284 dockerd[1733]: time="2025-07-12T00:14:18.625169473Z" level=info msg="Loading containers: start." Jul 12 00:14:18.658422 kernel: Initializing XFRM netlink socket Jul 12 00:14:18.956741 systemd-networkd[1437]: docker0: Link UP Jul 12 00:14:18.977269 dockerd[1733]: time="2025-07-12T00:14:18.977174743Z" level=info msg="Loading containers: done." 
Jul 12 00:14:19.008952 dockerd[1733]: time="2025-07-12T00:14:19.008904703Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:14:19.009145 dockerd[1733]: time="2025-07-12T00:14:19.008999635Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 12 00:14:19.009145 dockerd[1733]: time="2025-07-12T00:14:19.009140431Z" level=info msg="Initializing buildkit" Jul 12 00:14:19.132449 dockerd[1733]: time="2025-07-12T00:14:19.132406378Z" level=info msg="Completed buildkit initialization" Jul 12 00:14:19.144583 dockerd[1733]: time="2025-07-12T00:14:19.144525404Z" level=info msg="Daemon has completed initialization" Jul 12 00:14:19.146066 dockerd[1733]: time="2025-07-12T00:14:19.144594809Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:14:19.145092 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:14:19.749555 containerd[1494]: time="2025-07-12T00:14:19.749483247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:14:20.923848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943400279.mount: Deactivated successfully. Jul 12 00:14:22.424748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:14:22.426709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
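Annotator's note: the `kubelet.service: Scheduled restart job, restart counter is at 1` line is systemd re-queuing the failed unit per its `Restart=` policy. A sketch of the relevant `[Service]` settings, assuming a typical kubeadm-style drop-in — the path and values here are illustrative, not read from this host:

```ini
; /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical path)
[Service]
; Restart the kubelet whenever it exits, producing the
; status=1/FAILURE -> "Scheduled restart job" cycle seen in the log.
Restart=always
; Delay between attempts; the roughly 10 s gaps between failures in
; this log are consistent with the common kubeadm default of 10 s.
RestartSec=10
```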
Jul 12 00:14:22.477303 containerd[1494]: time="2025-07-12T00:14:22.477256466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:22.514779 containerd[1494]: time="2025-07-12T00:14:22.514728233Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 12 00:14:22.584089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:22.587765 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:22.597237 containerd[1494]: time="2025-07-12T00:14:22.597171237Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:22.602548 containerd[1494]: time="2025-07-12T00:14:22.602276510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:22.604015 containerd[1494]: time="2025-07-12T00:14:22.603954558Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.854405024s" Jul 12 00:14:22.604113 containerd[1494]: time="2025-07-12T00:14:22.604097021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:14:22.606265 containerd[1494]: 
time="2025-07-12T00:14:22.606236349Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:14:22.630419 kubelet[2004]: E0712 00:14:22.630364 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:22.633514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:22.633672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:22.635623 systemd[1]: kubelet.service: Consumed 153ms CPU time, 106.8M memory peak. Jul 12 00:14:24.227020 containerd[1494]: time="2025-07-12T00:14:24.226977243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:24.227719 containerd[1494]: time="2025-07-12T00:14:24.227694608Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 12 00:14:24.229020 containerd[1494]: time="2025-07-12T00:14:24.228563693Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:24.232076 containerd[1494]: time="2025-07-12T00:14:24.232046111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:24.232709 containerd[1494]: time="2025-07-12T00:14:24.232681818Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id 
\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.626412441s" Jul 12 00:14:24.232767 containerd[1494]: time="2025-07-12T00:14:24.232709521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:14:24.233692 containerd[1494]: time="2025-07-12T00:14:24.233660263Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:14:25.421220 containerd[1494]: time="2025-07-12T00:14:25.421170049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:25.422636 containerd[1494]: time="2025-07-12T00:14:25.422603638Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 12 00:14:25.423486 containerd[1494]: time="2025-07-12T00:14:25.423462995Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:25.426371 containerd[1494]: time="2025-07-12T00:14:25.426341077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:25.427265 containerd[1494]: time="2025-07-12T00:14:25.427236562Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.193540954s" Jul 12 00:14:25.427371 containerd[1494]: time="2025-07-12T00:14:25.427343700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:14:25.427860 containerd[1494]: time="2025-07-12T00:14:25.427836382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:14:26.401072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223465058.mount: Deactivated successfully. Jul 12 00:14:26.739874 containerd[1494]: time="2025-07-12T00:14:26.739763772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:26.740993 containerd[1494]: time="2025-07-12T00:14:26.740787864Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 12 00:14:26.741746 containerd[1494]: time="2025-07-12T00:14:26.741709841Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:26.743937 containerd[1494]: time="2025-07-12T00:14:26.743912406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:26.744740 containerd[1494]: time="2025-07-12T00:14:26.744626935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.316760063s" Jul 12 00:14:26.744740 containerd[1494]: time="2025-07-12T00:14:26.744660223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:14:26.745244 containerd[1494]: time="2025-07-12T00:14:26.745210563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:14:27.293612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686348593.mount: Deactivated successfully. Jul 12 00:14:27.937233 containerd[1494]: time="2025-07-12T00:14:27.937180072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:27.939240 containerd[1494]: time="2025-07-12T00:14:27.939204866Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 12 00:14:27.941121 containerd[1494]: time="2025-07-12T00:14:27.941083575Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:27.944426 containerd[1494]: time="2025-07-12T00:14:27.944388831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:27.945912 containerd[1494]: time="2025-07-12T00:14:27.945877977Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.200633126s" Jul 12 00:14:27.945951 containerd[1494]: time="2025-07-12T00:14:27.945912489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:14:27.946875 containerd[1494]: time="2025-07-12T00:14:27.946678597Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:14:28.368333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61474524.mount: Deactivated successfully. Jul 12 00:14:28.372378 containerd[1494]: time="2025-07-12T00:14:28.371877376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:14:28.372495 containerd[1494]: time="2025-07-12T00:14:28.372395341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 00:14:28.373628 containerd[1494]: time="2025-07-12T00:14:28.373595427Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:14:28.375444 containerd[1494]: time="2025-07-12T00:14:28.375388048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:14:28.375992 containerd[1494]: time="2025-07-12T00:14:28.375956930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.243945ms" Jul 12 00:14:28.376049 containerd[1494]: time="2025-07-12T00:14:28.375991705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:14:28.376535 containerd[1494]: time="2025-07-12T00:14:28.376469755Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:14:28.909810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511231545.mount: Deactivated successfully. Jul 12 00:14:31.002070 containerd[1494]: time="2025-07-12T00:14:31.002018217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:31.003127 containerd[1494]: time="2025-07-12T00:14:31.003041835Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 12 00:14:31.004337 containerd[1494]: time="2025-07-12T00:14:31.003864652Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:31.125610 containerd[1494]: time="2025-07-12T00:14:31.125555322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:31.126859 containerd[1494]: time="2025-07-12T00:14:31.126821451Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.750314442s" Jul 12 00:14:31.126859 containerd[1494]: time="2025-07-12T00:14:31.126859951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:14:32.884149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:14:32.886763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:33.014227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:33.025008 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:33.059468 kubelet[2169]: E0712 00:14:33.059409 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:33.061983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:33.062116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:33.062679 systemd[1]: kubelet.service: Consumed 137ms CPU time, 106.8M memory peak. Jul 12 00:14:36.931800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:36.931944 systemd[1]: kubelet.service: Consumed 137ms CPU time, 106.8M memory peak. Jul 12 00:14:36.934215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:36.954559 systemd[1]: Reload requested from client PID 2183 ('systemctl') (unit session-7.scope)... Jul 12 00:14:36.954579 systemd[1]: Reloading... Jul 12 00:14:37.017595 zram_generator::config[2230]: No configuration found. 
Jul 12 00:14:37.141556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:14:37.228884 systemd[1]: Reloading finished in 274 ms. Jul 12 00:14:37.293218 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:14:37.293312 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:14:37.293592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:37.293645 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Jul 12 00:14:37.295348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:37.408934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:37.413147 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:14:37.450210 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:14:37.450210 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:14:37.450210 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
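Annotator's note: the deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via `--config`. A hedged sketch of the equivalent `KubeletConfiguration` stanza — the endpoint shown is the common containerd default socket, an assumption rather than a value read from this host; the plugin directory is the one the kubelet itself logs further down:

```yaml
# Fragment of /var/lib/kubelet/config.yaml (the file the earlier
# startup failures reported as missing). Field names are from the
# kubelet.config.k8s.io/v1beta1 API.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed default
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```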
Jul 12 00:14:37.450591 kubelet[2272]: I0712 00:14:37.450266 2272 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:14:37.965402 kubelet[2272]: I0712 00:14:37.965347 2272 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:14:37.965402 kubelet[2272]: I0712 00:14:37.965381 2272 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:14:37.965738 kubelet[2272]: I0712 00:14:37.965708 2272 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:14:38.021520 kubelet[2272]: E0712 00:14:38.021472 2272 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:38.022284 kubelet[2272]: I0712 00:14:38.022267 2272 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:14:38.031123 kubelet[2272]: I0712 00:14:38.031093 2272 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:14:38.035023 kubelet[2272]: I0712 00:14:38.034981 2272 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:14:38.035726 kubelet[2272]: I0712 00:14:38.035699 2272 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:14:38.035911 kubelet[2272]: I0712 00:14:38.035880 2272 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:14:38.036082 kubelet[2272]: I0712 00:14:38.035912 2272 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 12 00:14:38.036161 kubelet[2272]: I0712 00:14:38.036151 2272 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:14:38.036232 kubelet[2272]: I0712 00:14:38.036162 2272 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:14:38.036415 kubelet[2272]: I0712 00:14:38.036399 2272 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:14:38.039076 kubelet[2272]: I0712 00:14:38.038422 2272 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:14:38.039076 kubelet[2272]: I0712 00:14:38.038460 2272 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:14:38.039076 kubelet[2272]: I0712 00:14:38.038481 2272 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:14:38.039076 kubelet[2272]: I0712 00:14:38.038497 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:14:38.042971 kubelet[2272]: W0712 00:14:38.042801 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Jul 12 00:14:38.043366 kubelet[2272]: W0712 00:14:38.043304 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Jul 12 00:14:38.043420 kubelet[2272]: E0712 00:14:38.043374 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:38.043604 kubelet[2272]: 
E0712 00:14:38.043576 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:38.043895 kubelet[2272]: I0712 00:14:38.043863 2272 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:14:38.044790 kubelet[2272]: I0712 00:14:38.044760 2272 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:14:38.044902 kubelet[2272]: W0712 00:14:38.044885 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:14:38.046364 kubelet[2272]: I0712 00:14:38.046339 2272 server.go:1274] "Started kubelet" Jul 12 00:14:38.051394 kubelet[2272]: I0712 00:14:38.049819 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:14:38.051394 kubelet[2272]: I0712 00:14:38.050230 2272 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:14:38.051394 kubelet[2272]: I0712 00:14:38.050407 2272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:14:38.051394 kubelet[2272]: I0712 00:14:38.050872 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:14:38.051822 kubelet[2272]: I0712 00:14:38.051798 2272 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:14:38.052818 kubelet[2272]: I0712 00:14:38.052265 2272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:14:38.054309 kubelet[2272]: E0712 
00:14:38.054275 2272 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:14:38.054411 kubelet[2272]: E0712 00:14:38.054367 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:38.054451 kubelet[2272]: I0712 00:14:38.054421 2272 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:14:38.054692 kubelet[2272]: I0712 00:14:38.054666 2272 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:14:38.054740 kubelet[2272]: I0712 00:14:38.054729 2272 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:14:38.054972 kubelet[2272]: E0712 00:14:38.054934 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="200ms" Jul 12 00:14:38.055227 kubelet[2272]: I0712 00:14:38.055203 2272 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:14:38.055349 kubelet[2272]: I0712 00:14:38.055325 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:14:38.055389 kubelet[2272]: W0712 00:14:38.055335 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Jul 12 00:14:38.055425 kubelet[2272]: E0712 00:14:38.055386 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:38.056256 kubelet[2272]: E0712 00:14:38.053479 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.143:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.143:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158b82b3a7b44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:14:38.046313284 +0000 UTC m=+0.630000556,LastTimestamp:2025-07-12 00:14:38.046313284 +0000 UTC m=+0.630000556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:14:38.056875 kubelet[2272]: I0712 00:14:38.056848 2272 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:14:38.067464 kubelet[2272]: I0712 00:14:38.067439 2272 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:14:38.067464 kubelet[2272]: I0712 00:14:38.067456 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:14:38.067709 kubelet[2272]: I0712 00:14:38.067489 2272 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:14:38.068486 kubelet[2272]: I0712 00:14:38.067927 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:14:38.069081 kubelet[2272]: I0712 00:14:38.069046 2272 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:14:38.069081 kubelet[2272]: I0712 00:14:38.069073 2272 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:14:38.069135 kubelet[2272]: I0712 00:14:38.069091 2272 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:14:38.069156 kubelet[2272]: E0712 00:14:38.069131 2272 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:14:38.075711 kubelet[2272]: W0712 00:14:38.075628 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Jul 12 00:14:38.075711 kubelet[2272]: E0712 00:14:38.075706 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:38.154838 kubelet[2272]: E0712 00:14:38.154770 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:38.170092 kubelet[2272]: E0712 00:14:38.170044 2272 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:14:38.175518 kubelet[2272]: I0712 00:14:38.175476 2272 policy_none.go:49] "None policy: Start" Jul 12 00:14:38.176201 kubelet[2272]: I0712 00:14:38.176185 2272 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:14:38.176260 kubelet[2272]: I0712 00:14:38.176246 2272 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:14:38.182263 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 12 00:14:38.193129 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:14:38.196666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:14:38.218946 kubelet[2272]: I0712 00:14:38.218601 2272 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:14:38.218946 kubelet[2272]: I0712 00:14:38.218847 2272 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:14:38.218946 kubelet[2272]: I0712 00:14:38.218858 2272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:14:38.219629 kubelet[2272]: I0712 00:14:38.219108 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:14:38.220832 kubelet[2272]: E0712 00:14:38.220734 2272 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:14:38.255340 kubelet[2272]: E0712 00:14:38.255286 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="400ms" Jul 12 00:14:38.320856 kubelet[2272]: I0712 00:14:38.320740 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:14:38.321470 kubelet[2272]: E0712 00:14:38.321440 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Jul 12 00:14:38.380130 systemd[1]: Created slice kubepods-burstable-pod7faea27eb1de2d16921241302e45dd32.slice - libcontainer container kubepods-burstable-pod7faea27eb1de2d16921241302e45dd32.slice. 
Jul 12 00:14:38.392899 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 12 00:14:38.403101 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 12 00:14:38.455891 kubelet[2272]: I0712 00:14:38.455849 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:14:38.455891 kubelet[2272]: I0712 00:14:38.455883 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:14:38.455891 kubelet[2272]: I0712 00:14:38.455903 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:14:38.456288 kubelet[2272]: I0712 00:14:38.455922 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost" 
Jul 12 00:14:38.456288 kubelet[2272]: I0712 00:14:38.455936 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:14:38.456288 kubelet[2272]: I0712 00:14:38.455951 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:14:38.456288 kubelet[2272]: I0712 00:14:38.455966 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:14:38.456288 kubelet[2272]: I0712 00:14:38.456006 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:14:38.456391 kubelet[2272]: I0712 00:14:38.456051 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost" Jul 12 
00:14:38.523783 kubelet[2272]: I0712 00:14:38.523680 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:14:38.524058 kubelet[2272]: E0712 00:14:38.524029 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Jul 12 00:14:38.656303 kubelet[2272]: E0712 00:14:38.656230 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="800ms" Jul 12 00:14:38.692259 containerd[1494]: time="2025-07-12T00:14:38.692208308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7faea27eb1de2d16921241302e45dd32,Namespace:kube-system,Attempt:0,}" Jul 12 00:14:38.701932 containerd[1494]: time="2025-07-12T00:14:38.701877585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:14:38.705872 containerd[1494]: time="2025-07-12T00:14:38.705784495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:14:38.710292 containerd[1494]: time="2025-07-12T00:14:38.710205411Z" level=info msg="connecting to shim 126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6" address="unix:///run/containerd/s/c21a0eff31902dac28f916675813b7bb8ee0735f08702ab852cd50422631a63d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:14:38.733474 containerd[1494]: time="2025-07-12T00:14:38.733258999Z" level=info msg="connecting to shim 8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360" 
address="unix:///run/containerd/s/4818fbd52107cc3bc001f60b9017cd420e8450969be43f49f1edbbc84fbfaf33" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:14:38.740746 systemd[1]: Started cri-containerd-126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6.scope - libcontainer container 126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6. Jul 12 00:14:38.743458 containerd[1494]: time="2025-07-12T00:14:38.742060109Z" level=info msg="connecting to shim b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f" address="unix:///run/containerd/s/bdb72073227ff21c673c7e913fd74723603a392277230e74894b8095ad8991c2" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:14:38.758761 systemd[1]: Started cri-containerd-8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360.scope - libcontainer container 8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360. Jul 12 00:14:38.767425 systemd[1]: Started cri-containerd-b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f.scope - libcontainer container b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f. 
Jul 12 00:14:38.797500 containerd[1494]: time="2025-07-12T00:14:38.797272265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7faea27eb1de2d16921241302e45dd32,Namespace:kube-system,Attempt:0,} returns sandbox id \"126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6\"" Jul 12 00:14:38.803240 containerd[1494]: time="2025-07-12T00:14:38.803192032Z" level=info msg="CreateContainer within sandbox \"126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:14:38.809435 containerd[1494]: time="2025-07-12T00:14:38.809374426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360\"" Jul 12 00:14:38.811328 containerd[1494]: time="2025-07-12T00:14:38.811226999Z" level=info msg="Container 7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:14:38.814191 containerd[1494]: time="2025-07-12T00:14:38.814144859Z" level=info msg="CreateContainer within sandbox \"8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:14:38.819039 containerd[1494]: time="2025-07-12T00:14:38.818507515Z" level=info msg="CreateContainer within sandbox \"126f8ad8060041ff7d56fffa2f5b7d58b16c15c232d7c5b083a1da1a5a8b4ad6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529\"" Jul 12 00:14:38.819039 containerd[1494]: time="2025-07-12T00:14:38.818995133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f\"" Jul 12 00:14:38.819218 containerd[1494]: time="2025-07-12T00:14:38.819190453Z" level=info msg="StartContainer for \"7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529\"" Jul 12 00:14:38.820319 containerd[1494]: time="2025-07-12T00:14:38.820280246Z" level=info msg="connecting to shim 7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529" address="unix:///run/containerd/s/c21a0eff31902dac28f916675813b7bb8ee0735f08702ab852cd50422631a63d" protocol=ttrpc version=3 Jul 12 00:14:38.822931 containerd[1494]: time="2025-07-12T00:14:38.822879221Z" level=info msg="CreateContainer within sandbox \"b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:14:38.824529 containerd[1494]: time="2025-07-12T00:14:38.824165134Z" level=info msg="Container 465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:14:38.833057 containerd[1494]: time="2025-07-12T00:14:38.833016536Z" level=info msg="Container 6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:14:38.835551 containerd[1494]: time="2025-07-12T00:14:38.835327696Z" level=info msg="CreateContainer within sandbox \"8f6b1c52a59c4f1252ad0b1b6a8dc1c13f7eabd60a3f7065f47327cc9ab4d360\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192\"" Jul 12 00:14:38.835762 containerd[1494]: time="2025-07-12T00:14:38.835725102Z" level=info msg="StartContainer for \"465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192\"" Jul 12 00:14:38.837683 containerd[1494]: time="2025-07-12T00:14:38.837640298Z" level=info msg="connecting to shim 465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192" 
address="unix:///run/containerd/s/4818fbd52107cc3bc001f60b9017cd420e8450969be43f49f1edbbc84fbfaf33" protocol=ttrpc version=3 Jul 12 00:14:38.839946 systemd[1]: Started cri-containerd-7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529.scope - libcontainer container 7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529. Jul 12 00:14:38.844546 containerd[1494]: time="2025-07-12T00:14:38.844224544Z" level=info msg="CreateContainer within sandbox \"b933b142c8d509180dd97970a33582ff71cea7ef846bdb09df2139b12ec62c9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71\"" Jul 12 00:14:38.846135 containerd[1494]: time="2025-07-12T00:14:38.844884098Z" level=info msg="StartContainer for \"6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71\"" Jul 12 00:14:38.846135 containerd[1494]: time="2025-07-12T00:14:38.845899775Z" level=info msg="connecting to shim 6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71" address="unix:///run/containerd/s/bdb72073227ff21c673c7e913fd74723603a392277230e74894b8095ad8991c2" protocol=ttrpc version=3 Jul 12 00:14:38.861796 systemd[1]: Started cri-containerd-465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192.scope - libcontainer container 465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192. Jul 12 00:14:38.866386 systemd[1]: Started cri-containerd-6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71.scope - libcontainer container 6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71. 
Jul 12 00:14:38.892884 containerd[1494]: time="2025-07-12T00:14:38.892835357Z" level=info msg="StartContainer for \"7160cea88fbc21a90f07d9d0e11b5e1b626d7680d4dec3d7e0f3d5c500568529\" returns successfully" Jul 12 00:14:38.914681 containerd[1494]: time="2025-07-12T00:14:38.914621730Z" level=info msg="StartContainer for \"6d2595691aff96987aef4c155accc467f1090b6996a2c60b665cbc3fdf20af71\" returns successfully" Jul 12 00:14:38.919570 containerd[1494]: time="2025-07-12T00:14:38.918562636Z" level=info msg="StartContainer for \"465cf65019726d46f5c1196b18c68a4e60014a67e0186c65773d05407ffc8192\" returns successfully" Jul 12 00:14:38.926319 kubelet[2272]: I0712 00:14:38.926042 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:14:38.926717 kubelet[2272]: E0712 00:14:38.926688 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Jul 12 00:14:38.944143 kubelet[2272]: W0712 00:14:38.944037 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Jul 12 00:14:38.944343 kubelet[2272]: E0712 00:14:38.944306 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:39.035487 kubelet[2272]: W0712 00:14:39.035395 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: 
connect: connection refused Jul 12 00:14:39.035702 kubelet[2272]: E0712 00:14:39.035665 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.143:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:14:39.728478 kubelet[2272]: I0712 00:14:39.728445 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:14:40.429214 kubelet[2272]: E0712 00:14:40.429158 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:14:40.510253 kubelet[2272]: I0712 00:14:40.510040 2272 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:14:40.510253 kubelet[2272]: E0712 00:14:40.510086 2272 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:14:40.534233 kubelet[2272]: E0712 00:14:40.533970 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:40.634879 kubelet[2272]: E0712 00:14:40.634840 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:40.735954 kubelet[2272]: E0712 00:14:40.735559 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:40.836151 kubelet[2272]: E0712 00:14:40.836100 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:14:41.039741 kubelet[2272]: I0712 00:14:41.039629 2272 apiserver.go:52] "Watching apiserver" Jul 12 00:14:41.055261 kubelet[2272]: I0712 00:14:41.055225 2272 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" Jul 12 00:14:42.374569 systemd[1]: Reload requested from client PID 2545 ('systemctl') (unit session-7.scope)... Jul 12 00:14:42.374587 systemd[1]: Reloading... Jul 12 00:14:42.438634 zram_generator::config[2588]: No configuration found. Jul 12 00:14:42.509858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:14:42.609382 systemd[1]: Reloading finished in 234 ms. Jul 12 00:14:42.643319 kubelet[2272]: I0712 00:14:42.643184 2272 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:14:42.643795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:42.662766 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:14:42.663043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:42.663104 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 126.6M memory peak. Jul 12 00:14:42.665034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:42.805469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:42.815881 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:14:42.863041 kubelet[2630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:14:42.863041 kubelet[2630]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 12 00:14:42.863041 kubelet[2630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:14:42.863415 kubelet[2630]: I0712 00:14:42.863110 2630 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:14:42.870552 kubelet[2630]: I0712 00:14:42.870485 2630 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:14:42.870552 kubelet[2630]: I0712 00:14:42.870547 2630 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:14:42.870868 kubelet[2630]: I0712 00:14:42.870849 2630 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:14:42.872575 kubelet[2630]: I0712 00:14:42.872499 2630 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:14:42.874920 kubelet[2630]: I0712 00:14:42.874886 2630 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:14:42.879649 kubelet[2630]: I0712 00:14:42.879600 2630 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:14:42.882763 kubelet[2630]: I0712 00:14:42.882729 2630 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:14:42.883075 kubelet[2630]: I0712 00:14:42.883058 2630 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:14:42.883306 kubelet[2630]: I0712 00:14:42.883274 2630 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:14:42.883590 kubelet[2630]: I0712 00:14:42.883373 2630 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2}
Jul 12 00:14:42.883743 kubelet[2630]: I0712 00:14:42.883726 2630 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:14:42.883806 kubelet[2630]: I0712 00:14:42.883798 2630 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 00:14:42.883903 kubelet[2630]: I0712 00:14:42.883884 2630 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:14:42.884079 kubelet[2630]: I0712 00:14:42.884065 2630 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 00:14:42.884187 kubelet[2630]: I0712 00:14:42.884172 2630 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:14:42.884274 kubelet[2630]: I0712 00:14:42.884264 2630 kubelet.go:314] "Adding apiserver pod source"
Jul 12 00:14:42.884599 kubelet[2630]: I0712 00:14:42.884572 2630 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:14:42.887575 kubelet[2630]: I0712 00:14:42.886268 2630 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 12 00:14:42.887575 kubelet[2630]: I0712 00:14:42.886865 2630 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:14:42.887575 kubelet[2630]: I0712 00:14:42.887342 2630 server.go:1274] "Started kubelet"
Jul 12 00:14:42.889864 kubelet[2630]: I0712 00:14:42.889792 2630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:14:42.889963 kubelet[2630]: I0712 00:14:42.889889 2630 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:14:42.890439 kubelet[2630]: I0712 00:14:42.890406 2630 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:14:42.891764 kubelet[2630]: I0712 00:14:42.891738 2630 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 00:14:42.892131 kubelet[2630]: I0712 00:14:42.892101 2630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:14:42.893397 kubelet[2630]: I0712 00:14:42.893298 2630 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:14:42.897723 kubelet[2630]: I0712 00:14:42.897679 2630 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 00:14:42.898046 kubelet[2630]: E0712 00:14:42.897999 2630 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:14:42.899982 kubelet[2630]: I0712 00:14:42.899927 2630 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 00:14:42.900092 kubelet[2630]: I0712 00:14:42.900075 2630 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:14:42.900258 kubelet[2630]: I0712 00:14:42.900239 2630 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:14:42.900462 kubelet[2630]: I0712 00:14:42.900440 2630 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:14:42.915396 kubelet[2630]: I0712 00:14:42.915354 2630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:14:42.917541 kubelet[2630]: I0712 00:14:42.917492 2630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:14:42.917679 kubelet[2630]: I0712 00:14:42.917667 2630 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 00:14:42.917766 kubelet[2630]: I0712 00:14:42.917755 2630 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 00:14:42.917901 kubelet[2630]: E0712 00:14:42.917884 2630 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:14:42.919252 kubelet[2630]: E0712 00:14:42.919223 2630 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:14:42.920807 kubelet[2630]: I0712 00:14:42.920678 2630 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:14:42.950980 kubelet[2630]: I0712 00:14:42.950954 2630 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 12 00:14:42.951204 kubelet[2630]: I0712 00:14:42.951188 2630 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 12 00:14:42.951282 kubelet[2630]: I0712 00:14:42.951273 2630 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:14:42.951496 kubelet[2630]: I0712 00:14:42.951479 2630 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:14:42.951624 kubelet[2630]: I0712 00:14:42.951595 2630 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:14:42.951676 kubelet[2630]: I0712 00:14:42.951668 2630 policy_none.go:49] "None policy: Start"
Jul 12 00:14:42.952422 kubelet[2630]: I0712 00:14:42.952403 2630 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 12 00:14:42.952483 kubelet[2630]: I0712 00:14:42.952427 2630 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:14:42.952687 kubelet[2630]: I0712 00:14:42.952646 2630 state_mem.go:75] "Updated machine memory state"
Jul 12 00:14:42.957548 kubelet[2630]: I0712 00:14:42.957498 2630 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:14:42.957794 kubelet[2630]: I0712 00:14:42.957724 2630 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:14:42.957839 kubelet[2630]: I0712 00:14:42.957774 2630 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:14:42.958071 kubelet[2630]: I0712 00:14:42.958035 2630 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:14:43.026941 kubelet[2630]: E0712 00:14:43.026900 2630 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.028076 kubelet[2630]: E0712 00:14:43.028021 2630 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:14:43.059544 kubelet[2630]: I0712 00:14:43.059499 2630 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 00:14:43.067553 kubelet[2630]: I0712 00:14:43.067388 2630 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 12 00:14:43.067786 kubelet[2630]: I0712 00:14:43.067772 2630 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 12 00:14:43.201235 kubelet[2630]: I0712 00:14:43.201059 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:14:43.201235 kubelet[2630]: I0712 00:14:43.201107 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.201235 kubelet[2630]: I0712 00:14:43.201131 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.201235 kubelet[2630]: I0712 00:14:43.201149 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.201235 kubelet[2630]: I0712 00:14:43.201285 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.201528 kubelet[2630]: I0712 00:14:43.201308 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:14:43.201699 kubelet[2630]: I0712 00:14:43.201576 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7faea27eb1de2d16921241302e45dd32-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7faea27eb1de2d16921241302e45dd32\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:14:43.201699 kubelet[2630]: I0712 00:14:43.201604 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.201699 kubelet[2630]: I0712 00:14:43.201623 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 00:14:43.885904 kubelet[2630]: I0712 00:14:43.885856 2630 apiserver.go:52] "Watching apiserver"
Jul 12 00:14:43.901040 kubelet[2630]: I0712 00:14:43.900998 2630 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:14:43.927454 kubelet[2630]: I0712 00:14:43.926717 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.926699342 podStartE2EDuration="2.926699342s" podCreationTimestamp="2025-07-12 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:43.926495343 +0000 UTC m=+1.105903386" watchObservedRunningTime="2025-07-12 00:14:43.926699342 +0000 UTC m=+1.106107425"
Jul 12 00:14:43.942714 kubelet[2630]: E0712 00:14:43.942615 2630 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:14:43.979882 kubelet[2630]: I0712 00:14:43.979811 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.979792254 podStartE2EDuration="979.792254ms" podCreationTimestamp="2025-07-12 00:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:43.938782722 +0000 UTC m=+1.118190765" watchObservedRunningTime="2025-07-12 00:14:43.979792254 +0000 UTC m=+1.159200337"
Jul 12 00:14:43.999369 kubelet[2630]: I0712 00:14:43.999301 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.99928349 podStartE2EDuration="2.99928349s" podCreationTimestamp="2025-07-12 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:43.979932763 +0000 UTC m=+1.159340846" watchObservedRunningTime="2025-07-12 00:14:43.99928349 +0000 UTC m=+1.178691573"
Jul 12 00:14:47.911175 kubelet[2630]: I0712 00:14:47.911133 2630 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:14:47.912043 containerd[1494]: time="2025-07-12T00:14:47.911675762Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:14:47.913016 kubelet[2630]: I0712 00:14:47.912368 2630 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:14:49.022948 systemd[1]: Created slice kubepods-besteffort-pode88f34a7_7778_4a87_887a_ef2ff36801e7.slice - libcontainer container kubepods-besteffort-pode88f34a7_7778_4a87_887a_ef2ff36801e7.slice.
Jul 12 00:14:49.037714 kubelet[2630]: I0712 00:14:49.037688 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e88f34a7-7778-4a87-887a-ef2ff36801e7-lib-modules\") pod \"kube-proxy-hdv6c\" (UID: \"e88f34a7-7778-4a87-887a-ef2ff36801e7\") " pod="kube-system/kube-proxy-hdv6c"
Jul 12 00:14:49.037982 kubelet[2630]: I0712 00:14:49.037721 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e88f34a7-7778-4a87-887a-ef2ff36801e7-kube-proxy\") pod \"kube-proxy-hdv6c\" (UID: \"e88f34a7-7778-4a87-887a-ef2ff36801e7\") " pod="kube-system/kube-proxy-hdv6c"
Jul 12 00:14:49.037982 kubelet[2630]: I0712 00:14:49.037738 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e88f34a7-7778-4a87-887a-ef2ff36801e7-xtables-lock\") pod \"kube-proxy-hdv6c\" (UID: \"e88f34a7-7778-4a87-887a-ef2ff36801e7\") " pod="kube-system/kube-proxy-hdv6c"
Jul 12 00:14:49.037982 kubelet[2630]: I0712 00:14:49.037754 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gmx\" (UniqueName: \"kubernetes.io/projected/e88f34a7-7778-4a87-887a-ef2ff36801e7-kube-api-access-72gmx\") pod \"kube-proxy-hdv6c\" (UID: \"e88f34a7-7778-4a87-887a-ef2ff36801e7\") " pod="kube-system/kube-proxy-hdv6c"
Jul 12 00:14:49.106791 systemd[1]: Created slice kubepods-besteffort-podf5be09ff_3b62_4ab4_9167_e0c8b147ab0a.slice - libcontainer container kubepods-besteffort-podf5be09ff_3b62_4ab4_9167_e0c8b147ab0a.slice.
Jul 12 00:14:49.138315 kubelet[2630]: I0712 00:14:49.138272 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5be09ff-3b62-4ab4-9167-e0c8b147ab0a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-7x82w\" (UID: \"f5be09ff-3b62-4ab4-9167-e0c8b147ab0a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7x82w"
Jul 12 00:14:49.138508 kubelet[2630]: I0712 00:14:49.138486 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrf2x\" (UniqueName: \"kubernetes.io/projected/f5be09ff-3b62-4ab4-9167-e0c8b147ab0a-kube-api-access-nrf2x\") pod \"tigera-operator-5bf8dfcb4-7x82w\" (UID: \"f5be09ff-3b62-4ab4-9167-e0c8b147ab0a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7x82w"
Jul 12 00:14:49.341169 containerd[1494]: time="2025-07-12T00:14:49.341119515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdv6c,Uid:e88f34a7-7778-4a87-887a-ef2ff36801e7,Namespace:kube-system,Attempt:0,}"
Jul 12 00:14:49.354877 containerd[1494]: time="2025-07-12T00:14:49.354786954Z" level=info msg="connecting to shim 2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce" address="unix:///run/containerd/s/6f92e63ab47bad8a66c16ac73a8eb3a3cf9d650ca5a6ab1a45783b106d9bd8eb" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:14:49.387727 systemd[1]: Started cri-containerd-2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce.scope - libcontainer container 2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce.
Jul 12 00:14:49.409785 containerd[1494]: time="2025-07-12T00:14:49.409725378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdv6c,Uid:e88f34a7-7778-4a87-887a-ef2ff36801e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce\""
Jul 12 00:14:49.412723 containerd[1494]: time="2025-07-12T00:14:49.412694877Z" level=info msg="CreateContainer within sandbox \"2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:14:49.413819 containerd[1494]: time="2025-07-12T00:14:49.413786768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7x82w,Uid:f5be09ff-3b62-4ab4-9167-e0c8b147ab0a,Namespace:tigera-operator,Attempt:0,}"
Jul 12 00:14:49.428433 containerd[1494]: time="2025-07-12T00:14:49.428392210Z" level=info msg="Container 213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:14:49.435524 containerd[1494]: time="2025-07-12T00:14:49.435469526Z" level=info msg="CreateContainer within sandbox \"2c00d397757b24283b2a304dff05351d581b9e2b698e2ae7580893aa065d6dce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858\""
Jul 12 00:14:49.436234 containerd[1494]: time="2025-07-12T00:14:49.436119369Z" level=info msg="StartContainer for \"213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858\""
Jul 12 00:14:49.437439 containerd[1494]: time="2025-07-12T00:14:49.437408609Z" level=info msg="connecting to shim 213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858" address="unix:///run/containerd/s/6f92e63ab47bad8a66c16ac73a8eb3a3cf9d650ca5a6ab1a45783b106d9bd8eb" protocol=ttrpc version=3
Jul 12 00:14:49.439090 containerd[1494]: time="2025-07-12T00:14:49.439051888Z" level=info msg="connecting to shim 043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a" address="unix:///run/containerd/s/d3a3bafdc49fb60b862e5dcc384c4baa9998534b2f9c18a330d78af4c117060b" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:14:49.460692 systemd[1]: Started cri-containerd-213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858.scope - libcontainer container 213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858.
Jul 12 00:14:49.463843 systemd[1]: Started cri-containerd-043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a.scope - libcontainer container 043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a.
Jul 12 00:14:49.496889 containerd[1494]: time="2025-07-12T00:14:49.496854473Z" level=info msg="StartContainer for \"213ef8d117834216e7f1f1d39ad0a1d2e725ccb41757ffe6ae992ac263906858\" returns successfully"
Jul 12 00:14:49.501742 containerd[1494]: time="2025-07-12T00:14:49.501706744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7x82w,Uid:f5be09ff-3b62-4ab4-9167-e0c8b147ab0a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a\""
Jul 12 00:14:49.504904 containerd[1494]: time="2025-07-12T00:14:49.504544170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 12 00:14:51.242327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289874785.mount: Deactivated successfully.
Jul 12 00:14:51.655336 kubelet[2630]: I0712 00:14:51.655179 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hdv6c" podStartSLOduration=2.655161384 podStartE2EDuration="2.655161384s" podCreationTimestamp="2025-07-12 00:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:49.959899819 +0000 UTC m=+7.139307902" watchObservedRunningTime="2025-07-12 00:14:51.655161384 +0000 UTC m=+8.834569467"
Jul 12 00:14:51.673887 containerd[1494]: time="2025-07-12T00:14:51.673841408Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:14:51.674945 containerd[1494]: time="2025-07-12T00:14:51.674915147Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 12 00:14:51.675751 containerd[1494]: time="2025-07-12T00:14:51.675728556Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:14:51.677969 containerd[1494]: time="2025-07-12T00:14:51.677934624Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:14:51.678640 containerd[1494]: time="2025-07-12T00:14:51.678611964Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.174037178s"
Jul 12 00:14:51.678690 containerd[1494]: time="2025-07-12T00:14:51.678640859Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 12 00:14:51.681822 containerd[1494]: time="2025-07-12T00:14:51.681791962Z" level=info msg="CreateContainer within sandbox \"043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 12 00:14:51.687954 containerd[1494]: time="2025-07-12T00:14:51.687921001Z" level=info msg="Container 01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:14:51.692901 containerd[1494]: time="2025-07-12T00:14:51.692867966Z" level=info msg="CreateContainer within sandbox \"043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\""
Jul 12 00:14:51.693507 containerd[1494]: time="2025-07-12T00:14:51.693460664Z" level=info msg="StartContainer for \"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\""
Jul 12 00:14:51.694397 containerd[1494]: time="2025-07-12T00:14:51.694369200Z" level=info msg="connecting to shim 01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df" address="unix:///run/containerd/s/d3a3bafdc49fb60b862e5dcc384c4baa9998534b2f9c18a330d78af4c117060b" protocol=ttrpc version=3
Jul 12 00:14:51.717670 systemd[1]: Started cri-containerd-01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df.scope - libcontainer container 01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df.
Jul 12 00:14:51.742339 containerd[1494]: time="2025-07-12T00:14:51.742293794Z" level=info msg="StartContainer for \"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\" returns successfully"
Jul 12 00:14:51.970261 kubelet[2630]: I0712 00:14:51.970092 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-7x82w" podStartSLOduration=0.794818366 podStartE2EDuration="2.970076379s" podCreationTimestamp="2025-07-12 00:14:49 +0000 UTC" firstStartedPulling="2025-07-12 00:14:49.504178005 +0000 UTC m=+6.683586088" lastFinishedPulling="2025-07-12 00:14:51.679436018 +0000 UTC m=+8.858844101" observedRunningTime="2025-07-12 00:14:51.969662611 +0000 UTC m=+9.149070694" watchObservedRunningTime="2025-07-12 00:14:51.970076379 +0000 UTC m=+9.149484462"
Jul 12 00:14:53.935076 systemd[1]: cri-containerd-01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df.scope: Deactivated successfully.
Jul 12 00:14:53.981409 containerd[1494]: time="2025-07-12T00:14:53.981208579Z" level=info msg="received exit event container_id:\"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\" id:\"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\" pid:2961 exit_status:1 exited_at:{seconds:1752279293 nanos:966091656}"
Jul 12 00:14:53.992236 containerd[1494]: time="2025-07-12T00:14:53.992177104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\" id:\"01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df\" pid:2961 exit_status:1 exited_at:{seconds:1752279293 nanos:966091656}"
Jul 12 00:14:54.055091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df-rootfs.mount: Deactivated successfully.
Jul 12 00:14:54.488044 update_engine[1484]: I20250712 00:14:54.487982 1484 update_attempter.cc:509] Updating boot flags...
Jul 12 00:14:54.978544 kubelet[2630]: I0712 00:14:54.977715 2630 scope.go:117] "RemoveContainer" containerID="01913e072a16b67083382a07c51904e19f0f3675dfe7be779d78fea7480ca9df"
Jul 12 00:14:54.987239 containerd[1494]: time="2025-07-12T00:14:54.987201751Z" level=info msg="CreateContainer within sandbox \"043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 12 00:14:54.999059 containerd[1494]: time="2025-07-12T00:14:54.998999545Z" level=info msg="Container baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:14:55.002726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125354730.mount: Deactivated successfully.
Jul 12 00:14:55.004764 containerd[1494]: time="2025-07-12T00:14:55.004730681Z" level=info msg="CreateContainer within sandbox \"043bdaa713539c621e7a663a66d65d934ad3d205f69bc7c52e2d722570ed414a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff\""
Jul 12 00:14:55.005529 containerd[1494]: time="2025-07-12T00:14:55.005488191Z" level=info msg="StartContainer for \"baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff\""
Jul 12 00:14:55.007907 containerd[1494]: time="2025-07-12T00:14:55.007730187Z" level=info msg="connecting to shim baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff" address="unix:///run/containerd/s/d3a3bafdc49fb60b862e5dcc384c4baa9998534b2f9c18a330d78af4c117060b" protocol=ttrpc version=3
Jul 12 00:14:55.030696 systemd[1]: Started cri-containerd-baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff.scope - libcontainer container baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff.
Jul 12 00:14:55.056952 containerd[1494]: time="2025-07-12T00:14:55.056916102Z" level=info msg="StartContainer for \"baf5e87f567efdeeacf434f3d6d6764a5ff539eb956a90227828c59a177bbfff\" returns successfully"
Jul 12 00:14:57.030138 sudo[1712]: pam_unix(sudo:session): session closed for user root
Jul 12 00:14:57.032678 sshd[1711]: Connection closed by 10.0.0.1 port 39792
Jul 12 00:14:57.032983 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jul 12 00:14:57.036692 systemd[1]: sshd@6-10.0.0.143:22-10.0.0.1:39792.service: Deactivated successfully.
Jul 12 00:14:57.038500 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:14:57.038722 systemd[1]: session-7.scope: Consumed 7.870s CPU time, 231.8M memory peak.
Jul 12 00:14:57.039645 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:14:57.041133 systemd-logind[1478]: Removed session 7.
Jul 12 00:15:03.857858 systemd[1]: Created slice kubepods-besteffort-pod2d19cbd8_d24c_4d40_912a_59555693cc4a.slice - libcontainer container kubepods-besteffort-pod2d19cbd8_d24c_4d40_912a_59555693cc4a.slice.
Jul 12 00:15:04.022880 systemd[1]: Created slice kubepods-besteffort-pod2e6e1a2f_5b49_4142_8f70_2990f336ad41.slice - libcontainer container kubepods-besteffort-pod2e6e1a2f_5b49_4142_8f70_2990f336ad41.slice.
Jul 12 00:15:04.037591 kubelet[2630]: I0712 00:15:04.037460 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2d19cbd8-d24c-4d40-912a-59555693cc4a-typha-certs\") pod \"calico-typha-6dfb8d889c-glrn6\" (UID: \"2d19cbd8-d24c-4d40-912a-59555693cc4a\") " pod="calico-system/calico-typha-6dfb8d889c-glrn6"
Jul 12 00:15:04.037938 kubelet[2630]: I0712 00:15:04.037598 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxl7s\" (UniqueName: \"kubernetes.io/projected/2d19cbd8-d24c-4d40-912a-59555693cc4a-kube-api-access-gxl7s\") pod \"calico-typha-6dfb8d889c-glrn6\" (UID: \"2d19cbd8-d24c-4d40-912a-59555693cc4a\") " pod="calico-system/calico-typha-6dfb8d889c-glrn6"
Jul 12 00:15:04.037938 kubelet[2630]: I0712 00:15:04.037624 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d19cbd8-d24c-4d40-912a-59555693cc4a-tigera-ca-bundle\") pod \"calico-typha-6dfb8d889c-glrn6\" (UID: \"2d19cbd8-d24c-4d40-912a-59555693cc4a\") " pod="calico-system/calico-typha-6dfb8d889c-glrn6"
Jul 12 00:15:04.138623 kubelet[2630]: I0712 00:15:04.138341 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6e1a2f-5b49-4142-8f70-2990f336ad41-tigera-ca-bundle\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138623 kubelet[2630]: I0712 00:15:04.138385 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-cni-bin-dir\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138623 kubelet[2630]: I0712 00:15:04.138408 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-flexvol-driver-host\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138623 kubelet[2630]: I0712 00:15:04.138427 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2e6e1a2f-5b49-4142-8f70-2990f336ad41-node-certs\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138623 kubelet[2630]: I0712 00:15:04.138442 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-var-run-calico\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138811 kubelet[2630]: I0712 00:15:04.138457 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhqg\" (UniqueName: \"kubernetes.io/projected/2e6e1a2f-5b49-4142-8f70-2990f336ad41-kube-api-access-6bhqg\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138811 kubelet[2630]: I0712 00:15:04.138472 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-lib-modules\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.138811 kubelet[2630]: I0712 00:15:04.138486 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-var-lib-calico\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.139690 kubelet[2630]: I0712 00:15:04.139636 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-cni-net-dir\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.139690 kubelet[2630]: I0712 00:15:04.139673 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-policysync\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.139780 kubelet[2630]: I0712 00:15:04.139703 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-cni-log-dir\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.139780 kubelet[2630]: I0712 00:15:04.139719 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e6e1a2f-5b49-4142-8f70-2990f336ad41-xtables-lock\") pod \"calico-node-vj658\" (UID: \"2e6e1a2f-5b49-4142-8f70-2990f336ad41\") " pod="calico-system/calico-node-vj658"
Jul 12 00:15:04.165403 containerd[1494]: time="2025-07-12T00:15:04.165360627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfb8d889c-glrn6,Uid:2d19cbd8-d24c-4d40-912a-59555693cc4a,Namespace:calico-system,Attempt:0,}"
Jul 12 00:15:04.208924 kubelet[2630]: E0712 00:15:04.208781 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b"
Jul 12 00:15:04.239124 containerd[1494]: time="2025-07-12T00:15:04.239067031Z" level=info msg="connecting to shim a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff" address="unix:///run/containerd/s/b9999c5a812e251ea29f4d140b4f510c300869e489f15b8157a0bbca42fcc88e" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.256568 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:04.263427 kubelet[2630]: W0712 00:15:04.256595 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.256619 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.256835 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:04.263427 kubelet[2630]: W0712 00:15:04.256845 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.256855 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.257003 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:04.263427 kubelet[2630]: W0712 00:15:04.257012 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:04.263427 kubelet[2630]: E0712 00:15:04.257021 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 12 00:15:04.264542 kubelet[2630]: E0712 00:15:04.263978 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.264542 kubelet[2630]: W0712 00:15:04.264000 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.264542 kubelet[2630]: E0712 00:15:04.264016 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.265549 kubelet[2630]: E0712 00:15:04.265423 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.265549 kubelet[2630]: W0712 00:15:04.265443 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.265549 kubelet[2630]: E0712 00:15:04.265462 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.265806 kubelet[2630]: E0712 00:15:04.265791 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.265976 kubelet[2630]: W0712 00:15:04.265860 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.265976 kubelet[2630]: E0712 00:15:04.265884 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.266112 kubelet[2630]: E0712 00:15:04.266099 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.266162 kubelet[2630]: W0712 00:15:04.266152 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.266225 kubelet[2630]: E0712 00:15:04.266213 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.266448 kubelet[2630]: E0712 00:15:04.266433 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.266550 kubelet[2630]: W0712 00:15:04.266498 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.266698 kubelet[2630]: E0712 00:15:04.266683 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.267075 kubelet[2630]: E0712 00:15:04.266981 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.267075 kubelet[2630]: W0712 00:15:04.266993 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.267075 kubelet[2630]: E0712 00:15:04.267058 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.267362 kubelet[2630]: E0712 00:15:04.267347 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.267450 kubelet[2630]: W0712 00:15:04.267424 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.267556 kubelet[2630]: E0712 00:15:04.267531 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.268295 kubelet[2630]: E0712 00:15:04.268262 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.268481 kubelet[2630]: W0712 00:15:04.268375 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.268481 kubelet[2630]: E0712 00:15:04.268435 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.270317 kubelet[2630]: E0712 00:15:04.270282 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.270317 kubelet[2630]: W0712 00:15:04.270299 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.270528 kubelet[2630]: E0712 00:15:04.270506 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.270810 kubelet[2630]: E0712 00:15:04.270779 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.270810 kubelet[2630]: W0712 00:15:04.270793 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.271282 kubelet[2630]: E0712 00:15:04.271159 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.271787 kubelet[2630]: E0712 00:15:04.271652 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.271787 kubelet[2630]: W0712 00:15:04.271668 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.272491 kubelet[2630]: E0712 00:15:04.272247 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.272715 kubelet[2630]: E0712 00:15:04.272698 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.272778 kubelet[2630]: W0712 00:15:04.272765 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.272937 kubelet[2630]: E0712 00:15:04.272908 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.273164 kubelet[2630]: E0712 00:15:04.273151 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.273240 kubelet[2630]: W0712 00:15:04.273229 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.273326 kubelet[2630]: E0712 00:15:04.273314 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.274233 kubelet[2630]: E0712 00:15:04.274128 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.274579 kubelet[2630]: W0712 00:15:04.274429 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.275556 kubelet[2630]: E0712 00:15:04.275207 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.275556 kubelet[2630]: W0712 00:15:04.275226 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.275556 kubelet[2630]: E0712 00:15:04.275241 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.276682 kubelet[2630]: E0712 00:15:04.276468 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.276682 kubelet[2630]: W0712 00:15:04.276485 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.277612 kubelet[2630]: E0712 00:15:04.277201 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.277906 kubelet[2630]: E0712 00:15:04.277793 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.278729 kubelet[2630]: E0712 00:15:04.278621 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.279100 kubelet[2630]: W0712 00:15:04.279079 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.279250 kubelet[2630]: E0712 00:15:04.279236 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.280166 kubelet[2630]: E0712 00:15:04.280143 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.280248 kubelet[2630]: W0712 00:15:04.280234 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.280567 kubelet[2630]: E0712 00:15:04.280285 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.281742 systemd[1]: Started cri-containerd-a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff.scope - libcontainer container a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff. Jul 12 00:15:04.282648 kubelet[2630]: E0712 00:15:04.282538 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.282766 kubelet[2630]: W0712 00:15:04.282747 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.283158 kubelet[2630]: E0712 00:15:04.282825 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.283645 kubelet[2630]: E0712 00:15:04.283629 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.283891 kubelet[2630]: W0712 00:15:04.283822 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.284298 kubelet[2630]: E0712 00:15:04.284210 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.285128 kubelet[2630]: E0712 00:15:04.284983 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.285657 kubelet[2630]: W0712 00:15:04.285001 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.285657 kubelet[2630]: E0712 00:15:04.285293 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.286149 kubelet[2630]: E0712 00:15:04.286031 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.286723 kubelet[2630]: W0712 00:15:04.286654 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.287567 kubelet[2630]: E0712 00:15:04.287302 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.292287 kubelet[2630]: E0712 00:15:04.292270 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.293558 kubelet[2630]: W0712 00:15:04.293534 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.293558 kubelet[2630]: E0712 00:15:04.293596 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.294528 kubelet[2630]: E0712 00:15:04.294419 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.294825 kubelet[2630]: W0712 00:15:04.294728 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.294825 kubelet[2630]: E0712 00:15:04.294774 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.296560 kubelet[2630]: E0712 00:15:04.295468 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.296560 kubelet[2630]: W0712 00:15:04.295483 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.296751 kubelet[2630]: E0712 00:15:04.295496 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.296998 kubelet[2630]: E0712 00:15:04.296980 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.297168 kubelet[2630]: W0712 00:15:04.297054 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.297168 kubelet[2630]: E0712 00:15:04.297070 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.297293 kubelet[2630]: E0712 00:15:04.297280 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.297367 kubelet[2630]: W0712 00:15:04.297355 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.297422 kubelet[2630]: E0712 00:15:04.297405 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.298703 kubelet[2630]: E0712 00:15:04.298381 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.298703 kubelet[2630]: W0712 00:15:04.298482 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.298703 kubelet[2630]: E0712 00:15:04.298499 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.300361 kubelet[2630]: E0712 00:15:04.300329 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.300361 kubelet[2630]: W0712 00:15:04.300348 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.300361 kubelet[2630]: E0712 00:15:04.300364 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.327200 containerd[1494]: time="2025-07-12T00:15:04.327156029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vj658,Uid:2e6e1a2f-5b49-4142-8f70-2990f336ad41,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:04.341561 kubelet[2630]: E0712 00:15:04.341527 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.341561 kubelet[2630]: W0712 00:15:04.341549 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.341711 kubelet[2630]: E0712 00:15:04.341617 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.341711 kubelet[2630]: I0712 00:15:04.341651 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f56b660-36f9-4f11-961a-175fd72ba31b-varrun\") pod \"csi-node-driver-jncsr\" (UID: \"0f56b660-36f9-4f11-961a-175fd72ba31b\") " pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:04.345372 kubelet[2630]: E0712 00:15:04.345316 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.345372 kubelet[2630]: W0712 00:15:04.345346 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.345372 kubelet[2630]: E0712 00:15:04.345376 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.345557 kubelet[2630]: I0712 00:15:04.345409 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f56b660-36f9-4f11-961a-175fd72ba31b-socket-dir\") pod \"csi-node-driver-jncsr\" (UID: \"0f56b660-36f9-4f11-961a-175fd72ba31b\") " pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:04.347009 kubelet[2630]: E0712 00:15:04.346625 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.347009 kubelet[2630]: W0712 00:15:04.346795 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.347253 kubelet[2630]: E0712 00:15:04.347158 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.347253 kubelet[2630]: I0712 00:15:04.347212 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f56b660-36f9-4f11-961a-175fd72ba31b-kubelet-dir\") pod \"csi-node-driver-jncsr\" (UID: \"0f56b660-36f9-4f11-961a-175fd72ba31b\") " pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:04.347679 kubelet[2630]: E0712 00:15:04.347614 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.347985 kubelet[2630]: W0712 00:15:04.347933 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.348344 kubelet[2630]: E0712 00:15:04.348324 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.348677 kubelet[2630]: E0712 00:15:04.348646 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.348677 kubelet[2630]: W0712 00:15:04.348662 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.349322 kubelet[2630]: E0712 00:15:04.349044 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.350587 kubelet[2630]: E0712 00:15:04.350483 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.350801 kubelet[2630]: W0712 00:15:04.350761 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.351247 kubelet[2630]: E0712 00:15:04.351126 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.351247 kubelet[2630]: I0712 00:15:04.351169 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f56b660-36f9-4f11-961a-175fd72ba31b-registration-dir\") pod \"csi-node-driver-jncsr\" (UID: \"0f56b660-36f9-4f11-961a-175fd72ba31b\") " pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:04.352925 kubelet[2630]: E0712 00:15:04.352887 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.353091 kubelet[2630]: W0712 00:15:04.352997 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.353284 kubelet[2630]: E0712 00:15:04.353224 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.355021 kubelet[2630]: E0712 00:15:04.354991 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.355179 kubelet[2630]: W0712 00:15:04.355105 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.355179 kubelet[2630]: E0712 00:15:04.355132 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.355583 kubelet[2630]: E0712 00:15:04.355540 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.355583 kubelet[2630]: W0712 00:15:04.355555 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.357168 kubelet[2630]: E0712 00:15:04.357146 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.357350 kubelet[2630]: E0712 00:15:04.357318 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.357398 kubelet[2630]: W0712 00:15:04.357332 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.357505 kubelet[2630]: E0712 00:15:04.357463 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.357745 kubelet[2630]: E0712 00:15:04.357727 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.357829 kubelet[2630]: W0712 00:15:04.357800 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.357829 kubelet[2630]: E0712 00:15:04.357816 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.358183 kubelet[2630]: E0712 00:15:04.358145 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.358183 kubelet[2630]: W0712 00:15:04.358158 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.358183 kubelet[2630]: E0712 00:15:04.358168 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.359037 kubelet[2630]: E0712 00:15:04.358552 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.359173 kubelet[2630]: W0712 00:15:04.359125 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.359173 kubelet[2630]: E0712 00:15:04.359149 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.359616 kubelet[2630]: E0712 00:15:04.359529 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.360023 kubelet[2630]: W0712 00:15:04.359966 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.360023 kubelet[2630]: E0712 00:15:04.359989 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.360274 kubelet[2630]: I0712 00:15:04.359908 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxcv\" (UniqueName: \"kubernetes.io/projected/0f56b660-36f9-4f11-961a-175fd72ba31b-kube-api-access-2fxcv\") pod \"csi-node-driver-jncsr\" (UID: \"0f56b660-36f9-4f11-961a-175fd72ba31b\") " pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:04.361540 kubelet[2630]: E0712 00:15:04.360783 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.361540 kubelet[2630]: W0712 00:15:04.361058 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.361540 kubelet[2630]: E0712 00:15:04.361075 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.385100 containerd[1494]: time="2025-07-12T00:15:04.385040854Z" level=info msg="connecting to shim 512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724" address="unix:///run/containerd/s/6bbb294dd001876ac8e7d7609f2e3eecbd1d02fba486cbbaa2e1d5134a095620" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:04.390588 containerd[1494]: time="2025-07-12T00:15:04.390336000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfb8d889c-glrn6,Uid:2d19cbd8-d24c-4d40-912a-59555693cc4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff\"" Jul 12 00:15:04.400622 containerd[1494]: time="2025-07-12T00:15:04.400577717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:15:04.422745 systemd[1]: Started cri-containerd-512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724.scope - libcontainer container 512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724. 
Jul 12 00:15:04.456527 containerd[1494]: time="2025-07-12T00:15:04.456475327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vj658,Uid:2e6e1a2f-5b49-4142-8f70-2990f336ad41,Namespace:calico-system,Attempt:0,} returns sandbox id \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\"" Jul 12 00:15:04.463040 kubelet[2630]: E0712 00:15:04.462502 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.463040 kubelet[2630]: W0712 00:15:04.462857 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.463040 kubelet[2630]: E0712 00:15:04.462881 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.463939 kubelet[2630]: E0712 00:15:04.463873 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.463939 kubelet[2630]: W0712 00:15:04.463890 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.463939 kubelet[2630]: E0712 00:15:04.463911 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.464161 kubelet[2630]: E0712 00:15:04.464116 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.464161 kubelet[2630]: W0712 00:15:04.464132 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.464161 kubelet[2630]: E0712 00:15:04.464151 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.464353 kubelet[2630]: E0712 00:15:04.464341 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.464353 kubelet[2630]: W0712 00:15:04.464352 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.464405 kubelet[2630]: E0712 00:15:04.464369 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.464554 kubelet[2630]: E0712 00:15:04.464540 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.464592 kubelet[2630]: W0712 00:15:04.464580 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.464666 kubelet[2630]: E0712 00:15:04.464597 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.464798 kubelet[2630]: E0712 00:15:04.464784 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.464798 kubelet[2630]: W0712 00:15:04.464795 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465021 kubelet[2630]: E0712 00:15:04.464805 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.465021 kubelet[2630]: E0712 00:15:04.464979 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465021 kubelet[2630]: W0712 00:15:04.464988 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465021 kubelet[2630]: E0712 00:15:04.464997 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.465180 kubelet[2630]: E0712 00:15:04.465146 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465180 kubelet[2630]: W0712 00:15:04.465154 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465180 kubelet[2630]: E0712 00:15:04.465162 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.465410 kubelet[2630]: E0712 00:15:04.465306 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465410 kubelet[2630]: W0712 00:15:04.465313 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465410 kubelet[2630]: E0712 00:15:04.465332 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.465526 kubelet[2630]: E0712 00:15:04.465467 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465526 kubelet[2630]: W0712 00:15:04.465475 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465612 kubelet[2630]: E0712 00:15:04.465559 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.465743 kubelet[2630]: E0712 00:15:04.465728 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465743 kubelet[2630]: W0712 00:15:04.465742 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465819 kubelet[2630]: E0712 00:15:04.465758 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.465923 kubelet[2630]: E0712 00:15:04.465906 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.465923 kubelet[2630]: W0712 00:15:04.465918 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.465981 kubelet[2630]: E0712 00:15:04.465932 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.466089 kubelet[2630]: E0712 00:15:04.466077 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.466089 kubelet[2630]: W0712 00:15:04.466087 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.466281 kubelet[2630]: E0712 00:15:04.466100 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.466281 kubelet[2630]: E0712 00:15:04.466264 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.466281 kubelet[2630]: W0712 00:15:04.466273 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.466281 kubelet[2630]: E0712 00:15:04.466287 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.466682 kubelet[2630]: E0712 00:15:04.466668 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.466829 kubelet[2630]: W0712 00:15:04.466755 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.466829 kubelet[2630]: E0712 00:15:04.466784 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.467110 kubelet[2630]: E0712 00:15:04.467096 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.467266 kubelet[2630]: W0712 00:15:04.467147 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.467266 kubelet[2630]: E0712 00:15:04.467178 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.467505 kubelet[2630]: E0712 00:15:04.467492 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.467505 kubelet[2630]: W0712 00:15:04.467559 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.467505 kubelet[2630]: E0712 00:15:04.467589 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.468111 kubelet[2630]: E0712 00:15:04.468007 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.468111 kubelet[2630]: W0712 00:15:04.468042 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.468111 kubelet[2630]: E0712 00:15:04.468090 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.468450 kubelet[2630]: E0712 00:15:04.468411 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.468639 kubelet[2630]: W0712 00:15:04.468561 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.468639 kubelet[2630]: E0712 00:15:04.468591 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.468888 kubelet[2630]: E0712 00:15:04.468866 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.468928 kubelet[2630]: W0712 00:15:04.468887 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.468928 kubelet[2630]: E0712 00:15:04.468908 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.469160 kubelet[2630]: E0712 00:15:04.469110 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.469160 kubelet[2630]: W0712 00:15:04.469129 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.469160 kubelet[2630]: E0712 00:15:04.469144 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.469337 kubelet[2630]: E0712 00:15:04.469323 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.469337 kubelet[2630]: W0712 00:15:04.469336 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.469470 kubelet[2630]: E0712 00:15:04.469396 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.469470 kubelet[2630]: E0712 00:15:04.469462 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.469470 kubelet[2630]: W0712 00:15:04.469471 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.469770 kubelet[2630]: E0712 00:15:04.469493 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.469770 kubelet[2630]: E0712 00:15:04.469685 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.469770 kubelet[2630]: W0712 00:15:04.469695 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.469770 kubelet[2630]: E0712 00:15:04.469706 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:04.471183 kubelet[2630]: E0712 00:15:04.471161 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.471183 kubelet[2630]: W0712 00:15:04.471182 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.471273 kubelet[2630]: E0712 00:15:04.471201 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:04.480758 kubelet[2630]: E0712 00:15:04.480729 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:04.480758 kubelet[2630]: W0712 00:15:04.480752 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:04.480758 kubelet[2630]: E0712 00:15:04.480771 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:05.647219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625152106.mount: Deactivated successfully. 
Jul 12 00:15:05.922379 kubelet[2630]: E0712 00:15:05.922105 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b" Jul 12 00:15:06.362298 containerd[1494]: time="2025-07-12T00:15:06.362235080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:06.366039 containerd[1494]: time="2025-07-12T00:15:06.365966285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:15:06.371081 containerd[1494]: time="2025-07-12T00:15:06.370998292Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:06.373360 containerd[1494]: time="2025-07-12T00:15:06.373324348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:06.374558 containerd[1494]: time="2025-07-12T00:15:06.374501600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.97387851s" Jul 12 00:15:06.374558 containerd[1494]: time="2025-07-12T00:15:06.374552452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:15:06.382626 containerd[1494]: time="2025-07-12T00:15:06.381745594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:15:06.402675 containerd[1494]: time="2025-07-12T00:15:06.402051185Z" level=info msg="CreateContainer within sandbox \"a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:15:06.425199 containerd[1494]: time="2025-07-12T00:15:06.425086252Z" level=info msg="Container fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:06.449683 containerd[1494]: time="2025-07-12T00:15:06.449630453Z" level=info msg="CreateContainer within sandbox \"a1c3b681b1c755279add63395ede619d2a02a516b49fd71103f441bfd7599dff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f\"" Jul 12 00:15:06.450395 containerd[1494]: time="2025-07-12T00:15:06.450364475Z" level=info msg="StartContainer for \"fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f\"" Jul 12 00:15:06.451817 containerd[1494]: time="2025-07-12T00:15:06.451701847Z" level=info msg="connecting to shim fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f" address="unix:///run/containerd/s/b9999c5a812e251ea29f4d140b4f510c300869e489f15b8157a0bbca42fcc88e" protocol=ttrpc version=3 Jul 12 00:15:06.476746 systemd[1]: Started cri-containerd-fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f.scope - libcontainer container fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f. 
Jul 12 00:15:06.542159 containerd[1494]: time="2025-07-12T00:15:06.542082639Z" level=info msg="StartContainer for \"fe2282b4bfcae49d6536a9a6937cdc0f212b4429c7746a7d1cdf1b59d5ac4f0f\" returns successfully" Jul 12 00:15:07.016956 kubelet[2630]: E0712 00:15:07.016826 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.016956 kubelet[2630]: W0712 00:15:07.016858 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.016956 kubelet[2630]: E0712 00:15:07.016880 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.017375 kubelet[2630]: E0712 00:15:07.017070 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017375 kubelet[2630]: W0712 00:15:07.017078 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017375 kubelet[2630]: E0712 00:15:07.017087 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.017375 kubelet[2630]: E0712 00:15:07.017247 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017375 kubelet[2630]: W0712 00:15:07.017256 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017375 kubelet[2630]: E0712 00:15:07.017265 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.017494 kubelet[2630]: E0712 00:15:07.017408 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017494 kubelet[2630]: W0712 00:15:07.017416 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017494 kubelet[2630]: E0712 00:15:07.017424 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.017607 kubelet[2630]: E0712 00:15:07.017576 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017607 kubelet[2630]: W0712 00:15:07.017584 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017607 kubelet[2630]: E0712 00:15:07.017603 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.017783 kubelet[2630]: E0712 00:15:07.017749 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017783 kubelet[2630]: W0712 00:15:07.017763 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017783 kubelet[2630]: E0712 00:15:07.017772 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.017920 kubelet[2630]: E0712 00:15:07.017909 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.017920 kubelet[2630]: W0712 00:15:07.017919 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.017987 kubelet[2630]: E0712 00:15:07.017927 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.018093 kubelet[2630]: E0712 00:15:07.018082 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.018119 kubelet[2630]: W0712 00:15:07.018093 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.018119 kubelet[2630]: E0712 00:15:07.018101 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.018325 kubelet[2630]: E0712 00:15:07.018308 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.018325 kubelet[2630]: W0712 00:15:07.018322 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.018412 kubelet[2630]: E0712 00:15:07.018336 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.018534 kubelet[2630]: E0712 00:15:07.018506 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.018572 kubelet[2630]: W0712 00:15:07.018535 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.018572 kubelet[2630]: E0712 00:15:07.018544 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.018744 kubelet[2630]: E0712 00:15:07.018712 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.018744 kubelet[2630]: W0712 00:15:07.018726 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.018744 kubelet[2630]: E0712 00:15:07.018739 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.018891 kubelet[2630]: E0712 00:15:07.018880 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.018919 kubelet[2630]: W0712 00:15:07.018890 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.018919 kubelet[2630]: E0712 00:15:07.018899 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.019067 kubelet[2630]: E0712 00:15:07.019055 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.019067 kubelet[2630]: W0712 00:15:07.019065 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.019136 kubelet[2630]: E0712 00:15:07.019074 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.019218 kubelet[2630]: E0712 00:15:07.019207 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.019218 kubelet[2630]: W0712 00:15:07.019217 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.019272 kubelet[2630]: E0712 00:15:07.019224 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.019361 kubelet[2630]: E0712 00:15:07.019350 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.019398 kubelet[2630]: W0712 00:15:07.019361 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.019398 kubelet[2630]: E0712 00:15:07.019372 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.037739 kubelet[2630]: I0712 00:15:07.037400 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dfb8d889c-glrn6" podStartSLOduration=2.050885182 podStartE2EDuration="4.037383914s" podCreationTimestamp="2025-07-12 00:15:03 +0000 UTC" firstStartedPulling="2025-07-12 00:15:04.394340638 +0000 UTC m=+21.573748721" lastFinishedPulling="2025-07-12 00:15:06.38083929 +0000 UTC m=+23.560247453" observedRunningTime="2025-07-12 00:15:07.036200233 +0000 UTC m=+24.215608316" watchObservedRunningTime="2025-07-12 00:15:07.037383914 +0000 UTC m=+24.216791997" Jul 12 00:15:07.085590 kubelet[2630]: E0712 00:15:07.085551 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.085590 kubelet[2630]: W0712 00:15:07.085585 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.085749 kubelet[2630]: E0712 00:15:07.085617 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.086033 kubelet[2630]: E0712 00:15:07.086014 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.086058 kubelet[2630]: W0712 00:15:07.086031 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.086058 kubelet[2630]: E0712 00:15:07.086051 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.086750 kubelet[2630]: E0712 00:15:07.086646 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.086750 kubelet[2630]: W0712 00:15:07.086668 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.086750 kubelet[2630]: E0712 00:15:07.086706 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.086969 kubelet[2630]: E0712 00:15:07.086948 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.086969 kubelet[2630]: W0712 00:15:07.086963 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.087021 kubelet[2630]: E0712 00:15:07.086982 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.087175 kubelet[2630]: E0712 00:15:07.087162 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.087175 kubelet[2630]: W0712 00:15:07.087173 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.087228 kubelet[2630]: E0712 00:15:07.087196 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.087425 kubelet[2630]: E0712 00:15:07.087386 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.087456 kubelet[2630]: W0712 00:15:07.087425 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.087488 kubelet[2630]: E0712 00:15:07.087460 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.087658 kubelet[2630]: E0712 00:15:07.087644 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.087658 kubelet[2630]: W0712 00:15:07.087656 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.087740 kubelet[2630]: E0712 00:15:07.087717 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.087825 kubelet[2630]: E0712 00:15:07.087813 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.087825 kubelet[2630]: W0712 00:15:07.087823 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.087872 kubelet[2630]: E0712 00:15:07.087838 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.088021 kubelet[2630]: E0712 00:15:07.088007 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.088021 kubelet[2630]: W0712 00:15:07.088019 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.088073 kubelet[2630]: E0712 00:15:07.088028 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.088224 kubelet[2630]: E0712 00:15:07.088213 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.088224 kubelet[2630]: W0712 00:15:07.088223 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.088271 kubelet[2630]: E0712 00:15:07.088235 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.088420 kubelet[2630]: E0712 00:15:07.088408 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.088444 kubelet[2630]: W0712 00:15:07.088419 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.088444 kubelet[2630]: E0712 00:15:07.088436 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.088704 kubelet[2630]: E0712 00:15:07.088690 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.088738 kubelet[2630]: W0712 00:15:07.088703 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.088738 kubelet[2630]: E0712 00:15:07.088720 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.088954 kubelet[2630]: E0712 00:15:07.088941 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.088983 kubelet[2630]: W0712 00:15:07.088954 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.088983 kubelet[2630]: E0712 00:15:07.088967 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.089124 kubelet[2630]: E0712 00:15:07.089111 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.089124 kubelet[2630]: W0712 00:15:07.089122 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.089171 kubelet[2630]: E0712 00:15:07.089139 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.089335 kubelet[2630]: E0712 00:15:07.089324 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.089357 kubelet[2630]: W0712 00:15:07.089335 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.089376 kubelet[2630]: E0712 00:15:07.089358 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.089489 kubelet[2630]: E0712 00:15:07.089476 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.089489 kubelet[2630]: W0712 00:15:07.089487 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.089671 kubelet[2630]: E0712 00:15:07.089528 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.089851 kubelet[2630]: E0712 00:15:07.089820 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.089851 kubelet[2630]: W0712 00:15:07.089832 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.089851 kubelet[2630]: E0712 00:15:07.089846 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:07.090369 kubelet[2630]: E0712 00:15:07.090332 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:07.090369 kubelet[2630]: W0712 00:15:07.090351 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:07.090369 kubelet[2630]: E0712 00:15:07.090366 2630 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:07.734726 containerd[1494]: time="2025-07-12T00:15:07.734640309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:07.736254 containerd[1494]: time="2025-07-12T00:15:07.736221045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:15:07.737158 containerd[1494]: time="2025-07-12T00:15:07.737131101Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:07.741793 containerd[1494]: time="2025-07-12T00:15:07.741732797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:07.742485 containerd[1494]: time="2025-07-12T00:15:07.742355065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.360574182s" Jul 12 00:15:07.742485 containerd[1494]: time="2025-07-12T00:15:07.742387353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:15:07.746431 containerd[1494]: time="2025-07-12T00:15:07.746391586Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:15:07.754568 containerd[1494]: time="2025-07-12T00:15:07.754266780Z" level=info msg="Container 77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:07.763346 containerd[1494]: time="2025-07-12T00:15:07.763290488Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\"" Jul 12 00:15:07.763845 containerd[1494]: time="2025-07-12T00:15:07.763811812Z" level=info msg="StartContainer for \"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\"" Jul 12 00:15:07.766585 containerd[1494]: time="2025-07-12T00:15:07.765556347Z" level=info msg="connecting to shim 77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790" address="unix:///run/containerd/s/6bbb294dd001876ac8e7d7609f2e3eecbd1d02fba486cbbaa2e1d5134a095620" protocol=ttrpc version=3 Jul 12 00:15:07.796695 systemd[1]: Started cri-containerd-77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790.scope - libcontainer container 
77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790. Jul 12 00:15:07.879386 containerd[1494]: time="2025-07-12T00:15:07.878870517Z" level=info msg="StartContainer for \"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\" returns successfully" Jul 12 00:15:07.919154 kubelet[2630]: E0712 00:15:07.919076 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b" Jul 12 00:15:07.930489 systemd[1]: cri-containerd-77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790.scope: Deactivated successfully. Jul 12 00:15:07.931482 systemd[1]: cri-containerd-77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790.scope: Consumed 68ms CPU time, 8.4M memory peak, 6.2M read from disk. Jul 12 00:15:07.938060 containerd[1494]: time="2025-07-12T00:15:07.937601816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\" id:\"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\" pid:3373 exited_at:{seconds:1752279307 nanos:936546724}" Jul 12 00:15:07.944689 containerd[1494]: time="2025-07-12T00:15:07.944593080Z" level=info msg="received exit event container_id:\"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\" id:\"77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790\" pid:3373 exited_at:{seconds:1752279307 nanos:936546724}" Jul 12 00:15:07.967073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77346fdc9d6bbeb7e6f0f57bb43136aea5e66e07f6a483aa5d5dc771961a6790-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:08.099648 kubelet[2630]: I0712 00:15:08.099602 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:09.105192 containerd[1494]: time="2025-07-12T00:15:09.105097830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:15:09.918855 kubelet[2630]: E0712 00:15:09.918800 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b" Jul 12 00:15:11.918071 kubelet[2630]: E0712 00:15:11.918032 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b" Jul 12 00:15:12.419362 containerd[1494]: time="2025-07-12T00:15:12.419217169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:12.420777 containerd[1494]: time="2025-07-12T00:15:12.420151794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:15:12.421658 containerd[1494]: time="2025-07-12T00:15:12.421630766Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:12.423443 containerd[1494]: time="2025-07-12T00:15:12.423412918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 
00:15:12.424247 containerd[1494]: time="2025-07-12T00:15:12.424215397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.319066477s" Jul 12 00:15:12.424247 containerd[1494]: time="2025-07-12T00:15:12.424248004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:15:12.426628 containerd[1494]: time="2025-07-12T00:15:12.426570303Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:15:12.435226 containerd[1494]: time="2025-07-12T00:15:12.435156520Z" level=info msg="Container 090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:12.448282 containerd[1494]: time="2025-07-12T00:15:12.448223383Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\"" Jul 12 00:15:12.448888 containerd[1494]: time="2025-07-12T00:15:12.448854388Z" level=info msg="StartContainer for \"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\"" Jul 12 00:15:12.451941 containerd[1494]: time="2025-07-12T00:15:12.451906391Z" level=info msg="connecting to shim 090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a" address="unix:///run/containerd/s/6bbb294dd001876ac8e7d7609f2e3eecbd1d02fba486cbbaa2e1d5134a095620" protocol=ttrpc version=3 Jul 12 00:15:12.486738 
systemd[1]: Started cri-containerd-090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a.scope - libcontainer container 090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a. Jul 12 00:15:12.525291 containerd[1494]: time="2025-07-12T00:15:12.525226726Z" level=info msg="StartContainer for \"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\" returns successfully" Jul 12 00:15:13.454941 systemd[1]: cri-containerd-090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a.scope: Deactivated successfully. Jul 12 00:15:13.455337 systemd[1]: cri-containerd-090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a.scope: Consumed 554ms CPU time, 174.1M memory peak, 2.4M read from disk, 165.8M written to disk. Jul 12 00:15:13.462453 containerd[1494]: time="2025-07-12T00:15:13.462302448Z" level=info msg="received exit event container_id:\"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\" id:\"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\" pid:3431 exited_at:{seconds:1752279313 nanos:462084966}" Jul 12 00:15:13.462453 containerd[1494]: time="2025-07-12T00:15:13.462408708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\" id:\"090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a\" pid:3431 exited_at:{seconds:1752279313 nanos:462084966}" Jul 12 00:15:13.480460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090758f11429f892f12ff2b40da65cd3b5089cbcddfd75d7726255daf5f0132a-rootfs.mount: Deactivated successfully. Jul 12 00:15:13.528426 kubelet[2630]: I0712 00:15:13.528399 2630 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:15:13.595170 systemd[1]: Created slice kubepods-burstable-pod3f85911b_183c_4145_9fa8_e10154ef1527.slice - libcontainer container kubepods-burstable-pod3f85911b_183c_4145_9fa8_e10154ef1527.slice. 
Jul 12 00:15:13.610337 systemd[1]: Created slice kubepods-besteffort-pod9a673f24_fcbe_43ff_be04_69618ccc52e5.slice - libcontainer container kubepods-besteffort-pod9a673f24_fcbe_43ff_be04_69618ccc52e5.slice. Jul 12 00:15:13.621485 systemd[1]: Created slice kubepods-besteffort-podb377b7d8_5882_4000_97ad_a6d7809971bf.slice - libcontainer container kubepods-besteffort-podb377b7d8_5882_4000_97ad_a6d7809971bf.slice. Jul 12 00:15:13.629926 systemd[1]: Created slice kubepods-besteffort-pod6f800ed1_bca5_4bb7_baaa_5a42060a9930.slice - libcontainer container kubepods-besteffort-pod6f800ed1_bca5_4bb7_baaa_5a42060a9930.slice. Jul 12 00:15:13.636831 systemd[1]: Created slice kubepods-burstable-pod856772f7_8fcf_4192_a9b1_3fef3da20bdd.slice - libcontainer container kubepods-burstable-pod856772f7_8fcf_4192_a9b1_3fef3da20bdd.slice. Jul 12 00:15:13.643829 systemd[1]: Created slice kubepods-besteffort-pod77467823_3d80_4fc0_a989_f5c2bc1de53f.slice - libcontainer container kubepods-besteffort-pod77467823_3d80_4fc0_a989_f5c2bc1de53f.slice. Jul 12 00:15:13.649298 systemd[1]: Created slice kubepods-besteffort-poda0c45a16_8020_41b3_88eb_e30a4cabdd4d.slice - libcontainer container kubepods-besteffort-poda0c45a16_8020_41b3_88eb_e30a4cabdd4d.slice. 
Jul 12 00:15:13.737094 kubelet[2630]: I0712 00:15:13.736930 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmzr\" (UniqueName: \"kubernetes.io/projected/6f800ed1-bca5-4bb7-baaa-5a42060a9930-kube-api-access-5hmzr\") pod \"whisker-878f8db79-thtqc\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " pod="calico-system/whisker-878f8db79-thtqc" Jul 12 00:15:13.737094 kubelet[2630]: I0712 00:15:13.737023 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57ll\" (UniqueName: \"kubernetes.io/projected/856772f7-8fcf-4192-a9b1-3fef3da20bdd-kube-api-access-n57ll\") pod \"coredns-7c65d6cfc9-m2pbd\" (UID: \"856772f7-8fcf-4192-a9b1-3fef3da20bdd\") " pod="kube-system/coredns-7c65d6cfc9-m2pbd" Jul 12 00:15:13.737094 kubelet[2630]: I0712 00:15:13.737069 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0c45a16-8020-41b3-88eb-e30a4cabdd4d-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qgn44\" (UID: \"a0c45a16-8020-41b3-88eb-e30a4cabdd4d\") " pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:13.737253 kubelet[2630]: I0712 00:15:13.737161 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/856772f7-8fcf-4192-a9b1-3fef3da20bdd-config-volume\") pod \"coredns-7c65d6cfc9-m2pbd\" (UID: \"856772f7-8fcf-4192-a9b1-3fef3da20bdd\") " pod="kube-system/coredns-7c65d6cfc9-m2pbd" Jul 12 00:15:13.737253 kubelet[2630]: I0712 00:15:13.737213 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4tzb\" (UniqueName: \"kubernetes.io/projected/77467823-3d80-4fc0-a989-f5c2bc1de53f-kube-api-access-j4tzb\") pod \"calico-apiserver-5dc57fd98b-grl9c\" (UID: 
\"77467823-3d80-4fc0-a989-f5c2bc1de53f\") " pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" Jul 12 00:15:13.737253 kubelet[2630]: I0712 00:15:13.737232 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a0c45a16-8020-41b3-88eb-e30a4cabdd4d-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qgn44\" (UID: \"a0c45a16-8020-41b3-88eb-e30a4cabdd4d\") " pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:13.737253 kubelet[2630]: I0712 00:15:13.737250 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4c6r\" (UniqueName: \"kubernetes.io/projected/a0c45a16-8020-41b3-88eb-e30a4cabdd4d-kube-api-access-z4c6r\") pod \"goldmane-58fd7646b9-qgn44\" (UID: \"a0c45a16-8020-41b3-88eb-e30a4cabdd4d\") " pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:13.737335 kubelet[2630]: I0712 00:15:13.737293 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9shn\" (UniqueName: \"kubernetes.io/projected/9a673f24-fcbe-43ff-be04-69618ccc52e5-kube-api-access-n9shn\") pod \"calico-kube-controllers-6bcc7cb945-6swrd\" (UID: \"9a673f24-fcbe-43ff-be04-69618ccc52e5\") " pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" Jul 12 00:15:13.737335 kubelet[2630]: I0712 00:15:13.737322 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b377b7d8-5882-4000-97ad-a6d7809971bf-calico-apiserver-certs\") pod \"calico-apiserver-5dc57fd98b-9k6s6\" (UID: \"b377b7d8-5882-4000-97ad-a6d7809971bf\") " pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" Jul 12 00:15:13.737561 kubelet[2630]: I0712 00:15:13.737366 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a0c45a16-8020-41b3-88eb-e30a4cabdd4d-config\") pod \"goldmane-58fd7646b9-qgn44\" (UID: \"a0c45a16-8020-41b3-88eb-e30a4cabdd4d\") " pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:13.737561 kubelet[2630]: I0712 00:15:13.737384 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvmd\" (UniqueName: \"kubernetes.io/projected/b377b7d8-5882-4000-97ad-a6d7809971bf-kube-api-access-hcvmd\") pod \"calico-apiserver-5dc57fd98b-9k6s6\" (UID: \"b377b7d8-5882-4000-97ad-a6d7809971bf\") " pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" Jul 12 00:15:13.737561 kubelet[2630]: I0712 00:15:13.737400 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-backend-key-pair\") pod \"whisker-878f8db79-thtqc\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " pod="calico-system/whisker-878f8db79-thtqc" Jul 12 00:15:13.737561 kubelet[2630]: I0712 00:15:13.737455 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-ca-bundle\") pod \"whisker-878f8db79-thtqc\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " pod="calico-system/whisker-878f8db79-thtqc" Jul 12 00:15:13.737561 kubelet[2630]: I0712 00:15:13.737488 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qh4c\" (UniqueName: \"kubernetes.io/projected/3f85911b-183c-4145-9fa8-e10154ef1527-kube-api-access-7qh4c\") pod \"coredns-7c65d6cfc9-r5x9g\" (UID: \"3f85911b-183c-4145-9fa8-e10154ef1527\") " pod="kube-system/coredns-7c65d6cfc9-r5x9g" Jul 12 00:15:13.737989 kubelet[2630]: I0712 00:15:13.737546 2630 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a673f24-fcbe-43ff-be04-69618ccc52e5-tigera-ca-bundle\") pod \"calico-kube-controllers-6bcc7cb945-6swrd\" (UID: \"9a673f24-fcbe-43ff-be04-69618ccc52e5\") " pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" Jul 12 00:15:13.737989 kubelet[2630]: I0712 00:15:13.737567 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77467823-3d80-4fc0-a989-f5c2bc1de53f-calico-apiserver-certs\") pod \"calico-apiserver-5dc57fd98b-grl9c\" (UID: \"77467823-3d80-4fc0-a989-f5c2bc1de53f\") " pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" Jul 12 00:15:13.737989 kubelet[2630]: I0712 00:15:13.737582 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f85911b-183c-4145-9fa8-e10154ef1527-config-volume\") pod \"coredns-7c65d6cfc9-r5x9g\" (UID: \"3f85911b-183c-4145-9fa8-e10154ef1527\") " pod="kube-system/coredns-7c65d6cfc9-r5x9g" Jul 12 00:15:13.901427 containerd[1494]: time="2025-07-12T00:15:13.901382741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5x9g,Uid:3f85911b-183c-4145-9fa8-e10154ef1527,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:13.915525 containerd[1494]: time="2025-07-12T00:15:13.915454030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bcc7cb945-6swrd,Uid:9a673f24-fcbe-43ff-be04-69618ccc52e5,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:13.929108 systemd[1]: Created slice kubepods-besteffort-pod0f56b660_36f9_4f11_961a_175fd72ba31b.slice - libcontainer container kubepods-besteffort-pod0f56b660_36f9_4f11_961a_175fd72ba31b.slice. 
Jul 12 00:15:13.930307 containerd[1494]: time="2025-07-12T00:15:13.930277542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-9k6s6,Uid:b377b7d8-5882-4000-97ad-a6d7809971bf,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:13.936310 containerd[1494]: time="2025-07-12T00:15:13.936270567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-878f8db79-thtqc,Uid:6f800ed1-bca5-4bb7-baaa-5a42060a9930,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:13.936445 containerd[1494]: time="2025-07-12T00:15:13.936418035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jncsr,Uid:0f56b660-36f9-4f11-961a-175fd72ba31b,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:13.946224 containerd[1494]: time="2025-07-12T00:15:13.943293909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m2pbd,Uid:856772f7-8fcf-4192-a9b1-3fef3da20bdd,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:13.954609 containerd[1494]: time="2025-07-12T00:15:13.952681183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-grl9c,Uid:77467823-3d80-4fc0-a989-f5c2bc1de53f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:13.963164 containerd[1494]: time="2025-07-12T00:15:13.961754476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgn44,Uid:a0c45a16-8020-41b3-88eb-e30a4cabdd4d,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:14.147036 containerd[1494]: time="2025-07-12T00:15:14.144014811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:15:14.409814 containerd[1494]: time="2025-07-12T00:15:14.409661079Z" level=error msg="Failed to destroy network for sandbox \"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 12 00:15:14.411429 containerd[1494]: time="2025-07-12T00:15:14.411390519Z" level=error msg="Failed to destroy network for sandbox \"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.412068 containerd[1494]: time="2025-07-12T00:15:14.412034718Z" level=error msg="Failed to destroy network for sandbox \"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.412319 containerd[1494]: time="2025-07-12T00:15:14.412222673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-grl9c,Uid:77467823-3d80-4fc0-a989-f5c2bc1de53f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.414478 kubelet[2630]: E0712 00:15:14.414315 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.414674 kubelet[2630]: E0712 00:15:14.414541 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" Jul 12 00:15:14.414674 kubelet[2630]: E0712 00:15:14.414567 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" Jul 12 00:15:14.414674 kubelet[2630]: E0712 00:15:14.414637 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc57fd98b-grl9c_calico-apiserver(77467823-3d80-4fc0-a989-f5c2bc1de53f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc57fd98b-grl9c_calico-apiserver(77467823-3d80-4fc0-a989-f5c2bc1de53f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f56cb2fab00a585fed76adcdfa03c1e5ad3c460e1f699b6ba44b389bf0c88999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" podUID="77467823-3d80-4fc0-a989-f5c2bc1de53f" Jul 12 00:15:14.417113 containerd[1494]: time="2025-07-12T00:15:14.417044124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-9k6s6,Uid:b377b7d8-5882-4000-97ad-a6d7809971bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.417365 kubelet[2630]: E0712 00:15:14.417318 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.417448 kubelet[2630]: E0712 00:15:14.417425 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" Jul 12 00:15:14.417478 kubelet[2630]: E0712 00:15:14.417451 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" Jul 12 00:15:14.417553 kubelet[2630]: E0712 00:15:14.417492 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc57fd98b-9k6s6_calico-apiserver(b377b7d8-5882-4000-97ad-a6d7809971bf)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc57fd98b-9k6s6_calico-apiserver(b377b7d8-5882-4000-97ad-a6d7809971bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a2e996e1caa1a2169af551cdaa03671917b34052097170b0e16aafe15f32ddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" podUID="b377b7d8-5882-4000-97ad-a6d7809971bf" Jul 12 00:15:14.418056 containerd[1494]: time="2025-07-12T00:15:14.417971896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bcc7cb945-6swrd,Uid:9a673f24-fcbe-43ff-be04-69618ccc52e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.418450 kubelet[2630]: E0712 00:15:14.418154 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.418450 kubelet[2630]: E0712 00:15:14.418191 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" Jul 12 00:15:14.418450 kubelet[2630]: E0712 00:15:14.418213 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" Jul 12 00:15:14.418930 kubelet[2630]: E0712 00:15:14.418247 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bcc7cb945-6swrd_calico-system(9a673f24-fcbe-43ff-be04-69618ccc52e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bcc7cb945-6swrd_calico-system(9a673f24-fcbe-43ff-be04-69618ccc52e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c552690d617670a9dbb5638a25e8848737ce3f7a5adfa3a5ac63a68de089e84b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" podUID="9a673f24-fcbe-43ff-be04-69618ccc52e5" Jul 12 00:15:14.419383 containerd[1494]: time="2025-07-12T00:15:14.419340749Z" level=error msg="Failed to destroy network for sandbox \"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.419799 containerd[1494]: time="2025-07-12T00:15:14.419712137Z" level=error msg="Failed to destroy 
network for sandbox \"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.421552 containerd[1494]: time="2025-07-12T00:15:14.421497587Z" level=error msg="Failed to destroy network for sandbox \"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.425791 containerd[1494]: time="2025-07-12T00:15:14.425692163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-878f8db79-thtqc,Uid:6f800ed1-bca5-4bb7-baaa-5a42060a9930,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.426098 kubelet[2630]: E0712 00:15:14.426034 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.426171 kubelet[2630]: E0712 00:15:14.426101 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-878f8db79-thtqc" Jul 12 00:15:14.426171 kubelet[2630]: E0712 00:15:14.426128 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-878f8db79-thtqc" Jul 12 00:15:14.426231 kubelet[2630]: E0712 00:15:14.426198 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-878f8db79-thtqc_calico-system(6f800ed1-bca5-4bb7-baaa-5a42060a9930)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-878f8db79-thtqc_calico-system(6f800ed1-bca5-4bb7-baaa-5a42060a9930)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0db09835497fea8e7a5a60d36623913caadba0fc1b7bf9289618aa8812e3431c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-878f8db79-thtqc" podUID="6f800ed1-bca5-4bb7-baaa-5a42060a9930" Jul 12 00:15:14.426987 containerd[1494]: time="2025-07-12T00:15:14.426848336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgn44,Uid:a0c45a16-8020-41b3-88eb-e30a4cabdd4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.427698 containerd[1494]: time="2025-07-12T00:15:14.427543545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5x9g,Uid:3f85911b-183c-4145-9fa8-e10154ef1527,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.428765 kubelet[2630]: E0712 00:15:14.428721 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.428834 kubelet[2630]: E0712 00:15:14.428772 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r5x9g" Jul 12 00:15:14.428834 kubelet[2630]: E0712 00:15:14.428792 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r5x9g" Jul 12 00:15:14.428834 kubelet[2630]: E0712 00:15:14.428799 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.428911 kubelet[2630]: E0712 00:15:14.428826 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r5x9g_kube-system(3f85911b-183c-4145-9fa8-e10154ef1527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r5x9g_kube-system(3f85911b-183c-4145-9fa8-e10154ef1527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"952f7fdfc16a7ed2fec27c80f7119170dc543b67a71990c2076a9c7c9f89f9e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r5x9g" podUID="3f85911b-183c-4145-9fa8-e10154ef1527" Jul 12 00:15:14.428911 kubelet[2630]: E0712 00:15:14.428860 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:14.428911 kubelet[2630]: E0712 00:15:14.428880 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qgn44" Jul 12 00:15:14.429043 kubelet[2630]: E0712 00:15:14.429018 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qgn44_calico-system(a0c45a16-8020-41b3-88eb-e30a4cabdd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qgn44_calico-system(a0c45a16-8020-41b3-88eb-e30a4cabdd4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"295fcd64d5198db4bd554202371c0c1fe7982496f73682b2924329538b81a947\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qgn44" podUID="a0c45a16-8020-41b3-88eb-e30a4cabdd4d" Jul 12 00:15:14.430219 containerd[1494]: time="2025-07-12T00:15:14.430187634Z" level=error msg="Failed to destroy network for sandbox \"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.432125 containerd[1494]: time="2025-07-12T00:15:14.432076023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jncsr,Uid:0f56b660-36f9-4f11-961a-175fd72ba31b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.432464 kubelet[2630]: E0712 00:15:14.432414 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.432508 kubelet[2630]: E0712 00:15:14.432486 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:14.432508 kubelet[2630]: E0712 00:15:14.432504 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jncsr" Jul 12 00:15:14.432508 kubelet[2630]: E0712 00:15:14.432571 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jncsr_calico-system(0f56b660-36f9-4f11-961a-175fd72ba31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jncsr_calico-system(0f56b660-36f9-4f11-961a-175fd72ba31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"dbd7d917a61817b7eeb6a94e7be5c2b8813ff88d7e38a52fd4f78e4477956063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jncsr" podUID="0f56b660-36f9-4f11-961a-175fd72ba31b" Jul 12 00:15:14.435429 containerd[1494]: time="2025-07-12T00:15:14.435388395Z" level=error msg="Failed to destroy network for sandbox \"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.436331 containerd[1494]: time="2025-07-12T00:15:14.436292322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m2pbd,Uid:856772f7-8fcf-4192-a9b1-3fef3da20bdd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.437546 kubelet[2630]: E0712 00:15:14.436878 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:14.437546 kubelet[2630]: E0712 00:15:14.436955 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m2pbd" Jul 12 00:15:14.437546 kubelet[2630]: E0712 00:15:14.436973 2630 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m2pbd" Jul 12 00:15:14.437671 kubelet[2630]: E0712 00:15:14.437039 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m2pbd_kube-system(856772f7-8fcf-4192-a9b1-3fef3da20bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m2pbd_kube-system(856772f7-8fcf-4192-a9b1-3fef3da20bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dee4ebe803211bdaecc961ce19fb556e372b87ab26283f4ca5f3b0890a43786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m2pbd" podUID="856772f7-8fcf-4192-a9b1-3fef3da20bdd" Jul 12 00:15:18.416606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814185060.mount: Deactivated successfully. 
Jul 12 00:15:18.666381 containerd[1494]: time="2025-07-12T00:15:18.666333301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:18.667201 containerd[1494]: time="2025-07-12T00:15:18.666908195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:15:18.668335 containerd[1494]: time="2025-07-12T00:15:18.668262537Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:18.677034 containerd[1494]: time="2025-07-12T00:15:18.676993926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:18.679076 containerd[1494]: time="2025-07-12T00:15:18.678896917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.534842779s" Jul 12 00:15:18.679076 containerd[1494]: time="2025-07-12T00:15:18.678936284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:15:18.712935 containerd[1494]: time="2025-07-12T00:15:18.711650519Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:15:18.723095 containerd[1494]: time="2025-07-12T00:15:18.722212848Z" level=info msg="Container 
323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:18.744527 containerd[1494]: time="2025-07-12T00:15:18.744459729Z" level=info msg="CreateContainer within sandbox \"512e83e4d883ecc8e26f8e2a3ea4fcbde9d40a910efda8795b4b7e3e6562e724\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\"" Jul 12 00:15:18.745238 containerd[1494]: time="2025-07-12T00:15:18.744987736Z" level=info msg="StartContainer for \"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\"" Jul 12 00:15:18.746410 containerd[1494]: time="2025-07-12T00:15:18.746381604Z" level=info msg="connecting to shim 323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611" address="unix:///run/containerd/s/6bbb294dd001876ac8e7d7609f2e3eecbd1d02fba486cbbaa2e1d5134a095620" protocol=ttrpc version=3 Jul 12 00:15:18.769718 systemd[1]: Started cri-containerd-323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611.scope - libcontainer container 323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611. Jul 12 00:15:18.811584 containerd[1494]: time="2025-07-12T00:15:18.811538789Z" level=info msg="StartContainer for \"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\" returns successfully" Jul 12 00:15:19.075558 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:15:19.075677 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 00:15:19.386254 kubelet[2630]: I0712 00:15:19.385814 2630 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-ca-bundle\") pod \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " Jul 12 00:15:19.386254 kubelet[2630]: I0712 00:15:19.385867 2630 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hmzr\" (UniqueName: \"kubernetes.io/projected/6f800ed1-bca5-4bb7-baaa-5a42060a9930-kube-api-access-5hmzr\") pod \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " Jul 12 00:15:19.386254 kubelet[2630]: I0712 00:15:19.385901 2630 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-backend-key-pair\") pod \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\" (UID: \"6f800ed1-bca5-4bb7-baaa-5a42060a9930\") " Jul 12 00:15:19.404637 kubelet[2630]: I0712 00:15:19.404554 2630 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6f800ed1-bca5-4bb7-baaa-5a42060a9930" (UID: "6f800ed1-bca5-4bb7-baaa-5a42060a9930"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:15:19.404885 kubelet[2630]: I0712 00:15:19.404855 2630 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f800ed1-bca5-4bb7-baaa-5a42060a9930-kube-api-access-5hmzr" (OuterVolumeSpecName: "kube-api-access-5hmzr") pod "6f800ed1-bca5-4bb7-baaa-5a42060a9930" (UID: "6f800ed1-bca5-4bb7-baaa-5a42060a9930"). InnerVolumeSpecName "kube-api-access-5hmzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:15:19.408449 kubelet[2630]: I0712 00:15:19.408413 2630 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6f800ed1-bca5-4bb7-baaa-5a42060a9930" (UID: "6f800ed1-bca5-4bb7-baaa-5a42060a9930"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:15:19.417235 systemd[1]: var-lib-kubelet-pods-6f800ed1\x2dbca5\x2d4bb7\x2dbaaa\x2d5a42060a9930-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hmzr.mount: Deactivated successfully. Jul 12 00:15:19.417333 systemd[1]: var-lib-kubelet-pods-6f800ed1\x2dbca5\x2d4bb7\x2dbaaa\x2d5a42060a9930-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:15:19.486890 kubelet[2630]: I0712 00:15:19.486816 2630 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hmzr\" (UniqueName: \"kubernetes.io/projected/6f800ed1-bca5-4bb7-baaa-5a42060a9930-kube-api-access-5hmzr\") on node \"localhost\" DevicePath \"\"" Jul 12 00:15:19.486890 kubelet[2630]: I0712 00:15:19.486859 2630 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:15:19.486890 kubelet[2630]: I0712 00:15:19.486868 2630 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f800ed1-bca5-4bb7-baaa-5a42060a9930-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:15:20.194609 kubelet[2630]: I0712 00:15:20.194571 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:20.201327 systemd[1]: Removed slice 
kubepods-besteffort-pod6f800ed1_bca5_4bb7_baaa_5a42060a9930.slice - libcontainer container kubepods-besteffort-pod6f800ed1_bca5_4bb7_baaa_5a42060a9930.slice. Jul 12 00:15:20.212257 kubelet[2630]: I0712 00:15:20.212194 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vj658" podStartSLOduration=2.967859743 podStartE2EDuration="17.212175442s" podCreationTimestamp="2025-07-12 00:15:03 +0000 UTC" firstStartedPulling="2025-07-12 00:15:04.459083589 +0000 UTC m=+21.638491672" lastFinishedPulling="2025-07-12 00:15:18.703399328 +0000 UTC m=+35.882807371" observedRunningTime="2025-07-12 00:15:19.219120886 +0000 UTC m=+36.398528929" watchObservedRunningTime="2025-07-12 00:15:20.212175442 +0000 UTC m=+37.391583525" Jul 12 00:15:20.248822 systemd[1]: Created slice kubepods-besteffort-poda3f3be38_f2fa_4cf0_a2fb_127af2d1929c.slice - libcontainer container kubepods-besteffort-poda3f3be38_f2fa_4cf0_a2fb_127af2d1929c.slice. Jul 12 00:15:20.391900 kubelet[2630]: I0712 00:15:20.391856 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmnf\" (UniqueName: \"kubernetes.io/projected/a3f3be38-f2fa-4cf0-a2fb-127af2d1929c-kube-api-access-rsmnf\") pod \"whisker-55497c5bb9-lg625\" (UID: \"a3f3be38-f2fa-4cf0-a2fb-127af2d1929c\") " pod="calico-system/whisker-55497c5bb9-lg625" Jul 12 00:15:20.391900 kubelet[2630]: I0712 00:15:20.391900 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3f3be38-f2fa-4cf0-a2fb-127af2d1929c-whisker-backend-key-pair\") pod \"whisker-55497c5bb9-lg625\" (UID: \"a3f3be38-f2fa-4cf0-a2fb-127af2d1929c\") " pod="calico-system/whisker-55497c5bb9-lg625" Jul 12 00:15:20.392283 kubelet[2630]: I0712 00:15:20.391924 2630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a3f3be38-f2fa-4cf0-a2fb-127af2d1929c-whisker-ca-bundle\") pod \"whisker-55497c5bb9-lg625\" (UID: \"a3f3be38-f2fa-4cf0-a2fb-127af2d1929c\") " pod="calico-system/whisker-55497c5bb9-lg625" Jul 12 00:15:20.553476 containerd[1494]: time="2025-07-12T00:15:20.553310031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55497c5bb9-lg625,Uid:a3f3be38-f2fa-4cf0-a2fb-127af2d1929c,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:20.829164 systemd-networkd[1437]: cali3e7237566cd: Link UP Jul 12 00:15:20.830585 systemd-networkd[1437]: cali3e7237566cd: Gained carrier Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.583 [INFO][3912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.663 [INFO][3912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55497c5bb9--lg625-eth0 whisker-55497c5bb9- calico-system a3f3be38-f2fa-4cf0-a2fb-127af2d1929c 870 0 2025-07-12 00:15:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55497c5bb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55497c5bb9-lg625 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3e7237566cd [] [] }} ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.663 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847290 
containerd[1494]: 2025-07-12 00:15:20.779 [INFO][3928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" HandleID="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Workload="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.779 [INFO][3928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" HandleID="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Workload="localhost-k8s-whisker--55497c5bb9--lg625-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000138200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55497c5bb9-lg625", "timestamp":"2025-07-12 00:15:20.779092582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.779 [INFO][3928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.779 [INFO][3928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.779 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.789 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.797 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.801 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.803 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.805 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.805 [INFO][3928] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.806 [INFO][3928] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04 Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.811 [INFO][3928] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.816 [INFO][3928] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.816 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" host="localhost" Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.816 [INFO][3928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:20.847290 containerd[1494]: 2025-07-12 00:15:20.816 [INFO][3928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" HandleID="k8s-pod-network.240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Workload="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.819 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55497c5bb9--lg625-eth0", GenerateName:"whisker-55497c5bb9-", Namespace:"calico-system", SelfLink:"", UID:"a3f3be38-f2fa-4cf0-a2fb-127af2d1929c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55497c5bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55497c5bb9-lg625", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e7237566cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.819 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.819 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e7237566cd ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.831 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.831 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" 
WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55497c5bb9--lg625-eth0", GenerateName:"whisker-55497c5bb9-", Namespace:"calico-system", SelfLink:"", UID:"a3f3be38-f2fa-4cf0-a2fb-127af2d1929c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55497c5bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04", Pod:"whisker-55497c5bb9-lg625", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e7237566cd", MAC:"f2:ea:cc:b3:3a:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:20.847864 containerd[1494]: 2025-07-12 00:15:20.844 [INFO][3912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" Namespace="calico-system" Pod="whisker-55497c5bb9-lg625" WorkloadEndpoint="localhost-k8s-whisker--55497c5bb9--lg625-eth0" Jul 12 00:15:20.909080 containerd[1494]: time="2025-07-12T00:15:20.908982752Z" level=info msg="connecting to shim 
240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04" address="unix:///run/containerd/s/c8ced06f66ff4baa011733a5e36922d4c339f8f419e7001e5ebe1013f0561fac" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:20.921749 kubelet[2630]: I0712 00:15:20.921715 2630 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f800ed1-bca5-4bb7-baaa-5a42060a9930" path="/var/lib/kubelet/pods/6f800ed1-bca5-4bb7-baaa-5a42060a9930/volumes" Jul 12 00:15:20.941746 systemd[1]: Started cri-containerd-240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04.scope - libcontainer container 240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04. Jul 12 00:15:20.952672 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:20.971011 containerd[1494]: time="2025-07-12T00:15:20.970975360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55497c5bb9-lg625,Uid:a3f3be38-f2fa-4cf0-a2fb-127af2d1929c,Namespace:calico-system,Attempt:0,} returns sandbox id \"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04\"" Jul 12 00:15:20.972867 containerd[1494]: time="2025-07-12T00:15:20.972801123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:15:21.254774 kubelet[2630]: I0712 00:15:21.254648 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:21.968728 systemd-networkd[1437]: vxlan.calico: Link UP Jul 12 00:15:21.968736 systemd-networkd[1437]: vxlan.calico: Gained carrier Jul 12 00:15:22.029854 containerd[1494]: time="2025-07-12T00:15:22.029800576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:22.030765 containerd[1494]: time="2025-07-12T00:15:22.030606935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 
12 00:15:22.031463 containerd[1494]: time="2025-07-12T00:15:22.031430616Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:22.033561 containerd[1494]: time="2025-07-12T00:15:22.033531686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:22.034537 containerd[1494]: time="2025-07-12T00:15:22.034475065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.061640737s" Jul 12 00:15:22.034901 containerd[1494]: time="2025-07-12T00:15:22.034852760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:15:22.038136 containerd[1494]: time="2025-07-12T00:15:22.038103839Z" level=info msg="CreateContainer within sandbox \"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:15:22.044541 containerd[1494]: time="2025-07-12T00:15:22.043838044Z" level=info msg="Container dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:22.050774 containerd[1494]: time="2025-07-12T00:15:22.050736981Z" level=info msg="CreateContainer within sandbox \"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2\"" Jul 12 00:15:22.051319 containerd[1494]: time="2025-07-12T00:15:22.051292182Z" level=info msg="StartContainer for \"dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2\"" Jul 12 00:15:22.053119 containerd[1494]: time="2025-07-12T00:15:22.053081046Z" level=info msg="connecting to shim dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2" address="unix:///run/containerd/s/c8ced06f66ff4baa011733a5e36922d4c339f8f419e7001e5ebe1013f0561fac" protocol=ttrpc version=3 Jul 12 00:15:22.080676 systemd[1]: Started cri-containerd-dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2.scope - libcontainer container dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2. Jul 12 00:15:22.143106 containerd[1494]: time="2025-07-12T00:15:22.142911320Z" level=info msg="StartContainer for \"dc8b08e3b96fb58b043aab72c18ccc9ebe2383f9aa615f6e1fd35ea4b29dc4a2\" returns successfully" Jul 12 00:15:22.145552 containerd[1494]: time="2025-07-12T00:15:22.145227062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:15:22.438707 systemd-networkd[1437]: cali3e7237566cd: Gained IPv6LL Jul 12 00:15:22.679212 systemd[1]: Started sshd@7-10.0.0.143:22-10.0.0.1:43890.service - OpenSSH per-connection server daemon (10.0.0.1:43890). Jul 12 00:15:22.746078 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 43890 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:22.747708 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:22.751991 systemd-logind[1478]: New session 8 of user core. Jul 12 00:15:22.763688 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 12 00:15:23.075050 sshd[4169]: Connection closed by 10.0.0.1 port 43890 Jul 12 00:15:23.075390 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:23.078878 systemd[1]: sshd@7-10.0.0.143:22-10.0.0.1:43890.service: Deactivated successfully. Jul 12 00:15:23.082015 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:15:23.082889 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:15:23.084069 systemd-logind[1478]: Removed session 8. Jul 12 00:15:23.526683 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Jul 12 00:15:23.918095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761985018.mount: Deactivated successfully. Jul 12 00:15:23.933423 containerd[1494]: time="2025-07-12T00:15:23.932919200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:23.933423 containerd[1494]: time="2025-07-12T00:15:23.933335259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:15:23.936893 containerd[1494]: time="2025-07-12T00:15:23.936858166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.791372347s" Jul 12 00:15:23.936893 containerd[1494]: time="2025-07-12T00:15:23.936896532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:15:23.938947 containerd[1494]: time="2025-07-12T00:15:23.938921223Z" level=info msg="CreateContainer within 
sandbox \"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:15:23.940097 containerd[1494]: time="2025-07-12T00:15:23.940007099Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:23.941058 containerd[1494]: time="2025-07-12T00:15:23.941020365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:23.961660 containerd[1494]: time="2025-07-12T00:15:23.961596085Z" level=info msg="Container f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:23.962488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394032328.mount: Deactivated successfully. 
Jul 12 00:15:23.969365 containerd[1494]: time="2025-07-12T00:15:23.969312755Z" level=info msg="CreateContainer within sandbox \"240375b16c85a6297aca4f68ef107e93030c0218e4dda1f19071120f19518c04\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72\"" Jul 12 00:15:23.970212 containerd[1494]: time="2025-07-12T00:15:23.970190961Z" level=info msg="StartContainer for \"f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72\"" Jul 12 00:15:23.971331 containerd[1494]: time="2025-07-12T00:15:23.971305802Z" level=info msg="connecting to shim f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72" address="unix:///run/containerd/s/c8ced06f66ff4baa011733a5e36922d4c339f8f419e7001e5ebe1013f0561fac" protocol=ttrpc version=3 Jul 12 00:15:24.001713 systemd[1]: Started cri-containerd-f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72.scope - libcontainer container f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72. 
Jul 12 00:15:24.037549 containerd[1494]: time="2025-07-12T00:15:24.037501284Z" level=info msg="StartContainer for \"f37ce990f070abf33c03a4ff1f8043b81b108617e3b854ec036de42404b71e72\" returns successfully" Jul 12 00:15:24.072655 kubelet[2630]: I0712 00:15:24.072595 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:24.201062 containerd[1494]: time="2025-07-12T00:15:24.200957106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\" id:\"0eb77d86a243ad4d645f2f242053d056c9a73663d157b845486b050850c50419\" pid:4241 exited_at:{seconds:1752279324 nanos:200624419}" Jul 12 00:15:24.232841 kubelet[2630]: I0712 00:15:24.232648 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55497c5bb9-lg625" podStartSLOduration=1.267415703 podStartE2EDuration="4.232627399s" podCreationTimestamp="2025-07-12 00:15:20 +0000 UTC" firstStartedPulling="2025-07-12 00:15:20.972255438 +0000 UTC m=+38.151663521" lastFinishedPulling="2025-07-12 00:15:23.937467134 +0000 UTC m=+41.116875217" observedRunningTime="2025-07-12 00:15:24.231036655 +0000 UTC m=+41.410444698" watchObservedRunningTime="2025-07-12 00:15:24.232627399 +0000 UTC m=+41.412035482" Jul 12 00:15:24.286701 containerd[1494]: time="2025-07-12T00:15:24.286657556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\" id:\"9b53b76161686fb9211f2e0f3a0664796875a6058ade05e67ddac6044f59b028\" pid:4266 exited_at:{seconds:1752279324 nanos:286332270}" Jul 12 00:15:24.919383 containerd[1494]: time="2025-07-12T00:15:24.919333231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-grl9c,Uid:77467823-3d80-4fc0-a989-f5c2bc1de53f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:25.030592 systemd-networkd[1437]: cali6ca666cd64f: Link UP Jul 12 00:15:25.031229 
systemd-networkd[1437]: cali6ca666cd64f: Gained carrier Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.955 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0 calico-apiserver-5dc57fd98b- calico-apiserver 77467823-3d80-4fc0-a989-f5c2bc1de53f 811 0 2025-07-12 00:14:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc57fd98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5dc57fd98b-grl9c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ca666cd64f [] [] }} ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.955 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.990 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" HandleID="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.990 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" 
HandleID="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5dc57fd98b-grl9c", "timestamp":"2025-07-12 00:15:24.990356536 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.990 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.990 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.990 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:24.999 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.006 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.011 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.013 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.015 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 
2025-07-12 00:15:25.015 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.016 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6 Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.020 [INFO][4296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.025 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.026 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" host="localhost" Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.026 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:15:25.048741 containerd[1494]: 2025-07-12 00:15:25.026 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" HandleID="k8s-pod-network.9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.028 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0", GenerateName:"calico-apiserver-5dc57fd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"77467823-3d80-4fc0-a989-f5c2bc1de53f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc57fd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5dc57fd98b-grl9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ca666cd64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.028 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.028 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ca666cd64f ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.031 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.031 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0", GenerateName:"calico-apiserver-5dc57fd98b-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"77467823-3d80-4fc0-a989-f5c2bc1de53f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc57fd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6", Pod:"calico-apiserver-5dc57fd98b-grl9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ca666cd64f", MAC:"aa:74:ec:aa:24:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:25.049794 containerd[1494]: 2025-07-12 00:15:25.043 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-grl9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--grl9c-eth0" Jul 12 00:15:25.072914 containerd[1494]: time="2025-07-12T00:15:25.072870877Z" level=info msg="connecting to shim 9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6" address="unix:///run/containerd/s/6fe31fa02f10dda7b576de8941d4754ec999ea3d1bd6ec327da5ed516c82113e" namespace=k8s.io protocol=ttrpc 
version=3 Jul 12 00:15:25.101701 systemd[1]: Started cri-containerd-9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6.scope - libcontainer container 9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6. Jul 12 00:15:25.112605 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:25.132474 containerd[1494]: time="2025-07-12T00:15:25.132426549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-grl9c,Uid:77467823-3d80-4fc0-a989-f5c2bc1de53f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6\"" Jul 12 00:15:25.133938 containerd[1494]: time="2025-07-12T00:15:25.133904953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:15:25.918868 containerd[1494]: time="2025-07-12T00:15:25.918823158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgn44,Uid:a0c45a16-8020-41b3-88eb-e30a4cabdd4d,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:26.073047 systemd-networkd[1437]: cali639156dac2e: Link UP Jul 12 00:15:26.073496 systemd-networkd[1437]: cali639156dac2e: Gained carrier Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.952 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--qgn44-eth0 goldmane-58fd7646b9- calico-system a0c45a16-8020-41b3-88eb-e30a4cabdd4d 804 0 2025-07-12 00:15:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-qgn44 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali639156dac2e [] [] }} 
ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.952 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.977 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" HandleID="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Workload="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.977 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" HandleID="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Workload="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a2520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-qgn44", "timestamp":"2025-07-12 00:15:25.977257316 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.977 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.977 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.977 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.986 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.992 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.996 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:25.999 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.001 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.001 [INFO][4373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.002 [INFO][4373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4 Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.050 [INFO][4373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.068 [INFO][4373] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.068 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" host="localhost" Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.068 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:26.147598 containerd[1494]: 2025-07-12 00:15:26.068 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" HandleID="k8s-pod-network.bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Workload="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.071 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qgn44-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a0c45a16-8020-41b3-88eb-e30a4cabdd4d", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-qgn44", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali639156dac2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.071 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.071 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali639156dac2e ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.074 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.075 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qgn44-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a0c45a16-8020-41b3-88eb-e30a4cabdd4d", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4", Pod:"goldmane-58fd7646b9-qgn44", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali639156dac2e", MAC:"de:6a:e9:1e:5c:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:26.148343 containerd[1494]: 2025-07-12 00:15:26.138 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgn44" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qgn44-eth0" Jul 12 00:15:26.175585 containerd[1494]: 
time="2025-07-12T00:15:26.175307581Z" level=info msg="connecting to shim bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4" address="unix:///run/containerd/s/9612a9f2889d77f253b04479d6d0db2a2838cdb2ef0073c57c9d07e30492e46a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:26.209705 systemd[1]: Started cri-containerd-bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4.scope - libcontainer container bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4. Jul 12 00:15:26.229245 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:26.269769 containerd[1494]: time="2025-07-12T00:15:26.269721537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgn44,Uid:a0c45a16-8020-41b3-88eb-e30a4cabdd4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4\"" Jul 12 00:15:26.278686 systemd-networkd[1437]: cali6ca666cd64f: Gained IPv6LL Jul 12 00:15:26.887523 containerd[1494]: time="2025-07-12T00:15:26.887465861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:26.887987 containerd[1494]: time="2025-07-12T00:15:26.887940245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:15:26.889247 containerd[1494]: time="2025-07-12T00:15:26.889207616Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:26.891416 containerd[1494]: time="2025-07-12T00:15:26.891362466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 00:15:26.892074 containerd[1494]: time="2025-07-12T00:15:26.892023795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.758074716s" Jul 12 00:15:26.892117 containerd[1494]: time="2025-07-12T00:15:26.892074162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:15:26.893728 containerd[1494]: time="2025-07-12T00:15:26.893703461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:15:26.895149 containerd[1494]: time="2025-07-12T00:15:26.894857136Z" level=info msg="CreateContainer within sandbox \"9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:15:26.903111 containerd[1494]: time="2025-07-12T00:15:26.903077884Z" level=info msg="Container 6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:26.913212 containerd[1494]: time="2025-07-12T00:15:26.913135758Z" level=info msg="CreateContainer within sandbox \"9860d0f61277d38c6371f15c56edcfc8aae0a02e27edad790097d3aa187da8d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89\"" Jul 12 00:15:26.914037 containerd[1494]: time="2025-07-12T00:15:26.914010596Z" level=info msg="StartContainer for \"6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89\"" Jul 12 00:15:26.915881 containerd[1494]: time="2025-07-12T00:15:26.915847324Z" level=info msg="connecting to shim 
6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89" address="unix:///run/containerd/s/6fe31fa02f10dda7b576de8941d4754ec999ea3d1bd6ec327da5ed516c82113e" protocol=ttrpc version=3 Jul 12 00:15:26.919461 containerd[1494]: time="2025-07-12T00:15:26.919376799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m2pbd,Uid:856772f7-8fcf-4192-a9b1-3fef3da20bdd,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:26.942700 systemd[1]: Started cri-containerd-6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89.scope - libcontainer container 6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89. Jul 12 00:15:26.984736 containerd[1494]: time="2025-07-12T00:15:26.984690996Z" level=info msg="StartContainer for \"6d8570dcbec63dc2fe614266e91e4a00b96383ef518dae03564d5b95077aac89\" returns successfully" Jul 12 00:15:27.054166 systemd-networkd[1437]: cali2a85ef0e778: Link UP Jul 12 00:15:27.054709 systemd-networkd[1437]: cali2a85ef0e778: Gained carrier Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.963 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0 coredns-7c65d6cfc9- kube-system 856772f7-8fcf-4192-a9b1-3fef3da20bdd 808 0 2025-07-12 00:14:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-m2pbd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a85ef0e778 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.964 [INFO][4457] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.991 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" HandleID="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Workload="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.991 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" HandleID="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Workload="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-m2pbd", "timestamp":"2025-07-12 00:15:26.991222196 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.991 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.991 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:26.991 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.010 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.019 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.028 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.030 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.032 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.032 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.034 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194 Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.038 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.049 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.049 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" host="localhost" Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.049 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:27.069032 containerd[1494]: 2025-07-12 00:15:27.049 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" HandleID="k8s-pod-network.15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Workload="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069918 containerd[1494]: 2025-07-12 00:15:27.052 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"856772f7-8fcf-4192-a9b1-3fef3da20bdd", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-m2pbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a85ef0e778", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:27.069918 containerd[1494]: 2025-07-12 00:15:27.052 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069918 containerd[1494]: 2025-07-12 00:15:27.052 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a85ef0e778 ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069918 containerd[1494]: 2025-07-12 00:15:27.054 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.069918 containerd[1494]: 2025-07-12 00:15:27.054 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"856772f7-8fcf-4192-a9b1-3fef3da20bdd", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194", Pod:"coredns-7c65d6cfc9-m2pbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a85ef0e778", MAC:"62:74:65:0a:db:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:27.070282 containerd[1494]: 2025-07-12 00:15:27.063 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m2pbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--m2pbd-eth0" Jul 12 00:15:27.097186 containerd[1494]: time="2025-07-12T00:15:27.097143126Z" level=info msg="connecting to shim 15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194" address="unix:///run/containerd/s/2a0de1c5f9450deef9e0b54c2504d13cbd1769e7397d7c07c2ee19f965aa7311" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:27.121694 systemd[1]: Started cri-containerd-15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194.scope - libcontainer container 15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194. 
Jul 12 00:15:27.134144 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:27.164212 containerd[1494]: time="2025-07-12T00:15:27.164088843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m2pbd,Uid:856772f7-8fcf-4192-a9b1-3fef3da20bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194\"" Jul 12 00:15:27.169871 containerd[1494]: time="2025-07-12T00:15:27.169811839Z" level=info msg="CreateContainer within sandbox \"15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:15:27.177543 containerd[1494]: time="2025-07-12T00:15:27.177480411Z" level=info msg="Container cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:27.182055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890643842.mount: Deactivated successfully. 
Jul 12 00:15:27.188185 containerd[1494]: time="2025-07-12T00:15:27.188110694Z" level=info msg="CreateContainer within sandbox \"15200f7d69a790c663c627fe7facd34a9a327398b22f3511336bd372e9cc6194\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f\"" Jul 12 00:15:27.189091 containerd[1494]: time="2025-07-12T00:15:27.189048618Z" level=info msg="StartContainer for \"cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f\"" Jul 12 00:15:27.190118 containerd[1494]: time="2025-07-12T00:15:27.190018306Z" level=info msg="connecting to shim cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f" address="unix:///run/containerd/s/2a0de1c5f9450deef9e0b54c2504d13cbd1769e7397d7c07c2ee19f965aa7311" protocol=ttrpc version=3 Jul 12 00:15:27.213705 systemd[1]: Started cri-containerd-cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f.scope - libcontainer container cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f. 
Jul 12 00:15:27.248309 kubelet[2630]: I0712 00:15:27.248226 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc57fd98b-grl9c" podStartSLOduration=26.488593184 podStartE2EDuration="28.248206748s" podCreationTimestamp="2025-07-12 00:14:59 +0000 UTC" firstStartedPulling="2025-07-12 00:15:25.133652598 +0000 UTC m=+42.313060681" lastFinishedPulling="2025-07-12 00:15:26.893266162 +0000 UTC m=+44.072674245" observedRunningTime="2025-07-12 00:15:27.243544892 +0000 UTC m=+44.422952975" watchObservedRunningTime="2025-07-12 00:15:27.248206748 +0000 UTC m=+44.427614871" Jul 12 00:15:27.262080 containerd[1494]: time="2025-07-12T00:15:27.261894754Z" level=info msg="StartContainer for \"cc414bfe1a3539f89bc3c8f2fb8a5e63ae3c6bc7f62855b13af6e9afb44c357f\" returns successfully" Jul 12 00:15:27.750656 systemd-networkd[1437]: cali639156dac2e: Gained IPv6LL Jul 12 00:15:28.098718 systemd[1]: Started sshd@8-10.0.0.143:22-10.0.0.1:43896.service - OpenSSH per-connection server daemon (10.0.0.1:43896). Jul 12 00:15:28.231056 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 43896 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:28.232613 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:28.239965 systemd-logind[1478]: New session 9 of user core. Jul 12 00:15:28.247537 kubelet[2630]: I0712 00:15:28.246466 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:28.248717 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 12 00:15:28.299642 kubelet[2630]: I0712 00:15:28.299104 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m2pbd" podStartSLOduration=39.299083844 podStartE2EDuration="39.299083844s" podCreationTimestamp="2025-07-12 00:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:28.272605295 +0000 UTC m=+45.452013378" watchObservedRunningTime="2025-07-12 00:15:28.299083844 +0000 UTC m=+45.478491927" Jul 12 00:15:28.454670 systemd-networkd[1437]: cali2a85ef0e778: Gained IPv6LL Jul 12 00:15:28.477959 sshd[4608]: Connection closed by 10.0.0.1 port 43896 Jul 12 00:15:28.478767 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:28.485614 systemd[1]: sshd@8-10.0.0.143:22-10.0.0.1:43896.service: Deactivated successfully. Jul 12 00:15:28.490478 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:15:28.496066 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:15:28.497954 systemd-logind[1478]: Removed session 9. Jul 12 00:15:28.699229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352248393.mount: Deactivated successfully. 
Jul 12 00:15:28.922004 containerd[1494]: time="2025-07-12T00:15:28.921942500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5x9g,Uid:3f85911b-183c-4145-9fa8-e10154ef1527,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:28.923024 containerd[1494]: time="2025-07-12T00:15:28.922845537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jncsr,Uid:0f56b660-36f9-4f11-961a-175fd72ba31b,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:28.923419 containerd[1494]: time="2025-07-12T00:15:28.923375966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-9k6s6,Uid:b377b7d8-5882-4000-97ad-a6d7809971bf,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:29.115297 systemd-networkd[1437]: cali57191a8f67a: Link UP Jul 12 00:15:29.116550 systemd-networkd[1437]: cali57191a8f67a: Gained carrier Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.012 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0 calico-apiserver-5dc57fd98b- calico-apiserver b377b7d8-5882-4000-97ad-a6d7809971bf 809 0 2025-07-12 00:14:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc57fd98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5dc57fd98b-9k6s6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57191a8f67a [] [] }} ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.013 [INFO][4637] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" HandleID="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" HandleID="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5dc57fd98b-9k6s6", "timestamp":"2025-07-12 00:15:29.055489667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.071 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.078 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.083 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.088 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.091 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.091 [INFO][4687] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.093 [INFO][4687] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.097 [INFO][4687] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4687] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" host="localhost" Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:29.134617 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" HandleID="k8s-pod-network.1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Workload="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.110 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0", GenerateName:"calico-apiserver-5dc57fd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b377b7d8-5882-4000-97ad-a6d7809971bf", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc57fd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5dc57fd98b-9k6s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57191a8f67a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.110 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.110 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57191a8f67a ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.117 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.118 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0", GenerateName:"calico-apiserver-5dc57fd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b377b7d8-5882-4000-97ad-a6d7809971bf", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc57fd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b", Pod:"calico-apiserver-5dc57fd98b-9k6s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57191a8f67a", MAC:"f6:17:3c:4d:ab:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.135951 containerd[1494]: 2025-07-12 00:15:29.128 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" Namespace="calico-apiserver" Pod="calico-apiserver-5dc57fd98b-9k6s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dc57fd98b--9k6s6-eth0" Jul 12 00:15:29.191020 containerd[1494]: time="2025-07-12T00:15:29.190846996Z" level=info msg="connecting to shim 1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b" address="unix:///run/containerd/s/708145a7edd3d1163364f596a59bab782bb546b7edfc9b80f3c744540e477454" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:29.227894 systemd[1]: Started cri-containerd-1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b.scope - libcontainer container 1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b. Jul 12 00:15:29.229229 systemd-networkd[1437]: calibac4b4ca836: Link UP Jul 12 00:15:29.229662 systemd-networkd[1437]: calibac4b4ca836: Gained carrier Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.004 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0 coredns-7c65d6cfc9- kube-system 3f85911b-183c-4145-9fa8-e10154ef1527 799 0 2025-07-12 00:14:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-r5x9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibac4b4ca836 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.004 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.055 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" HandleID="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Workload="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.057 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" HandleID="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Workload="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b2b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-r5x9g", "timestamp":"2025-07-12 00:15:29.055917121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.057 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.106 [INFO][4679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.174 [INFO][4679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.183 [INFO][4679] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.189 [INFO][4679] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.192 [INFO][4679] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.197 [INFO][4679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.197 [INFO][4679] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.200 [INFO][4679] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3 Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.210 [INFO][4679] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.218 [INFO][4679] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.219 [INFO][4679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" host="localhost" Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.220 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:29.247171 containerd[1494]: 2025-07-12 00:15:29.220 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" HandleID="k8s-pod-network.6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Workload="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247874 containerd[1494]: 2025-07-12 00:15:29.225 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f85911b-183c-4145-9fa8-e10154ef1527", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-r5x9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibac4b4ca836", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.247874 containerd[1494]: 2025-07-12 00:15:29.225 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247874 containerd[1494]: 2025-07-12 00:15:29.225 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibac4b4ca836 ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247874 containerd[1494]: 2025-07-12 00:15:29.227 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.247874 containerd[1494]: 2025-07-12 00:15:29.227 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f85911b-183c-4145-9fa8-e10154ef1527", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3", Pod:"coredns-7c65d6cfc9-r5x9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibac4b4ca836", MAC:"12:f7:de:49:4d:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.248246 containerd[1494]: 2025-07-12 00:15:29.243 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r5x9g" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r5x9g-eth0" Jul 12 00:15:29.270732 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:29.288444 containerd[1494]: time="2025-07-12T00:15:29.288390757Z" level=info msg="connecting to shim 6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3" address="unix:///run/containerd/s/d641c1b87b73e665224b6d74d575985e496fce32cdee1b7afb6fa03803c8b387" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:29.327804 containerd[1494]: time="2025-07-12T00:15:29.327764123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc57fd98b-9k6s6,Uid:b377b7d8-5882-4000-97ad-a6d7809971bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b\"" Jul 12 00:15:29.334850 containerd[1494]: time="2025-07-12T00:15:29.334674602Z" level=info msg="CreateContainer within sandbox \"1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:15:29.336709 systemd[1]: Started cri-containerd-6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3.scope - libcontainer container 6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3. 
Jul 12 00:15:29.337324 systemd-networkd[1437]: cali8cfc895cb9c: Link UP Jul 12 00:15:29.338175 systemd-networkd[1437]: cali8cfc895cb9c: Gained carrier Jul 12 00:15:29.350343 containerd[1494]: time="2025-07-12T00:15:29.350289747Z" level=info msg="Container bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:29.368638 containerd[1494]: time="2025-07-12T00:15:29.368584833Z" level=info msg="CreateContainer within sandbox \"1515f17f918cf28b35017152360523e4b02acaa12a00e0de33dff20f4a05ed7b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4\"" Jul 12 00:15:29.369980 containerd[1494]: time="2025-07-12T00:15:29.369947286Z" level=info msg="StartContainer for \"bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4\"" Jul 12 00:15:29.376557 containerd[1494]: time="2025-07-12T00:15:29.376492318Z" level=info msg="connecting to shim bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4" address="unix:///run/containerd/s/708145a7edd3d1163364f596a59bab782bb546b7edfc9b80f3c744540e477454" protocol=ttrpc version=3 Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.011 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jncsr-eth0 csi-node-driver- calico-system 0f56b660-36f9-4f11-961a-175fd72ba31b 676 0 2025-07-12 00:15:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jncsr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8cfc895cb9c [] [] }} 
ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.011 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.089 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" HandleID="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Workload="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.089 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" HandleID="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Workload="localhost-k8s-csi--node--driver--jncsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000239bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jncsr", "timestamp":"2025-07-12 00:15:29.089839114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.090 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.221 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.221 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.274 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.286 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.291 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.296 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.302 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.302 [INFO][4688] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.304 [INFO][4688] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.310 [INFO][4688] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.319 [INFO][4688] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.319 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" host="localhost" Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.319 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:29.379771 containerd[1494]: 2025-07-12 00:15:29.319 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" HandleID="k8s-pod-network.e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Workload="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.329 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jncsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f56b660-36f9-4f11-961a-175fd72ba31b", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jncsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8cfc895cb9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.329 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.329 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cfc895cb9c ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.339 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.342 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" 
Namespace="calico-system" Pod="csi-node-driver-jncsr" WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jncsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f56b660-36f9-4f11-961a-175fd72ba31b", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc", Pod:"csi-node-driver-jncsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8cfc895cb9c", MAC:"e2:3b:8d:74:6c:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:29.381373 containerd[1494]: 2025-07-12 00:15:29.375 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" Namespace="calico-system" Pod="csi-node-driver-jncsr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jncsr-eth0" Jul 12 00:15:29.387411 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:29.429770 systemd[1]: Started cri-containerd-bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4.scope - libcontainer container bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4. Jul 12 00:15:29.445627 containerd[1494]: time="2025-07-12T00:15:29.444792442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r5x9g,Uid:3f85911b-183c-4145-9fa8-e10154ef1527,Namespace:kube-system,Attempt:0,} returns sandbox id \"6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3\"" Jul 12 00:15:29.464144 containerd[1494]: time="2025-07-12T00:15:29.462800491Z" level=info msg="connecting to shim e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc" address="unix:///run/containerd/s/c6ca47828b207c54114235f2b8a33c88217bd1c11e2b0db8d132b75a09a7bc41" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:29.472234 containerd[1494]: time="2025-07-12T00:15:29.471771272Z" level=info msg="CreateContainer within sandbox \"6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:15:29.494980 systemd[1]: Started cri-containerd-e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc.scope - libcontainer container e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc. 
Jul 12 00:15:29.513988 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:29.514420 containerd[1494]: time="2025-07-12T00:15:29.514009082Z" level=info msg="StartContainer for \"bc5ee6addcdaf3be2be1d9f7668d76577239919e6f72fa1f14c286f8f42650c4\" returns successfully" Jul 12 00:15:29.538614 containerd[1494]: time="2025-07-12T00:15:29.538562203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jncsr,Uid:0f56b660-36f9-4f11-961a-175fd72ba31b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc\"" Jul 12 00:15:29.546148 containerd[1494]: time="2025-07-12T00:15:29.546036354Z" level=info msg="Container f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:29.552004 containerd[1494]: time="2025-07-12T00:15:29.551953106Z" level=info msg="CreateContainer within sandbox \"6447551587f157c78a75542593dcc0f789f476fcdf1cd9129f2d1f245480a3f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1\"" Jul 12 00:15:29.552682 containerd[1494]: time="2025-07-12T00:15:29.552645594Z" level=info msg="StartContainer for \"f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1\"" Jul 12 00:15:29.553941 containerd[1494]: time="2025-07-12T00:15:29.553910155Z" level=info msg="connecting to shim f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1" address="unix:///run/containerd/s/d641c1b87b73e665224b6d74d575985e496fce32cdee1b7afb6fa03803c8b387" protocol=ttrpc version=3 Jul 12 00:15:29.559161 containerd[1494]: time="2025-07-12T00:15:29.559004882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:29.560626 containerd[1494]: 
time="2025-07-12T00:15:29.560577962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:15:29.561907 containerd[1494]: time="2025-07-12T00:15:29.561871327Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:29.564986 containerd[1494]: time="2025-07-12T00:15:29.564938077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:29.566000 containerd[1494]: time="2025-07-12T00:15:29.565964727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.672231622s" Jul 12 00:15:29.566064 containerd[1494]: time="2025-07-12T00:15:29.566005292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:15:29.567290 containerd[1494]: time="2025-07-12T00:15:29.567259892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:15:29.568798 containerd[1494]: time="2025-07-12T00:15:29.568763443Z" level=info msg="CreateContainer within sandbox \"bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:15:29.577475 containerd[1494]: time="2025-07-12T00:15:29.577436026Z" level=info msg="Container a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc: CDI devices from CRI 
Config.CDIDevices: []" Jul 12 00:15:29.580754 systemd[1]: Started cri-containerd-f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1.scope - libcontainer container f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1. Jul 12 00:15:29.586550 containerd[1494]: time="2025-07-12T00:15:29.586443251Z" level=info msg="CreateContainer within sandbox \"bafe0a1fe6146d4b348b846b8427bedd3430f6f11774ab17988eafe874cb74d4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\"" Jul 12 00:15:29.588746 containerd[1494]: time="2025-07-12T00:15:29.588698057Z" level=info msg="StartContainer for \"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\"" Jul 12 00:15:29.590598 containerd[1494]: time="2025-07-12T00:15:29.590502607Z" level=info msg="connecting to shim a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc" address="unix:///run/containerd/s/9612a9f2889d77f253b04479d6d0db2a2838cdb2ef0073c57c9d07e30492e46a" protocol=ttrpc version=3 Jul 12 00:15:29.619756 systemd[1]: Started cri-containerd-a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc.scope - libcontainer container a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc. 
Jul 12 00:15:29.626029 containerd[1494]: time="2025-07-12T00:15:29.625980637Z" level=info msg="StartContainer for \"f62b22e1d88bc34b958b38d09f29f8d78a0fcda3e23dd7b08626bc6ba00afcc1\" returns successfully" Jul 12 00:15:29.670159 containerd[1494]: time="2025-07-12T00:15:29.669663831Z" level=info msg="StartContainer for \"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\" returns successfully" Jul 12 00:15:29.919097 containerd[1494]: time="2025-07-12T00:15:29.919057178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bcc7cb945-6swrd,Uid:9a673f24-fcbe-43ff-be04-69618ccc52e5,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:30.090259 systemd-networkd[1437]: cali97d3122d695: Link UP Jul 12 00:15:30.090496 systemd-networkd[1437]: cali97d3122d695: Gained carrier Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:29.972 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0 calico-kube-controllers-6bcc7cb945- calico-system 9a673f24-fcbe-43ff-be04-69618ccc52e5 807 0 2025-07-12 00:15:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bcc7cb945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bcc7cb945-6swrd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali97d3122d695 [] [] }} ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:29.972 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.003 [INFO][4995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" HandleID="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Workload="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.003 [INFO][4995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" HandleID="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Workload="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bcc7cb945-6swrd", "timestamp":"2025-07-12 00:15:30.003581478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.003 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.003 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.003 [INFO][4995] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.014 [INFO][4995] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.018 [INFO][4995] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.045 [INFO][4995] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.047 [INFO][4995] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.049 [INFO][4995] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.049 [INFO][4995] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.051 [INFO][4995] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.068 [INFO][4995] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.082 [INFO][4995] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.082 [INFO][4995] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" host="localhost" Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.082 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:15:30.112155 containerd[1494]: 2025-07-12 00:15:30.082 [INFO][4995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" HandleID="k8s-pod-network.5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Workload="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.114318 containerd[1494]: 2025-07-12 00:15:30.085 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0", GenerateName:"calico-kube-controllers-6bcc7cb945-", Namespace:"calico-system", SelfLink:"", UID:"9a673f24-fcbe-43ff-be04-69618ccc52e5", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bcc7cb945", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bcc7cb945-6swrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97d3122d695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:30.114318 containerd[1494]: 2025-07-12 00:15:30.085 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.114318 containerd[1494]: 2025-07-12 00:15:30.085 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97d3122d695 ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.114318 containerd[1494]: 2025-07-12 00:15:30.091 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.114318 containerd[1494]: 
2025-07-12 00:15:30.091 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0", GenerateName:"calico-kube-controllers-6bcc7cb945-", Namespace:"calico-system", SelfLink:"", UID:"9a673f24-fcbe-43ff-be04-69618ccc52e5", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bcc7cb945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e", Pod:"calico-kube-controllers-6bcc7cb945-6swrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97d3122d695", MAC:"b2:58:b4:3a:b3:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:15:30.114318 containerd[1494]: 
2025-07-12 00:15:30.107 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" Namespace="calico-system" Pod="calico-kube-controllers-6bcc7cb945-6swrd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bcc7cb945--6swrd-eth0" Jul 12 00:15:30.147319 containerd[1494]: time="2025-07-12T00:15:30.147205380Z" level=info msg="connecting to shim 5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e" address="unix:///run/containerd/s/92d8bda8c888665bcdade8163957873c7f520eb7de5f18bee28c58befe343df4" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:30.188734 systemd[1]: Started cri-containerd-5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e.scope - libcontainer container 5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e. Jul 12 00:15:30.224484 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:15:30.265965 containerd[1494]: time="2025-07-12T00:15:30.265895448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bcc7cb945-6swrd,Uid:9a673f24-fcbe-43ff-be04-69618ccc52e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e\"" Jul 12 00:15:30.299087 kubelet[2630]: I0712 00:15:30.298993 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc57fd98b-9k6s6" podStartSLOduration=31.29897338 podStartE2EDuration="31.29897338s" podCreationTimestamp="2025-07-12 00:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:30.281424548 +0000 UTC m=+47.460832631" watchObservedRunningTime="2025-07-12 00:15:30.29897338 +0000 UTC m=+47.478381463" Jul 12 00:15:30.321070 kubelet[2630]: I0712 00:15:30.321008 
2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qgn44" podStartSLOduration=23.024803438 podStartE2EDuration="26.320867596s" podCreationTimestamp="2025-07-12 00:15:04 +0000 UTC" firstStartedPulling="2025-07-12 00:15:26.27100643 +0000 UTC m=+43.450414513" lastFinishedPulling="2025-07-12 00:15:29.567070588 +0000 UTC m=+46.746478671" observedRunningTime="2025-07-12 00:15:30.302136896 +0000 UTC m=+47.481544979" watchObservedRunningTime="2025-07-12 00:15:30.320867596 +0000 UTC m=+47.500275679" Jul 12 00:15:30.321863 kubelet[2630]: I0712 00:15:30.321573 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r5x9g" podStartSLOduration=41.321563363 podStartE2EDuration="41.321563363s" podCreationTimestamp="2025-07-12 00:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:30.318433492 +0000 UTC m=+47.497841575" watchObservedRunningTime="2025-07-12 00:15:30.321563363 +0000 UTC m=+47.500971446" Jul 12 00:15:30.450036 containerd[1494]: time="2025-07-12T00:15:30.449583156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\" id:\"cd798768ec335543873a5f8529c0f89aba8201f96466d451ea69c1d70d61664a\" pid:5072 exit_status:1 exited_at:{seconds:1752279330 nanos:449028606}" Jul 12 00:15:30.502711 systemd-networkd[1437]: cali57191a8f67a: Gained IPv6LL Jul 12 00:15:30.567623 systemd-networkd[1437]: calibac4b4ca836: Gained IPv6LL Jul 12 00:15:31.085899 containerd[1494]: time="2025-07-12T00:15:31.085846786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:31.086656 containerd[1494]: time="2025-07-12T00:15:31.086625162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: 
active requests=0, bytes read=8225702" Jul 12 00:15:31.088428 containerd[1494]: time="2025-07-12T00:15:31.087596361Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:31.089551 containerd[1494]: time="2025-07-12T00:15:31.089520757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:31.090138 containerd[1494]: time="2025-07-12T00:15:31.090108870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.522808333s" Jul 12 00:15:31.090192 containerd[1494]: time="2025-07-12T00:15:31.090141394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:15:31.092724 containerd[1494]: time="2025-07-12T00:15:31.092688346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:15:31.094486 containerd[1494]: time="2025-07-12T00:15:31.094429560Z" level=info msg="CreateContainer within sandbox \"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:15:31.104790 containerd[1494]: time="2025-07-12T00:15:31.104742147Z" level=info msg="Container 402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:31.117886 containerd[1494]: time="2025-07-12T00:15:31.117830915Z" level=info msg="CreateContainer 
within sandbox \"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482\"" Jul 12 00:15:31.118641 containerd[1494]: time="2025-07-12T00:15:31.118315735Z" level=info msg="StartContainer for \"402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482\"" Jul 12 00:15:31.119803 containerd[1494]: time="2025-07-12T00:15:31.119760793Z" level=info msg="connecting to shim 402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482" address="unix:///run/containerd/s/c6ca47828b207c54114235f2b8a33c88217bd1c11e2b0db8d132b75a09a7bc41" protocol=ttrpc version=3 Jul 12 00:15:31.142653 systemd-networkd[1437]: cali8cfc895cb9c: Gained IPv6LL Jul 12 00:15:31.143731 systemd[1]: Started cri-containerd-402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482.scope - libcontainer container 402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482. 
Jul 12 00:15:31.200268 containerd[1494]: time="2025-07-12T00:15:31.200229999Z" level=info msg="StartContainer for \"402779bc8127f0b83c785478ea69ce66755be7d746af3fabdab43d6a76c20482\" returns successfully" Jul 12 00:15:31.280005 kubelet[2630]: I0712 00:15:31.279668 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:31.343221 containerd[1494]: time="2025-07-12T00:15:31.342769671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\" id:\"60a7f5dcc5364309d76fd0d5526cee67f22d2ee467f28d9007fbd179dfb70d97\" pid:5135 exited_at:{seconds:1752279331 nanos:342323376}" Jul 12 00:15:31.782688 systemd-networkd[1437]: cali97d3122d695: Gained IPv6LL Jul 12 00:15:32.910791 containerd[1494]: time="2025-07-12T00:15:32.910549115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:32.912309 containerd[1494]: time="2025-07-12T00:15:32.912127346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:15:32.914822 containerd[1494]: time="2025-07-12T00:15:32.914181674Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:32.917741 containerd[1494]: time="2025-07-12T00:15:32.917277848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:32.921182 containerd[1494]: time="2025-07-12T00:15:32.921134995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.82797115s" Jul 12 00:15:32.921459 containerd[1494]: time="2025-07-12T00:15:32.921345340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:15:32.923981 containerd[1494]: time="2025-07-12T00:15:32.922759631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:15:32.932269 containerd[1494]: time="2025-07-12T00:15:32.932223335Z" level=info msg="CreateContainer within sandbox \"5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:15:32.976671 containerd[1494]: time="2025-07-12T00:15:32.975465924Z" level=info msg="Container 5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:33.122173 containerd[1494]: time="2025-07-12T00:15:33.122046226Z" level=info msg="CreateContainer within sandbox \"5d06a22db283e59892e87ebe189bdd7052191d8916c3fba1dc24151ceb536f9e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\"" Jul 12 00:15:33.123103 containerd[1494]: time="2025-07-12T00:15:33.123039385Z" level=info msg="StartContainer for \"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\"" Jul 12 00:15:33.124442 containerd[1494]: time="2025-07-12T00:15:33.124414909Z" level=info msg="connecting to shim 5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc" address="unix:///run/containerd/s/92d8bda8c888665bcdade8163957873c7f520eb7de5f18bee28c58befe343df4" protocol=ttrpc version=3 Jul 12 00:15:33.155750 
systemd[1]: Started cri-containerd-5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc.scope - libcontainer container 5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc. Jul 12 00:15:33.208581 containerd[1494]: time="2025-07-12T00:15:33.206192488Z" level=info msg="StartContainer for \"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\" returns successfully" Jul 12 00:15:33.368338 containerd[1494]: time="2025-07-12T00:15:33.368296393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\" id:\"77a2bbdb5787bb2b6ce02cf2b5bc5c40136567efbc1fe183a72fe7d158e3f6ec\" pid:5218 exit_status:1 exited_at:{seconds:1752279333 nanos:362974919}" Jul 12 00:15:33.494467 systemd[1]: Started sshd@9-10.0.0.143:22-10.0.0.1:42506.service - OpenSSH per-connection server daemon (10.0.0.1:42506). Jul 12 00:15:33.607607 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 42506 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:33.609372 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:33.613848 systemd-logind[1478]: New session 10 of user core. Jul 12 00:15:33.623772 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:15:33.851808 sshd[5235]: Connection closed by 10.0.0.1 port 42506 Jul 12 00:15:33.852477 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:33.863674 systemd[1]: sshd@9-10.0.0.143:22-10.0.0.1:42506.service: Deactivated successfully. Jul 12 00:15:33.866539 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:15:33.868821 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:15:33.872331 systemd[1]: Started sshd@10-10.0.0.143:22-10.0.0.1:42520.service - OpenSSH per-connection server daemon (10.0.0.1:42520). Jul 12 00:15:33.872914 systemd-logind[1478]: Removed session 10. 
Jul 12 00:15:33.925312 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:33.926973 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:33.931387 systemd-logind[1478]: New session 11 of user core. Jul 12 00:15:33.940685 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:15:34.279663 sshd[5252]: Connection closed by 10.0.0.1 port 42520 Jul 12 00:15:34.279886 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:34.293876 systemd[1]: sshd@10-10.0.0.143:22-10.0.0.1:42520.service: Deactivated successfully. Jul 12 00:15:34.295940 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:15:34.299782 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:15:34.305621 systemd[1]: Started sshd@11-10.0.0.143:22-10.0.0.1:42526.service - OpenSSH per-connection server daemon (10.0.0.1:42526). Jul 12 00:15:34.309270 systemd-logind[1478]: Removed session 11. 
Jul 12 00:15:34.365057 containerd[1494]: time="2025-07-12T00:15:34.364877899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\" id:\"803aeec78b645504546de4af33eef0cd5a1b2c379f8342ea00fb3ad590a349c3\" pid:5286 exited_at:{seconds:1752279334 nanos:364611148}" Jul 12 00:15:34.373594 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 42526 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:34.376131 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:34.382890 kubelet[2630]: I0712 00:15:34.382222 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bcc7cb945-6swrd" podStartSLOduration=27.730949049 podStartE2EDuration="30.382204653s" podCreationTimestamp="2025-07-12 00:15:04 +0000 UTC" firstStartedPulling="2025-07-12 00:15:30.271094818 +0000 UTC m=+47.450502901" lastFinishedPulling="2025-07-12 00:15:32.922350422 +0000 UTC m=+50.101758505" observedRunningTime="2025-07-12 00:15:33.299746269 +0000 UTC m=+50.479154352" watchObservedRunningTime="2025-07-12 00:15:34.382204653 +0000 UTC m=+51.561612736" Jul 12 00:15:34.383766 systemd-logind[1478]: New session 12 of user core. Jul 12 00:15:34.391876 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 12 00:15:34.454854 containerd[1494]: time="2025-07-12T00:15:34.454809736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:34.458574 containerd[1494]: time="2025-07-12T00:15:34.455498697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:15:34.458574 containerd[1494]: time="2025-07-12T00:15:34.457284347Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:34.460384 containerd[1494]: time="2025-07-12T00:15:34.459165488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:34.460384 containerd[1494]: time="2025-07-12T00:15:34.460266577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.536514866s" Jul 12 00:15:34.460384 containerd[1494]: time="2025-07-12T00:15:34.460294820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:15:34.462170 containerd[1494]: time="2025-07-12T00:15:34.462141277Z" level=info msg="CreateContainer within sandbox \"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:15:34.469501 containerd[1494]: time="2025-07-12T00:15:34.469463456Z" level=info msg="Container 0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:34.478483 containerd[1494]: time="2025-07-12T00:15:34.478356380Z" level=info msg="CreateContainer within sandbox \"e1f7fccfd0eb1d1c1ccf3a259d1dee57f16e25cf14c39186f8e2a0b3ae7ba6bc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a\"" Jul 12 00:15:34.479016 containerd[1494]: time="2025-07-12T00:15:34.478993695Z" level=info msg="StartContainer for \"0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a\"" Jul 12 00:15:34.480582 containerd[1494]: time="2025-07-12T00:15:34.480539597Z" level=info msg="connecting to shim 0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a" address="unix:///run/containerd/s/c6ca47828b207c54114235f2b8a33c88217bd1c11e2b0db8d132b75a09a7bc41" protocol=ttrpc version=3 Jul 12 00:15:34.508712 systemd[1]: Started cri-containerd-0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a.scope - libcontainer container 0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a. Jul 12 00:15:34.588014 containerd[1494]: time="2025-07-12T00:15:34.585892644Z" level=info msg="StartContainer for \"0f16bca9e0e13f75d865e117fe347d4bee69967332680b1678046a70bc01e56a\" returns successfully" Jul 12 00:15:34.604757 sshd[5298]: Connection closed by 10.0.0.1 port 42526 Jul 12 00:15:34.605382 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:34.609780 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:15:34.610056 systemd[1]: sshd@11-10.0.0.143:22-10.0.0.1:42526.service: Deactivated successfully. 
Jul 12 00:15:34.611930 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:15:34.613780 systemd-logind[1478]: Removed session 12. Jul 12 00:15:35.017463 kubelet[2630]: I0712 00:15:35.017003 2630 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:15:35.019590 kubelet[2630]: I0712 00:15:35.019550 2630 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:15:35.314568 kubelet[2630]: I0712 00:15:35.314353 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jncsr" podStartSLOduration=26.39376695 podStartE2EDuration="31.314334492s" podCreationTimestamp="2025-07-12 00:15:04 +0000 UTC" firstStartedPulling="2025-07-12 00:15:29.540358272 +0000 UTC m=+46.719766315" lastFinishedPulling="2025-07-12 00:15:34.460925774 +0000 UTC m=+51.640333857" observedRunningTime="2025-07-12 00:15:35.312939451 +0000 UTC m=+52.492347534" watchObservedRunningTime="2025-07-12 00:15:35.314334492 +0000 UTC m=+52.493742575" Jul 12 00:15:39.621331 systemd[1]: Started sshd@12-10.0.0.143:22-10.0.0.1:42540.service - OpenSSH per-connection server daemon (10.0.0.1:42540). Jul 12 00:15:39.679593 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 42540 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:39.680557 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:39.684619 systemd-logind[1478]: New session 13 of user core. Jul 12 00:15:39.696718 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 12 00:15:39.854832 sshd[5348]: Connection closed by 10.0.0.1 port 42540 Jul 12 00:15:39.856543 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:39.878230 systemd[1]: sshd@12-10.0.0.143:22-10.0.0.1:42540.service: Deactivated successfully. Jul 12 00:15:39.881485 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:15:39.882406 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:15:39.885666 systemd[1]: Started sshd@13-10.0.0.143:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). Jul 12 00:15:39.886868 systemd-logind[1478]: Removed session 13. Jul 12 00:15:39.948380 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:39.949917 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:39.954583 systemd-logind[1478]: New session 14 of user core. Jul 12 00:15:39.961706 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:15:40.205896 sshd[5364]: Connection closed by 10.0.0.1 port 42542 Jul 12 00:15:40.209627 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:40.219096 systemd[1]: sshd@13-10.0.0.143:22-10.0.0.1:42542.service: Deactivated successfully. Jul 12 00:15:40.220777 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:15:40.223091 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:15:40.225414 systemd[1]: Started sshd@14-10.0.0.143:22-10.0.0.1:42548.service - OpenSSH per-connection server daemon (10.0.0.1:42548). Jul 12 00:15:40.226117 systemd-logind[1478]: Removed session 14. 
Jul 12 00:15:40.289689 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 42548 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:40.291034 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:40.295534 systemd-logind[1478]: New session 15 of user core. Jul 12 00:15:40.305724 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:15:41.973645 sshd[5379]: Connection closed by 10.0.0.1 port 42548 Jul 12 00:15:41.974345 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:41.984718 systemd[1]: sshd@14-10.0.0.143:22-10.0.0.1:42548.service: Deactivated successfully. Jul 12 00:15:41.988596 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:15:41.988794 systemd[1]: session-15.scope: Consumed 549ms CPU time, 66.1M memory peak. Jul 12 00:15:41.990658 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:15:41.995306 systemd[1]: Started sshd@15-10.0.0.143:22-10.0.0.1:42558.service - OpenSSH per-connection server daemon (10.0.0.1:42558). Jul 12 00:15:41.996810 systemd-logind[1478]: Removed session 15. Jul 12 00:15:42.055051 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 42558 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:42.056690 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:42.061116 systemd-logind[1478]: New session 16 of user core. Jul 12 00:15:42.067664 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:15:42.376918 sshd[5402]: Connection closed by 10.0.0.1 port 42558 Jul 12 00:15:42.377698 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:42.391739 systemd[1]: sshd@15-10.0.0.143:22-10.0.0.1:42558.service: Deactivated successfully. Jul 12 00:15:42.397155 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 12 00:15:42.398856 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:15:42.405975 systemd[1]: Started sshd@16-10.0.0.143:22-10.0.0.1:42562.service - OpenSSH per-connection server daemon (10.0.0.1:42562). Jul 12 00:15:42.408662 systemd-logind[1478]: Removed session 16. Jul 12 00:15:42.461271 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 42562 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:42.462862 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:42.468558 systemd-logind[1478]: New session 17 of user core. Jul 12 00:15:42.483726 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:15:42.643296 sshd[5422]: Connection closed by 10.0.0.1 port 42562 Jul 12 00:15:42.643837 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:42.647480 systemd[1]: sshd@16-10.0.0.143:22-10.0.0.1:42562.service: Deactivated successfully. Jul 12 00:15:42.649326 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:15:42.650039 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:15:42.651410 systemd-logind[1478]: Removed session 17. 
Jul 12 00:15:43.963858 containerd[1494]: time="2025-07-12T00:15:43.963801797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5eb962f74767312186a3146ac9b0c102e3642604e7ef0f490720aa40b0806bbc\" id:\"ec395c451ea15735c88443ee771b15e167a7956291983773db8aa5544f8b9a45\" pid:5448 exited_at:{seconds:1752279343 nanos:963463081}" Jul 12 00:15:44.046502 containerd[1494]: time="2025-07-12T00:15:44.046446926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2802e7d6a9170e0f7dbd52349b815f20c67fa282b903c700bcef8641731b4bc\" id:\"acfe1f29df839335f1772de29d32c38fc6b6b5710a600c04686ea78ca0e856c2\" pid:5470 exited_at:{seconds:1752279344 nanos:46167336}" Jul 12 00:15:47.668997 systemd[1]: Started sshd@17-10.0.0.143:22-10.0.0.1:40010.service - OpenSSH per-connection server daemon (10.0.0.1:40010). Jul 12 00:15:47.715241 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 40010 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:47.716572 sshd-session[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:47.721157 systemd-logind[1478]: New session 18 of user core. Jul 12 00:15:47.734684 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:15:47.872898 sshd[5487]: Connection closed by 10.0.0.1 port 40010 Jul 12 00:15:47.873222 sshd-session[5485]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:47.877541 systemd[1]: sshd@17-10.0.0.143:22-10.0.0.1:40010.service: Deactivated successfully. Jul 12 00:15:47.881941 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:15:47.884368 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:15:47.886912 systemd-logind[1478]: Removed session 18. 
Jul 12 00:15:48.283464 kubelet[2630]: I0712 00:15:48.283140 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:52.892593 systemd[1]: Started sshd@18-10.0.0.143:22-10.0.0.1:51852.service - OpenSSH per-connection server daemon (10.0.0.1:51852). Jul 12 00:15:52.963208 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 51852 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:52.965139 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:52.978154 systemd-logind[1478]: New session 19 of user core. Jul 12 00:15:52.986464 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:15:53.120205 sshd[5510]: Connection closed by 10.0.0.1 port 51852 Jul 12 00:15:53.120508 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:53.124108 systemd[1]: sshd@18-10.0.0.143:22-10.0.0.1:51852.service: Deactivated successfully. Jul 12 00:15:53.126259 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:15:53.127366 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:15:53.129377 systemd-logind[1478]: Removed session 19. Jul 12 00:15:53.988239 kubelet[2630]: I0712 00:15:53.987830 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:54.152383 containerd[1494]: time="2025-07-12T00:15:54.152243302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"323007fea4648c23081108d39151fe8fa7c5cb04b82649f1fc277545be2cf611\" id:\"656e8fd486cb170dc782b0628403317635703f1b4dda76f2776fe92ab4018610\" pid:5536 exited_at:{seconds:1752279354 nanos:151954441}" Jul 12 00:15:58.133931 systemd[1]: Started sshd@19-10.0.0.143:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). 
Jul 12 00:15:58.184942 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:RtT/QxinouVmA0AAxfL/HTaSaGmn926OW38A0cdMEs4 Jul 12 00:15:58.186254 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:58.190254 systemd-logind[1478]: New session 20 of user core. Jul 12 00:15:58.195700 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:15:58.354070 sshd[5552]: Connection closed by 10.0.0.1 port 51858 Jul 12 00:15:58.354643 sshd-session[5550]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:58.357997 systemd[1]: sshd@19-10.0.0.143:22-10.0.0.1:51858.service: Deactivated successfully. Jul 12 00:15:58.359999 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:15:58.360661 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:15:58.362119 systemd-logind[1478]: Removed session 20.