Jun 21 02:33:05.792332 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 21 02:33:05.792356 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat Jun 21 00:00:47 -00 2025 Jun 21 02:33:05.792365 kernel: KASLR enabled Jun 21 02:33:05.792371 kernel: efi: EFI v2.7 by EDK II Jun 21 02:33:05.792377 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Jun 21 02:33:05.792382 kernel: random: crng init done Jun 21 02:33:05.792389 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jun 21 02:33:05.792395 kernel: secureboot: Secure boot enabled Jun 21 02:33:05.792401 kernel: ACPI: Early table checksum verification disabled Jun 21 02:33:05.792408 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Jun 21 02:33:05.792414 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jun 21 02:33:05.792420 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792426 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792432 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792439 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792447 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792453 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792459 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792465 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792472 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 02:33:05.792478 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jun 21 02:33:05.792484 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 21 02:33:05.792490 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jun 21 02:33:05.792496 kernel: NODE_DATA(0) allocated [mem 0xdc736a00-0xdc73dfff] Jun 21 02:33:05.792502 kernel: Zone ranges: Jun 21 02:33:05.792510 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jun 21 02:33:05.792516 kernel: DMA32 empty Jun 21 02:33:05.792522 kernel: Normal empty Jun 21 02:33:05.792528 kernel: Device empty Jun 21 02:33:05.792534 kernel: Movable zone start for each node Jun 21 02:33:05.792540 kernel: Early memory node ranges Jun 21 02:33:05.792546 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Jun 21 02:33:05.792552 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Jun 21 02:33:05.792558 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Jun 21 02:33:05.792564 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Jun 21 02:33:05.792571 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Jun 21 02:33:05.792577 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Jun 21 02:33:05.792584 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Jun 21 02:33:05.792590 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Jun 21 02:33:05.792597 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jun 21 02:33:05.792617 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Jun 21 02:33:05.792624 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jun 21 02:33:05.792630 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Jun 21 02:33:05.792637 kernel: psci: probing for conduit method from ACPI. Jun 21 02:33:05.792645 kernel: psci: PSCIv1.1 detected in firmware. Jun 21 02:33:05.792654 kernel: psci: Using standard PSCI v0.2 function IDs Jun 21 02:33:05.792661 kernel: psci: Trusted OS migration not required Jun 21 02:33:05.792668 kernel: psci: SMC Calling Convention v1.1 Jun 21 02:33:05.792675 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 21 02:33:05.792681 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 21 02:33:05.792687 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 21 02:33:05.792694 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jun 21 02:33:05.792701 kernel: Detected PIPT I-cache on CPU0 Jun 21 02:33:05.792709 kernel: CPU features: detected: GIC system register CPU interface Jun 21 02:33:05.792715 kernel: CPU features: detected: Spectre-v4 Jun 21 02:33:05.792722 kernel: CPU features: detected: Spectre-BHB Jun 21 02:33:05.792728 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 21 02:33:05.792735 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 21 02:33:05.792741 kernel: CPU features: detected: ARM erratum 1418040 Jun 21 02:33:05.792748 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 21 02:33:05.792754 kernel: alternatives: applying boot alternatives Jun 21 02:33:05.792761 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db Jun 21 02:33:05.792768 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 02:33:05.792784 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 02:33:05.792794 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 02:33:05.792800 kernel: Fallback order for Node 0: 0 Jun 21 02:33:05.792807 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jun 21 02:33:05.792814 kernel: Policy zone: DMA Jun 21 02:33:05.792820 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 02:33:05.792827 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jun 21 02:33:05.792833 kernel: software IO TLB: area num 4. Jun 21 02:33:05.792840 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jun 21 02:33:05.792846 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Jun 21 02:33:05.792853 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 21 02:33:05.792859 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 02:33:05.792866 kernel: rcu: RCU event tracing is enabled. Jun 21 02:33:05.792874 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 21 02:33:05.792881 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 02:33:05.792887 kernel: Tracing variant of Tasks RCU enabled. Jun 21 02:33:05.792894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 21 02:33:05.792900 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 21 02:33:05.792907 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jun 21 02:33:05.792913 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jun 21 02:33:05.792919 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 21 02:33:05.792926 kernel: GICv3: 256 SPIs implemented Jun 21 02:33:05.792932 kernel: GICv3: 0 Extended SPIs implemented Jun 21 02:33:05.792938 kernel: Root IRQ handler: gic_handle_irq Jun 21 02:33:05.792946 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 21 02:33:05.792952 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jun 21 02:33:05.792959 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 21 02:33:05.792965 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 21 02:33:05.792972 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Jun 21 02:33:05.792978 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Jun 21 02:33:05.792985 kernel: GICv3: using LPI property table @0x00000000400f0000 Jun 21 02:33:05.792991 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 Jun 21 02:33:05.792997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 02:33:05.793004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 21 02:33:05.793010 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 21 02:33:05.793017 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 21 02:33:05.793025 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 21 02:33:05.793031 kernel: arm-pv: using stolen time PV Jun 21 02:33:05.793038 kernel: Console: colour dummy device 80x25 Jun 21 02:33:05.793044 kernel: ACPI: Core revision 20240827 Jun 21 02:33:05.793051 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 21 02:33:05.793058 kernel: pid_max: default: 32768 minimum: 301 Jun 21 02:33:05.793065 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 02:33:05.793071 kernel: landlock: Up and running. Jun 21 02:33:05.793077 kernel: SELinux: Initializing. Jun 21 02:33:05.793085 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 02:33:05.793092 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 02:33:05.793098 kernel: rcu: Hierarchical SRCU implementation. Jun 21 02:33:05.793105 kernel: rcu: Max phase no-delay instances is 400. Jun 21 02:33:05.793112 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 02:33:05.793118 kernel: Remapping and enabling EFI services. Jun 21 02:33:05.793125 kernel: smp: Bringing up secondary CPUs ... 
Jun 21 02:33:05.793131 kernel: Detected PIPT I-cache on CPU1 Jun 21 02:33:05.793138 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 21 02:33:05.793145 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 Jun 21 02:33:05.793157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 21 02:33:05.793164 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 21 02:33:05.793172 kernel: Detected PIPT I-cache on CPU2 Jun 21 02:33:05.793179 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jun 21 02:33:05.793186 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 Jun 21 02:33:05.793193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 21 02:33:05.793200 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jun 21 02:33:05.793207 kernel: Detected PIPT I-cache on CPU3 Jun 21 02:33:05.793227 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jun 21 02:33:05.793234 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 Jun 21 02:33:05.793241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 21 02:33:05.793247 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jun 21 02:33:05.793254 kernel: smp: Brought up 1 node, 4 CPUs Jun 21 02:33:05.793261 kernel: SMP: Total of 4 processors activated. Jun 21 02:33:05.793268 kernel: CPU: All CPU(s) started at EL1 Jun 21 02:33:05.793275 kernel: CPU features: detected: 32-bit EL0 Support Jun 21 02:33:05.793282 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 21 02:33:05.793290 kernel: CPU features: detected: Common not Private translations Jun 21 02:33:05.793297 kernel: CPU features: detected: CRC32 instructions Jun 21 02:33:05.793304 kernel: CPU features: detected: Enhanced Virtualization Traps Jun 21 02:33:05.793311 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 21 02:33:05.793318 kernel: CPU features: detected: LSE atomic instructions Jun 21 02:33:05.793325 kernel: CPU features: detected: Privileged Access Never Jun 21 02:33:05.793332 kernel: CPU features: detected: RAS Extension Support Jun 21 02:33:05.793339 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 21 02:33:05.793346 kernel: alternatives: applying system-wide alternatives Jun 21 02:33:05.793355 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jun 21 02:33:05.793362 kernel: Memory: 2422296K/2572288K available (11136K kernel code, 2284K rwdata, 8980K rodata, 39488K init, 1037K bss, 127840K reserved, 16384K cma-reserved) Jun 21 02:33:05.793369 kernel: devtmpfs: initialized Jun 21 02:33:05.793376 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 02:33:05.793383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 21 02:33:05.793390 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 21 02:33:05.793397 kernel: 0 pages in range for non-PLT usage Jun 21 02:33:05.793404 kernel: 508496 pages in range for PLT usage Jun 21 02:33:05.793411 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 02:33:05.793418 kernel: SMBIOS 3.0.0 present. 
Jun 21 02:33:05.793425 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jun 21 02:33:05.793432 kernel: DMI: Memory slots populated: 1/1 Jun 21 02:33:05.793439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 02:33:05.793446 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 21 02:33:05.793453 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 21 02:33:05.793460 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 21 02:33:05.793467 kernel: audit: initializing netlink subsys (disabled) Jun 21 02:33:05.793474 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jun 21 02:33:05.793482 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 02:33:05.793489 kernel: cpuidle: using governor menu Jun 21 02:33:05.793496 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 21 02:33:05.793503 kernel: ASID allocator initialised with 32768 entries Jun 21 02:33:05.793510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 02:33:05.793517 kernel: Serial: AMBA PL011 UART driver Jun 21 02:33:05.793524 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 21 02:33:05.793531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 21 02:33:05.793538 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 21 02:33:05.793546 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 21 02:33:05.793553 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 02:33:05.793560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 02:33:05.793567 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 21 02:33:05.793574 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 21 02:33:05.793581 kernel: ACPI: Added _OSI(Module Device) Jun 21 02:33:05.793587 kernel: ACPI: Added _OSI(Processor Device) Jun 21 02:33:05.793594 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 02:33:05.793601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 02:33:05.793615 kernel: ACPI: Interpreter enabled Jun 21 02:33:05.793623 kernel: ACPI: Using GIC for interrupt routing Jun 21 02:33:05.793630 kernel: ACPI: MCFG table detected, 1 entries Jun 21 02:33:05.793637 kernel: ACPI: CPU0 has been hot-added Jun 21 02:33:05.793643 kernel: ACPI: CPU1 has been hot-added Jun 21 02:33:05.793651 kernel: ACPI: CPU2 has been hot-added Jun 21 02:33:05.793658 kernel: ACPI: CPU3 has been hot-added Jun 21 02:33:05.793665 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jun 21 02:33:05.793673 kernel: printk: legacy console [ttyAMA0] enabled Jun 21 02:33:05.793682 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 02:33:05.793836 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 21 02:33:05.793903 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 21 02:33:05.793960 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 21 02:33:05.794017 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jun 21 02:33:05.794073 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jun 21 02:33:05.794082 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jun 21 02:33:05.794092 
kernel: PCI host bridge to bus 0000:00 Jun 21 02:33:05.794176 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jun 21 02:33:05.794233 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 21 02:33:05.794285 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jun 21 02:33:05.794337 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 02:33:05.794411 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jun 21 02:33:05.794482 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 02:33:05.794545 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jun 21 02:33:05.794612 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jun 21 02:33:05.794679 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jun 21 02:33:05.794738 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jun 21 02:33:05.794820 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jun 21 02:33:05.794880 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jun 21 02:33:05.794939 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jun 21 02:33:05.794990 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 21 02:33:05.795042 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jun 21 02:33:05.795054 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 21 02:33:05.795062 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 21 02:33:05.795069 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 21 02:33:05.795076 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 21 02:33:05.795083 kernel: iommu: Default domain type: Translated Jun 21 02:33:05.795090 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 21 02:33:05.795099 kernel: efivars: Registered efivars operations Jun 21 02:33:05.795107 kernel: vgaarb: loaded Jun 21 02:33:05.795113 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 21 02:33:05.795120 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 02:33:05.795127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 02:33:05.795134 kernel: pnp: PnP ACPI init Jun 21 02:33:05.795204 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jun 21 02:33:05.795214 kernel: pnp: PnP ACPI: found 1 devices Jun 21 02:33:05.795223 kernel: NET: Registered PF_INET protocol family Jun 21 02:33:05.795230 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 02:33:05.795237 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 21 02:33:05.795244 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 02:33:05.795251 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 02:33:05.795258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 21 02:33:05.795265 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 21 02:33:05.795272 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 02:33:05.795279 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 02:33:05.795288 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 02:33:05.795295 kernel: PCI: CLS 0 bytes, default 64 Jun 21 02:33:05.795301 
kernel: kvm [1]: HYP mode not available Jun 21 02:33:05.795313 kernel: Initialise system trusted keyrings Jun 21 02:33:05.795320 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 21 02:33:05.795327 kernel: Key type asymmetric registered Jun 21 02:33:05.795334 kernel: Asymmetric key parser 'x509' registered Jun 21 02:33:05.795341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 21 02:33:05.795348 kernel: io scheduler mq-deadline registered Jun 21 02:33:05.795356 kernel: io scheduler kyber registered Jun 21 02:33:05.795363 kernel: io scheduler bfq registered Jun 21 02:33:05.795370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 21 02:33:05.795377 kernel: ACPI: button: Power Button [PWRB] Jun 21 02:33:05.795384 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 21 02:33:05.795443 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jun 21 02:33:05.795452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 02:33:05.795459 kernel: thunder_xcv, ver 1.0 Jun 21 02:33:05.795466 kernel: thunder_bgx, ver 1.0 Jun 21 02:33:05.795475 kernel: nicpf, ver 1.0 Jun 21 02:33:05.795481 kernel: nicvf, ver 1.0 Jun 21 02:33:05.795555 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 21 02:33:05.795620 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-21T02:33:05 UTC (1750473185) Jun 21 02:33:05.795630 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 21 02:33:05.795637 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jun 21 02:33:05.795645 kernel: watchdog: NMI not fully supported Jun 21 02:33:05.795652 kernel: watchdog: Hard watchdog permanently disabled Jun 21 02:33:05.795666 kernel: NET: Registered PF_INET6 protocol family Jun 21 02:33:05.795673 kernel: Segment Routing with IPv6 Jun 21 02:33:05.795680 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 02:33:05.795687 kernel: NET: Registered PF_PACKET protocol family Jun 21 02:33:05.795694 kernel: Key type dns_resolver registered Jun 21 02:33:05.795701 kernel: registered taskstats version 1 Jun 21 02:33:05.795708 kernel: Loading compiled-in X.509 certificates Jun 21 02:33:05.795715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 0d4b619b81572779adc2f9dd5f1325c23c2a41ec' Jun 21 02:33:05.795722 kernel: Demotion targets for Node 0: null Jun 21 02:33:05.795731 kernel: Key type .fscrypt registered Jun 21 02:33:05.795738 kernel: Key type fscrypt-provisioning registered Jun 21 02:33:05.795745 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 02:33:05.795752 kernel: ima: Allocated hash algorithm: sha1 Jun 21 02:33:05.795759 kernel: ima: No architecture policies found Jun 21 02:33:05.795766 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 21 02:33:05.795783 kernel: clk: Disabling unused clocks Jun 21 02:33:05.795790 kernel: PM: genpd: Disabling unused power domains Jun 21 02:33:05.795798 kernel: Warning: unable to open an initial console. Jun 21 02:33:05.795807 kernel: Freeing unused kernel memory: 39488K Jun 21 02:33:05.795814 kernel: Run /init as init process Jun 21 02:33:05.795821 kernel: with arguments: Jun 21 02:33:05.795828 kernel: /init Jun 21 02:33:05.795835 kernel: with environment: Jun 21 02:33:05.795842 kernel: HOME=/ Jun 21 02:33:05.795848 kernel: TERM=linux Jun 21 02:33:05.795855 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 02:33:05.795863 systemd[1]: Successfully made /usr/ read-only. 
Jun 21 02:33:05.795875 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 02:33:05.795883 systemd[1]: Detected virtualization kvm. Jun 21 02:33:05.795894 systemd[1]: Detected architecture arm64. Jun 21 02:33:05.795903 systemd[1]: Running in initrd. Jun 21 02:33:05.795914 systemd[1]: No hostname configured, using default hostname. Jun 21 02:33:05.795923 systemd[1]: Hostname set to . Jun 21 02:33:05.795931 systemd[1]: Initializing machine ID from VM UUID. Jun 21 02:33:05.795940 systemd[1]: Queued start job for default target initrd.target. Jun 21 02:33:05.795948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:33:05.795955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:33:05.795963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 02:33:05.795971 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 02:33:05.795978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 02:33:05.795987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 02:33:05.795997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 02:33:05.796005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 02:33:05.796012 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:33:05.796020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:33:05.796027 systemd[1]: Reached target paths.target - Path Units. Jun 21 02:33:05.796035 systemd[1]: Reached target slices.target - Slice Units. Jun 21 02:33:05.796042 systemd[1]: Reached target swap.target - Swaps. Jun 21 02:33:05.796050 systemd[1]: Reached target timers.target - Timer Units. Jun 21 02:33:05.796058 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 02:33:05.796066 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 02:33:05.796074 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 02:33:05.796081 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 02:33:05.796089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:33:05.796097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 02:33:05.796104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:33:05.796112 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 02:33:05.796120 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 02:33:05.796128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 02:33:05.796136 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jun 21 02:33:05.796144 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 02:33:05.796151 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 02:33:05.796159 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 02:33:05.796166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 02:33:05.796174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:33:05.796182 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 02:33:05.796191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 02:33:05.796199 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 02:33:05.796206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 02:33:05.796232 systemd-journald[244]: Collecting audit messages is disabled. Jun 21 02:33:05.796253 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:33:05.796262 systemd-journald[244]: Journal started Jun 21 02:33:05.796282 systemd-journald[244]: Runtime Journal (/run/log/journal/31da89b04db14f7ca20df2bd8b5bcf94) is 6M, max 48.5M, 42.4M free. Jun 21 02:33:05.789939 systemd-modules-load[245]: Inserted module 'overlay' Jun 21 02:33:05.798493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:33:05.801230 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 02:33:05.803833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 02:33:05.805306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 02:33:05.810791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 02:33:05.812102 systemd-modules-load[245]: Inserted module 'br_netfilter' Jun 21 02:33:05.812835 kernel: Bridge firewalling registered Jun 21 02:33:05.816711 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 02:33:05.818145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 02:33:05.820269 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 02:33:05.822096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:33:05.833281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 02:33:05.835964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 02:33:05.839871 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 02:33:05.841290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 02:33:05.843101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:33:05.846943 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 21 02:33:05.850060 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb99487be08e9decec94bac26681ba79a4365c210ec86e0c6fe47991cb7f77db Jun 21 02:33:05.884508 systemd-resolved[296]: Positive Trust Anchors: Jun 21 02:33:05.884529 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 02:33:05.884566 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 02:33:05.889323 systemd-resolved[296]: Defaulting to hostname 'linux'. Jun 21 02:33:05.890251 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 02:33:05.891660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:33:05.927806 kernel: SCSI subsystem initialized Jun 21 02:33:05.931793 kernel: Loading iSCSI transport class v2.0-870. Jun 21 02:33:05.939810 kernel: iscsi: registered transport (tcp) Jun 21 02:33:05.952010 kernel: iscsi: registered transport (qla4xxx) Jun 21 02:33:05.952036 kernel: QLogic iSCSI HBA Driver Jun 21 02:33:05.969984 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 02:33:05.992405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:33:05.994354 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 02:33:06.037422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 02:33:06.039622 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 02:33:06.102798 kernel: raid6: neonx8 gen() 15777 MB/s Jun 21 02:33:06.119796 kernel: raid6: neonx4 gen() 15830 MB/s Jun 21 02:33:06.136788 kernel: raid6: neonx2 gen() 13212 MB/s Jun 21 02:33:06.153788 kernel: raid6: neonx1 gen() 10546 MB/s Jun 21 02:33:06.170791 kernel: raid6: int64x8 gen() 6892 MB/s Jun 21 02:33:06.187787 kernel: raid6: int64x4 gen() 7350 MB/s Jun 21 02:33:06.204788 kernel: raid6: int64x2 gen() 6096 MB/s Jun 21 02:33:06.221791 kernel: raid6: int64x1 gen() 5044 MB/s Jun 21 02:33:06.221805 kernel: raid6: using algorithm neonx4 gen() 15830 MB/s Jun 21 02:33:06.238798 kernel: raid6: .... xor() 12310 MB/s, rmw enabled Jun 21 02:33:06.238810 kernel: raid6: using neon recovery algorithm Jun 21 02:33:06.243792 kernel: xor: measuring software checksum speed Jun 21 02:33:06.243815 kernel: 8regs : 21607 MB/sec Jun 21 02:33:06.245204 kernel: 32regs : 19850 MB/sec Jun 21 02:33:06.245218 kernel: arm64_neon : 28128 MB/sec Jun 21 02:33:06.245229 kernel: xor: using function: arm64_neon (28128 MB/sec) Jun 21 02:33:06.297802 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 02:33:06.304491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 21 02:33:06.306726 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:33:06.339295 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jun 21 02:33:06.345767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:33:06.347447 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 02:33:06.374219 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jun 21 02:33:06.397184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 02:33:06.399285 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:33:06.454938 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:33:06.456810 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 02:33:06.506799 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 21 02:33:06.512305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:33:06.513496 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 21 02:33:06.512425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:33:06.515517 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:33:06.520086 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 02:33:06.520110 kernel: GPT:9289727 != 19775487 Jun 21 02:33:06.520123 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 02:33:06.520141 kernel: GPT:9289727 != 19775487 Jun 21 02:33:06.520149 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 02:33:06.520158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:33:06.518759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:33:06.542305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 02:33:06.543454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:33:06.551211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 02:33:06.559029 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 02:33:06.570710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 02:33:06.576574 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 02:33:06.577453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 02:33:06.579609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:33:06.581232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:33:06.582724 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:33:06.585089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 02:33:06.586509 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 02:33:06.610499 disk-uuid[590]: Primary Header is updated. Jun 21 02:33:06.610499 disk-uuid[590]: Secondary Entries is updated. Jun 21 02:33:06.610499 disk-uuid[590]: Secondary Header is updated. 
Jun 21 02:33:06.613551 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:33:06.617711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:33:07.620816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 02:33:07.623568 disk-uuid[594]: The operation has completed successfully. Jun 21 02:33:07.648725 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 02:33:07.648839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 02:33:07.673223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 02:33:07.689822 sh[610]: Success Jun 21 02:33:07.701797 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 02:33:07.701847 kernel: device-mapper: uevent: version 1.0.3 Jun 21 02:33:07.702878 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 02:33:07.711799 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 21 02:33:07.736555 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 02:33:07.739267 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 02:33:07.762013 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 02:33:07.768222 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 02:33:07.768267 kernel: BTRFS: device fsid 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 devid 1 transid 51 /dev/mapper/usr (253:0) scanned by mount (623) Jun 21 02:33:07.770384 kernel: BTRFS info (device dm-0): first mount of filesystem 750e5bb7-0e5c-4b2e-87f6-233588ea3c64 Jun 21 02:33:07.770418 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:33:07.771014 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 02:33:07.774613 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 02:33:07.775770 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:33:07.776713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 02:33:07.777521 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 02:33:07.779864 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 02:33:07.814557 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (655) Jun 21 02:33:07.814614 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:33:07.814626 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:33:07.815283 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:33:07.821732 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 02:33:07.823191 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:33:07.823697 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 02:33:07.917384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:33:07.921465 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 21 02:33:07.966879 systemd-networkd[798]: lo: Link UP Jun 21 02:33:07.967578 systemd-networkd[798]: lo: Gained carrier Jun 21 02:33:07.968573 systemd-networkd[798]: Enumeration completed Jun 21 02:33:07.968699 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:33:07.970090 systemd[1]: Reached target network.target - Network. Jun 21 02:33:07.971551 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:33:07.971555 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:33:07.971990 systemd-networkd[798]: eth0: Link UP Jun 21 02:33:07.971993 systemd-networkd[798]: eth0: Gained carrier Jun 21 02:33:07.972001 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:33:08.005827 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:33:08.015458 ignition[697]: Ignition 2.21.0 Jun 21 02:33:08.015474 ignition[697]: Stage: fetch-offline Jun 21 02:33:08.015504 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:08.018919 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:08.019168 ignition[697]: parsed url from cmdline: "" Jun 21 02:33:08.019172 ignition[697]: no config URL provided Jun 21 02:33:08.019179 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 02:33:08.019193 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jun 21 02:33:08.019215 ignition[697]: op(1): [started] loading QEMU firmware config module Jun 21 02:33:08.019221 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 21 02:33:08.024408 ignition[697]: op(1): [finished] loading QEMU firmware config module Jun 21 02:33:08.062322 ignition[697]: parsing config with SHA512: b33fa0e3ea5e7c682aee5b43343c3c318077b6fdca4c5c0e33aa3a4226a5e1da8d13ffe2bc243fb0cd5412933d44a6a24e471f11274c8ececc6f060ec7a76375 Jun 21 02:33:08.068973 unknown[697]: fetched base config from "system" Jun 21 02:33:08.068985 unknown[697]: fetched user config from "qemu" Jun 21 02:33:08.069523 ignition[697]: fetch-offline: fetch-offline passed Jun 21 02:33:08.069621 ignition[697]: Ignition finished successfully Jun 21 02:33:08.071388 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:33:08.072469 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 21 02:33:08.073283 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 02:33:08.106380 ignition[811]: Ignition 2.21.0 Jun 21 02:33:08.106397 ignition[811]: Stage: kargs Jun 21 02:33:08.106566 ignition[811]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:08.106574 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:08.108469 ignition[811]: kargs: kargs passed Jun 21 02:33:08.108522 ignition[811]: Ignition finished successfully Jun 21 02:33:08.110842 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 02:33:08.112520 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 21 02:33:08.136565 ignition[820]: Ignition 2.21.0 Jun 21 02:33:08.136585 ignition[820]: Stage: disks Jun 21 02:33:08.136734 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:08.136743 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:08.138566 ignition[820]: disks: disks passed Jun 21 02:33:08.138636 ignition[820]: Ignition finished successfully Jun 21 02:33:08.139986 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 02:33:08.141542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 02:33:08.142524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 02:33:08.143374 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:33:08.144081 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:33:08.145389 systemd[1]: Reached target basic.target - Basic System. Jun 21 02:33:08.147631 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 02:33:08.171906 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 02:33:08.176302 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 02:33:08.178221 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 02:33:08.249789 kernel: EXT4-fs (vda9): mounted filesystem 9ad072e4-7680-4e5b-adc0-72c770c20c86 r/w with ordered data mode. Quota mode: none. Jun 21 02:33:08.250218 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 02:33:08.251232 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 02:33:08.253162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 02:33:08.254485 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 02:33:08.255270 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 02:33:08.255309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 02:33:08.255332 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:33:08.268549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 02:33:08.271467 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 02:33:08.273668 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (838) Jun 21 02:33:08.274791 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:33:08.274810 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:33:08.274820 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:33:08.277910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 02:33:08.313908 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 02:33:08.317753 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Jun 21 02:33:08.321449 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 02:33:08.324411 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 02:33:08.396881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 02:33:08.398839 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jun 21 02:33:08.400120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 02:33:08.416789 kernel: BTRFS info (device vda6): last unmount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:33:08.426986 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 02:33:08.434340 ignition[951]: INFO : Ignition 2.21.0 Jun 21 02:33:08.434340 ignition[951]: INFO : Stage: mount Jun 21 02:33:08.435555 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:08.435555 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:08.437829 ignition[951]: INFO : mount: mount passed Jun 21 02:33:08.439349 ignition[951]: INFO : Ignition finished successfully Jun 21 02:33:08.440044 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 02:33:08.442878 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 02:33:08.938052 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 02:33:08.939550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 02:33:08.966089 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (964) Jun 21 02:33:08.966133 kernel: BTRFS info (device vda6): first mount of filesystem 3419b9f8-2562-4f16-b892-4960d53a6e77 Jun 21 02:33:08.966144 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 21 02:33:08.967180 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 02:33:08.969895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 02:33:09.005641 ignition[981]: INFO : Ignition 2.21.0 Jun 21 02:33:09.005641 ignition[981]: INFO : Stage: files Jun 21 02:33:09.007231 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:09.007231 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:09.008889 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Jun 21 02:33:09.008889 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 02:33:09.008889 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 02:33:09.011799 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 02:33:09.011799 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 02:33:09.011799 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 02:33:09.010913 unknown[981]: wrote ssh authorized keys file for user: core Jun 21 02:33:09.015626 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 21 02:33:09.015626 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 21 02:33:09.017085 systemd-networkd[798]: eth0: Gained IPv6LL Jun 21 02:33:09.050437 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 02:33:09.190906 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 21 02:33:09.190906 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:33:09.193669 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 21 02:33:09.202766 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jun 21 02:33:09.623834 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 02:33:10.096659 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 21 02:33:10.096659 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 02:33:10.099709 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 21 02:33:10.099709 ignition[981]: 
INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 21 02:33:10.118528 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:33:10.121874 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 02:33:10.123933 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 21 02:33:10.123933 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 21 02:33:10.123933 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 02:33:10.123933 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:33:10.123933 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 02:33:10.123933 ignition[981]: INFO : files: files passed Jun 21 02:33:10.123933 ignition[981]: INFO : Ignition finished successfully Jun 21 02:33:10.125439 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 02:33:10.127919 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 02:33:10.130921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 02:33:10.139836 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 02:33:10.140929 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jun 21 02:33:10.141209 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 02:33:10.143395 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:33:10.143395 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:33:10.145667 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 02:33:10.145949 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:33:10.147795 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 02:33:10.150903 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 02:33:10.178923 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 02:33:10.179032 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 02:33:10.180650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 02:33:10.182123 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 02:33:10.183398 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 02:33:10.184085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 02:33:10.197829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:33:10.199798 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 02:33:10.218299 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jun 21 02:33:10.219226 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:33:10.220638 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 02:33:10.221884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 02:33:10.221991 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 02:33:10.223830 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 02:33:10.225205 systemd[1]: Stopped target basic.target - Basic System. Jun 21 02:33:10.226439 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 02:33:10.227671 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 02:33:10.229073 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 02:33:10.230486 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 02:33:10.231896 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 02:33:10.233236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 02:33:10.234655 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 02:33:10.236047 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 02:33:10.237275 systemd[1]: Stopped target swap.target - Swaps. Jun 21 02:33:10.238374 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 02:33:10.238489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 02:33:10.240181 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:33:10.241559 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:33:10.242959 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 02:33:10.244338 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:33:10.245270 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 02:33:10.245382 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 02:33:10.247392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 02:33:10.247589 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 02:33:10.248867 systemd[1]: Stopped target paths.target - Path Units. Jun 21 02:33:10.250048 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 02:33:10.250691 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:33:10.251677 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 02:33:10.252747 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 02:33:10.254000 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 02:33:10.254083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 02:33:10.255541 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 02:33:10.255624 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 02:33:10.256712 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 02:33:10.256833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 02:33:10.258055 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 02:33:10.258149 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jun 21 02:33:10.259961 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 02:33:10.261495 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 02:33:10.262769 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 02:33:10.262884 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:33:10.264423 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 02:33:10.264516 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 02:33:10.268814 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 02:33:10.277893 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 02:33:10.285610 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 02:33:10.289426 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 02:33:10.289509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 02:33:10.291370 ignition[1037]: INFO : Ignition 2.21.0 Jun 21 02:33:10.291370 ignition[1037]: INFO : Stage: umount Jun 21 02:33:10.291370 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 02:33:10.291370 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 02:33:10.291370 ignition[1037]: INFO : umount: umount passed Jun 21 02:33:10.291370 ignition[1037]: INFO : Ignition finished successfully Jun 21 02:33:10.292572 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 02:33:10.292682 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 02:33:10.294119 systemd[1]: Stopped target network.target - Network. Jun 21 02:33:10.295358 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 02:33:10.295413 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 02:33:10.296132 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 02:33:10.296168 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 02:33:10.296875 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 02:33:10.296920 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 02:33:10.298041 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 02:33:10.298077 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 02:33:10.299316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 02:33:10.299360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 02:33:10.300669 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 02:33:10.301847 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 02:33:10.306452 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 02:33:10.306549 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 02:33:10.309919 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 02:33:10.310131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 02:33:10.310164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:33:10.313959 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 02:33:10.316818 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jun 21 02:33:10.316936 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 02:33:10.319068 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 02:33:10.319201 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 02:33:10.320184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 02:33:10.320219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:33:10.322408 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 02:33:10.323606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 02:33:10.323661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 02:33:10.325218 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 02:33:10.325257 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 02:33:10.327226 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 02:33:10.327266 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 02:33:10.328648 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:33:10.332430 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 02:33:10.351155 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 02:33:10.351301 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:33:10.353146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 02:33:10.353203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 02:33:10.356019 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 02:33:10.356051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:33:10.357234 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 02:33:10.357275 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 02:33:10.359177 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 02:33:10.359219 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 02:33:10.361165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 02:33:10.361207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 02:33:10.363902 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 02:33:10.365213 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 02:33:10.365261 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:33:10.367612 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 02:33:10.367656 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 02:33:10.370280 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 21 02:33:10.370320 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:33:10.372695 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 02:33:10.372755 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jun 21 02:33:10.374255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 02:33:10.374293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:33:10.377048 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 02:33:10.377124 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 02:33:10.378302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 02:33:10.378369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 02:33:10.380197 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 02:33:10.382173 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 02:33:10.399879 systemd[1]: Switching root. Jun 21 02:33:10.435206 systemd-journald[244]: Journal stopped Jun 21 02:33:11.179480 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jun 21 02:33:11.179527 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 02:33:11.179539 kernel: SELinux: policy capability open_perms=1 Jun 21 02:33:11.179548 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 02:33:11.179561 kernel: SELinux: policy capability always_check_network=0 Jun 21 02:33:11.179570 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 02:33:11.179580 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 02:33:11.179605 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 02:33:11.179617 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 02:33:11.179628 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 02:33:11.179638 kernel: audit: type=1403 audit(1750473190.593:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 02:33:11.179650 systemd[1]: Successfully loaded SELinux policy in 34.390ms. Jun 21 02:33:11.179670 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.219ms. Jun 21 02:33:11.179682 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 02:33:11.179693 systemd[1]: Detected virtualization kvm. Jun 21 02:33:11.179703 systemd[1]: Detected architecture arm64. Jun 21 02:33:11.179713 systemd[1]: Detected first boot. Jun 21 02:33:11.179723 systemd[1]: Initializing machine ID from VM UUID. Jun 21 02:33:11.179734 zram_generator::config[1081]: No configuration found. Jun 21 02:33:11.179745 kernel: NET: Registered PF_VSOCK protocol family Jun 21 02:33:11.179755 systemd[1]: Populated /etc with preset unit settings. Jun 21 02:33:11.179827 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 02:33:11.179842 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 02:33:11.179858 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 02:33:11.179869 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 02:33:11.179879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 02:33:11.179890 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 02:33:11.179901 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jun 21 02:33:11.179912 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 02:33:11.179924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 02:33:11.179935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 02:33:11.179948 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 02:33:11.179958 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 02:33:11.179968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 02:33:11.179979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 02:33:11.179990 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 02:33:11.180000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 02:33:11.180012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 02:33:11.180023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 02:33:11.180033 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 21 02:33:11.180044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 02:33:11.180054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 02:33:11.180064 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 02:33:11.180075 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 02:33:11.180086 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 02:33:11.180096 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 02:33:11.180106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 02:33:11.180116 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 02:33:11.180126 systemd[1]: Reached target slices.target - Slice Units. Jun 21 02:33:11.180136 systemd[1]: Reached target swap.target - Swaps. Jun 21 02:33:11.180146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 02:33:11.180156 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 02:33:11.180166 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 02:33:11.180176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 02:33:11.180188 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 02:33:11.180198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 02:33:11.180208 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 02:33:11.180218 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 02:33:11.180228 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 02:33:11.180238 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 02:33:11.180248 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 02:33:11.180258 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 02:33:11.180273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 21 02:33:11.180285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 02:33:11.180296 systemd[1]: Reached target machines.target - Containers. Jun 21 02:33:11.180307 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 02:33:11.180317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:33:11.180328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 02:33:11.180338 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 02:33:11.180347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:33:11.180358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:33:11.180369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:33:11.180379 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 02:33:11.180389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:33:11.180400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 02:33:11.180410 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 02:33:11.180420 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 02:33:11.180430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 02:33:11.180441 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 02:33:11.180453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:33:11.180463 kernel: loop: module loaded Jun 21 02:33:11.180473 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 02:33:11.180483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 02:33:11.180494 kernel: fuse: init (API version 7.41) Jun 21 02:33:11.180504 kernel: ACPI: bus type drm_connector registered Jun 21 02:33:11.180514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 02:33:11.180525 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 02:33:11.180535 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 02:33:11.180547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 02:33:11.180558 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 02:33:11.180569 systemd[1]: Stopped verity-setup.service. Jun 21 02:33:11.180579 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 02:33:11.180597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 02:33:11.180611 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 02:33:11.180646 systemd-journald[1149]: Collecting audit messages is disabled. Jun 21 02:33:11.180669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 21 02:33:11.180680 systemd-journald[1149]: Journal started Jun 21 02:33:11.180701 systemd-journald[1149]: Runtime Journal (/run/log/journal/31da89b04db14f7ca20df2bd8b5bcf94) is 6M, max 48.5M, 42.4M free. Jun 21 02:33:10.970229 systemd[1]: Queued start job for default target multi-user.target. Jun 21 02:33:10.996650 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 02:33:10.997033 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 02:33:11.182269 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 02:33:11.182980 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 02:33:11.183926 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 02:33:11.185915 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 02:33:11.187144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 02:33:11.188287 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 02:33:11.188458 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 02:33:11.189606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:33:11.189785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:33:11.190885 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 02:33:11.191037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:33:11.192057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:33:11.192212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:33:11.193355 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 02:33:11.195023 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 02:33:11.196065 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:33:11.196219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:33:11.197324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 02:33:11.198419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 02:33:11.199799 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 02:33:11.201836 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 02:33:11.213744 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 02:33:11.215900 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 02:33:11.217620 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 02:33:11.218540 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 02:33:11.218572 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 02:33:11.220236 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 02:33:11.227538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 02:33:11.228454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:33:11.229878 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 21 02:33:11.231648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 02:33:11.232752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:33:11.233919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 02:33:11.234731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 02:33:11.237082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 02:33:11.240496 systemd-journald[1149]: Time spent on flushing to /var/log/journal/31da89b04db14f7ca20df2bd8b5bcf94 is 24.120ms for 879 entries. Jun 21 02:33:11.240496 systemd-journald[1149]: System Journal (/var/log/journal/31da89b04db14f7ca20df2bd8b5bcf94) is 8M, max 195.6M, 187.6M free. Jun 21 02:33:11.279226 systemd-journald[1149]: Received client request to flush runtime journal. Jun 21 02:33:11.279282 kernel: loop0: detected capacity change from 0 to 203944 Jun 21 02:33:11.242261 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 02:33:11.244764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 02:33:11.247541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 02:33:11.249918 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 02:33:11.251075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 02:33:11.260894 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 02:33:11.261904 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 02:33:11.267533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 02:33:11.280320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 02:33:11.283994 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jun 21 02:33:11.284008 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jun 21 02:33:11.286208 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 02:33:11.291365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 02:33:11.293806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 02:33:11.294689 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 02:33:11.310787 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 02:33:11.322811 kernel: loop1: detected capacity change from 0 to 107312 Jun 21 02:33:11.326738 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 02:33:11.331006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 02:33:11.348807 kernel: loop2: detected capacity change from 0 to 138376 Jun 21 02:33:11.354961 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jun 21 02:33:11.354978 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jun 21 02:33:11.358961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
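The runtime journal above is capped at 48.5M and the persistent system journal at 195.6M; with no explicit configuration these ceilings come from journald's built-in size heuristics (a fraction of the backing filesystem). If fixed limits were wanted, a journald drop-in could be provisioned the same way as the other files in this log. A hedged sketch with illustrative values, not something present on this host:

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /etc/systemd/journald.conf.d/10-size-limits.conf
        mode: 0644
        contents:
          inline: |
            [Journal]
            RuntimeMaxUse=50M
            SystemMaxUse=200M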
Jun 21 02:33:11.375803 kernel: loop3: detected capacity change from 0 to 203944 Jun 21 02:33:11.383797 kernel: loop4: detected capacity change from 0 to 107312 Jun 21 02:33:11.390796 kernel: loop5: detected capacity change from 0 to 138376 Jun 21 02:33:11.397326 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 21 02:33:11.397750 (sd-merge)[1224]: Merged extensions into '/usr'. Jun 21 02:33:11.401055 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 02:33:11.401068 systemd[1]: Reloading... Jun 21 02:33:11.459798 zram_generator::config[1254]: No configuration found. Jun 21 02:33:11.532019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:33:11.546979 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 02:33:11.596331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 02:33:11.596470 systemd[1]: Reloading finished in 194 ms. Jun 21 02:33:11.625812 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 02:33:11.627015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 02:33:11.641313 systemd[1]: Starting ensure-sysext.service... Jun 21 02:33:11.642947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 02:33:11.657638 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jun 21 02:33:11.657654 systemd[1]: Reloading... Jun 21 02:33:11.663348 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 02:33:11.663380 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 02:33:11.663620 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 02:33:11.663836 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 02:33:11.664423 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 02:33:11.664637 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jun 21 02:33:11.664687 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jun 21 02:33:11.671143 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 02:33:11.671156 systemd-tmpfiles[1286]: Skipping /boot Jun 21 02:33:11.680205 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 02:33:11.680222 systemd-tmpfiles[1286]: Skipping /boot Jun 21 02:33:11.710804 zram_generator::config[1313]: No configuration found. Jun 21 02:33:11.779249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:33:11.842006 systemd[1]: Reloading finished in 184 ms. Jun 21 02:33:11.867330 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
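The (sd-merge) lines show systemd-sysext overlaying three extension images onto /usr: the containerd-flatcar and docker-flatcar images shipped with Flatcar plus the kubernetes image that Ignition downloaded and linked under /etc/extensions earlier in this log (the loop device capacity changes just above are most likely those images being attached). The earlier grep warnings about /etc/flatcar/enabled-sysext.conf refer to Flatcar's optional mechanism for enabling additional built-in extensions by name; a hedged sketch, assuming one wanted to enable such an extension (the name "zfs" is only an example, not something this host uses):

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /etc/flatcar/enabled-sysext.conf
        mode: 0644
        contents:
          inline: |
            zfs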
Jun 21 02:33:11.872638 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 02:33:11.885723 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 02:33:11.887829 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 02:33:11.889750 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 02:33:11.893892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 02:33:11.896222 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 02:33:11.901034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 02:33:11.907816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:33:11.919207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:33:11.921182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:33:11.924204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:33:11.925458 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:33:11.925597 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:33:11.927965 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 02:33:11.930712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 02:33:11.932852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:33:11.934177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:33:11.936207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:33:11.939971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:33:11.941529 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:33:11.943269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:33:11.951188 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 02:33:11.952628 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jun 21 02:33:11.953856 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 02:33:11.958822 augenrules[1383]: No rules Jun 21 02:33:11.959618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:33:11.961004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:33:11.965992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:33:11.975490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 02:33:11.976369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 21 02:33:11.976489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:33:11.977805 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 02:33:11.978573 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 02:33:11.979526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 02:33:11.980944 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 02:33:11.982255 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 02:33:11.983814 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 02:33:11.985009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:33:11.985153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:33:11.986518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:33:11.986686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:33:11.997551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 02:33:11.998397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 02:33:11.999580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 02:33:12.003002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 02:33:12.006134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 02:33:12.007073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 02:33:12.007191 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 02:33:12.008813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 02:33:12.010143 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 02:33:12.012518 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 02:33:12.013610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 02:33:12.014940 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 02:33:12.016107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 02:33:12.016255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 02:33:12.020748 systemd[1]: Finished ensure-sysext.service. Jun 21 02:33:12.024704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 02:33:12.026449 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 02:33:12.030291 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 21 02:33:12.031813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 02:33:12.032950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 02:33:12.033114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 02:33:12.035212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 02:33:12.038014 augenrules[1428]: /sbin/augenrules: No change Jun 21 02:33:12.046680 augenrules[1458]: No rules Jun 21 02:33:12.048205 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 02:33:12.048398 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 02:33:12.064601 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 21 02:33:12.111637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 02:33:12.115798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 02:33:12.154103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 02:33:12.193669 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 02:33:12.196046 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 02:33:12.207707 systemd-resolved[1353]: Positive Trust Anchors: Jun 21 02:33:12.207725 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 02:33:12.207757 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 02:33:12.216908 systemd-networkd[1433]: lo: Link UP Jun 21 02:33:12.216916 systemd-networkd[1433]: lo: Gained carrier Jun 21 02:33:12.217730 systemd-networkd[1433]: Enumeration completed Jun 21 02:33:12.218246 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:33:12.218252 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 02:33:12.218765 systemd-networkd[1433]: eth0: Link UP Jun 21 02:33:12.218901 systemd-networkd[1433]: eth0: Gained carrier Jun 21 02:33:12.218919 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 02:33:12.219467 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 02:33:12.220712 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jun 21 02:33:12.223859 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 02:33:12.225849 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 02:33:12.227653 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
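systemd-networkd matched eth0 against the catch-all /usr/lib/systemd/network/zz-default.network shipped with Flatcar, which is why the interface comes up with DHCP (the DHCPv4 lease for 10.0.0.140/16 appears just below). A hedged Butane sketch of an explicit per-interface unit, in case deterministic configuration is preferred over the default match; the file name and addresses are illustrative:

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /etc/systemd/network/10-eth0.network
        mode: 0644
        contents:
          inline: |
            [Match]
            Name=eth0

            [Network]
            DHCP=yes
            # or, instead of DHCP, a static setup:
            # Address=10.0.0.140/16
            # Gateway=10.0.0.1

Because .network files are applied in lexical order across /etc and /usr/lib, a 10- prefixed unit in /etc takes effect for eth0 ahead of zz-default.network.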
Jun 21 02:33:12.228709 systemd[1]: Reached target network.target - Network. Jun 21 02:33:12.229636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 02:33:12.230514 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 02:33:12.231479 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 02:33:12.231835 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 02:33:12.232683 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Jun 21 02:33:12.233126 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 02:33:12.234645 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 02:33:11.825330 systemd-timesyncd[1443]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 21 02:33:11.834187 systemd-journald[1149]: Time jumped backwards, rotating. Jun 21 02:33:11.825378 systemd-timesyncd[1443]: Initial clock synchronization to Sat 2025-06-21 02:33:11.825252 UTC. Jun 21 02:33:11.825656 systemd-resolved[1353]: Clock change detected. Flushing caches. Jun 21 02:33:11.827463 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 02:33:11.828441 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 02:33:11.829440 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 02:33:11.829468 systemd[1]: Reached target paths.target - Path Units. Jun 21 02:33:11.830234 systemd[1]: Reached target timers.target - Timer Units. Jun 21 02:33:11.831999 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 02:33:11.834351 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 02:33:11.839975 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 02:33:11.841886 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 02:33:11.843904 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 02:33:11.853628 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 02:33:11.855514 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 02:33:11.857419 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 02:33:11.864708 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 02:33:11.865466 systemd[1]: Reached target basic.target - Basic System. Jun 21 02:33:11.866184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 02:33:11.866215 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 02:33:11.869693 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 02:33:11.871536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 02:33:11.874990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 02:33:11.876741 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 02:33:11.879978 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jun 21 02:33:11.880745 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 02:33:11.881741 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 02:33:11.883398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 02:33:11.885036 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 02:33:11.886131 jq[1497]: false Jun 21 02:33:11.888885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 02:33:11.892969 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 02:33:11.894766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 02:33:11.896481 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 02:33:11.896924 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 02:33:11.898987 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 02:33:11.900524 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 02:33:11.902989 extend-filesystems[1498]: Found /dev/vda6 Jun 21 02:33:11.903874 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 02:33:11.907237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 02:33:11.908440 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 02:33:11.908627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 02:33:11.910428 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 02:33:11.911831 jq[1513]: true Jun 21 02:33:11.910621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 02:33:11.912233 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 02:33:11.912422 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 02:33:11.917280 extend-filesystems[1498]: Found /dev/vda9 Jun 21 02:33:11.924004 extend-filesystems[1498]: Checking size of /dev/vda9 Jun 21 02:33:11.934541 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 02:33:11.936896 jq[1522]: true Jun 21 02:33:11.945432 extend-filesystems[1498]: Resized partition /dev/vda9 Jun 21 02:33:11.947297 tar[1521]: linux-arm64/helm Jun 21 02:33:11.953921 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 02:33:11.957928 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 21 02:33:11.970997 dbus-daemon[1495]: [system] SELinux support is enabled Jun 21 02:33:11.973113 update_engine[1512]: I20250621 02:33:11.972523 1512 main.cc:92] Flatcar Update Engine starting Jun 21 02:33:11.971184 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 02:33:11.975313 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
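The on-line resize recorded here (started just above, completed below) grows the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 553472 × 4096 B ≈ 2.1 GiB to 1864699 × 4096 B ≈ 7.1 GiB, the usual Flatcar first-boot step of expanding the root filesystem to fill the disk.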
Jun 21 02:33:11.975356 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 02:33:11.977667 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 02:33:11.977686 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 02:33:11.987427 systemd[1]: Started update-engine.service - Update Engine. Jun 21 02:33:11.989443 update_engine[1512]: I20250621 02:33:11.987906 1512 update_check_scheduler.cc:74] Next update check in 10m4s Jun 21 02:33:11.991163 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 02:33:11.992366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 02:33:11.999458 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 21 02:33:12.018288 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 02:33:12.018288 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 02:33:12.018288 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 21 02:33:12.021082 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Jun 21 02:33:12.019813 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 02:33:12.020028 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 02:33:12.044292 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Jun 21 02:33:12.045247 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button) Jun 21 02:33:12.046848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 02:33:12.048275 systemd-logind[1507]: New seat seat0. Jun 21 02:33:12.056551 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 02:33:12.059915 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
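update-engine and locksmithd are started here; the /etc/flatcar/update.conf that Ignition wrote earlier (op 8) is where the release group and the reboot behaviour after an update are configured, and locksmithd reports its strategy ("reboot") just below. A hedged sketch of such a file, with illustrative values only, since the contents actually written on this host are not shown in the log:

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /etc/flatcar/update.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            GROUP=stable
            REBOOT_STRATEGY=reboot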
Jun 21 02:33:12.082413 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 02:33:12.176717 containerd[1524]: time="2025-06-21T02:33:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 02:33:12.181032 containerd[1524]: time="2025-06-21T02:33:12.180985478Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 02:33:12.192232 containerd[1524]: time="2025-06-21T02:33:12.192194598Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs" Jun 21 02:33:12.192232 containerd[1524]: time="2025-06-21T02:33:12.192227838Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 02:33:12.192302 containerd[1524]: time="2025-06-21T02:33:12.192246638Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 02:33:12.192407 containerd[1524]: time="2025-06-21T02:33:12.192387038Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 02:33:12.192443 containerd[1524]: time="2025-06-21T02:33:12.192408918Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 02:33:12.192443 containerd[1524]: time="2025-06-21T02:33:12.192431198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192527 containerd[1524]: time="2025-06-21T02:33:12.192480518Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192527 containerd[1524]: time="2025-06-21T02:33:12.192495678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192750 containerd[1524]: time="2025-06-21T02:33:12.192726198Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192750 containerd[1524]: time="2025-06-21T02:33:12.192747838Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192800 containerd[1524]: time="2025-06-21T02:33:12.192759518Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192800 containerd[1524]: time="2025-06-21T02:33:12.192767078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 02:33:12.192870 containerd[1524]: time="2025-06-21T02:33:12.192851478Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 02:33:12.193063 containerd[1524]: time="2025-06-21T02:33:12.193044558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 02:33:12.193094 containerd[1524]: time="2025-06-21T02:33:12.193078678Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 02:33:12.193094 containerd[1524]: time="2025-06-21T02:33:12.193089838Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 02:33:12.193132 containerd[1524]: time="2025-06-21T02:33:12.193117678Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 02:33:12.193330 containerd[1524]: time="2025-06-21T02:33:12.193313998Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 02:33:12.193387 containerd[1524]: time="2025-06-21T02:33:12.193372038Z" level=info msg="metadata content store policy set" policy=shared Jun 21 02:33:12.196711 containerd[1524]: time="2025-06-21T02:33:12.196681318Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 02:33:12.196711 containerd[1524]: time="2025-06-21T02:33:12.196726798Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 02:33:12.196791 containerd[1524]: time="2025-06-21T02:33:12.196739478Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 02:33:12.196791 containerd[1524]: time="2025-06-21T02:33:12.196750718Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 02:33:12.196791 containerd[1524]: time="2025-06-21T02:33:12.196762078Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 02:33:12.196791 containerd[1524]: time="2025-06-21T02:33:12.196771998Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 02:33:12.196791 containerd[1524]: time="2025-06-21T02:33:12.196789878Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 02:33:12.196894 containerd[1524]: time="2025-06-21T02:33:12.196802878Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 02:33:12.196894 containerd[1524]: time="2025-06-21T02:33:12.196814558Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 02:33:12.196894 containerd[1524]: time="2025-06-21T02:33:12.196825318Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 02:33:12.196894 containerd[1524]: time="2025-06-21T02:33:12.196845318Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 02:33:12.196894 containerd[1524]: time="2025-06-21T02:33:12.196858798Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.196970638Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.196997318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197013038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 
02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197023478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197033878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197044118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197056198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197066558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197077358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197087798Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 02:33:12.197098 containerd[1524]: time="2025-06-21T02:33:12.197098438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 02:33:12.197535 containerd[1524]: time="2025-06-21T02:33:12.197283198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 02:33:12.197535 containerd[1524]: time="2025-06-21T02:33:12.197297438Z" level=info msg="Start snapshots syncer" Jun 21 02:33:12.197535 containerd[1524]: time="2025-06-21T02:33:12.197320758Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 02:33:12.197589 containerd[1524]: time="2025-06-21T02:33:12.197540678Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197602438Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197690918Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197793838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197814718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197824958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197852718Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197865678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197876038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197888278Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197913918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: 
time="2025-06-21T02:33:12.197926358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197936358Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197966798Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 02:33:12.198035 containerd[1524]: time="2025-06-21T02:33:12.197979358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.197987798Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.197997078Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198004038Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198014478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198025678Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198099638Z" level=info msg="runtime interface created" Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198104758Z" level=info msg="created NRI interface" Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198111838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198121798Z" level=info msg="Connect containerd service" Jun 21 02:33:12.198269 containerd[1524]: time="2025-06-21T02:33:12.198151118Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 02:33:12.198901 containerd[1524]: time="2025-06-21T02:33:12.198772558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 02:33:12.298534 containerd[1524]: time="2025-06-21T02:33:12.298433358Z" level=info msg="Start subscribing containerd event" Jun 21 02:33:12.298794 containerd[1524]: time="2025-06-21T02:33:12.298775638Z" level=info msg="Start recovering state" Jun 21 02:33:12.299204 containerd[1524]: time="2025-06-21T02:33:12.299162118Z" level=info msg="Start event monitor" Jun 21 02:33:12.299204 containerd[1524]: time="2025-06-21T02:33:12.299182918Z" level=info msg="Start cni network conf syncer for default" Jun 21 02:33:12.299509 containerd[1524]: time="2025-06-21T02:33:12.299191918Z" level=info msg="Start streaming server" Jun 21 02:33:12.299509 containerd[1524]: time="2025-06-21T02:33:12.299450958Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 02:33:12.299509 containerd[1524]: 
time="2025-06-21T02:33:12.299463438Z" level=info msg="runtime interface starting up..." Jun 21 02:33:12.299509 containerd[1524]: time="2025-06-21T02:33:12.299469558Z" level=info msg="starting plugins..." Jun 21 02:33:12.299509 containerd[1524]: time="2025-06-21T02:33:12.299488278Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 02:33:12.299925 containerd[1524]: time="2025-06-21T02:33:12.299700518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 02:33:12.299925 containerd[1524]: time="2025-06-21T02:33:12.299900838Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 02:33:12.300066 containerd[1524]: time="2025-06-21T02:33:12.300040558Z" level=info msg="containerd successfully booted in 0.123669s" Jun 21 02:33:12.300136 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 02:33:12.359223 tar[1521]: linux-arm64/LICENSE Jun 21 02:33:12.359223 tar[1521]: linux-arm64/README.md Jun 21 02:33:12.372938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 02:33:12.633764 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 02:33:12.653318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 02:33:12.656058 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 02:33:12.671355 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 02:33:12.671580 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 02:33:12.674022 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 02:33:12.693585 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 02:33:12.696163 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 02:33:12.697997 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 21 02:33:12.699127 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 02:33:12.957040 systemd-networkd[1433]: eth0: Gained IPv6LL Jun 21 02:33:12.959434 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 02:33:12.960800 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 02:33:12.962945 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 21 02:33:12.965077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:12.973397 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 02:33:12.987659 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 21 02:33:12.988098 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 21 02:33:12.989480 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 02:33:12.998768 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 02:33:13.538226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:13.539572 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 02:33:13.543327 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 02:33:13.543943 systemd[1]: Startup finished in 2.091s (kernel) + 4.984s (initrd) + 3.398s (userspace) = 10.474s. 
Jun 21 02:33:13.994499 kubelet[1635]: E0621 02:33:13.994377 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 02:33:13.996663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 02:33:13.996809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 02:33:13.997135 systemd[1]: kubelet.service: Consumed 850ms CPU time, 257.8M memory peak. Jun 21 02:33:18.495235 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 02:33:18.496338 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:33102.service - OpenSSH per-connection server daemon (10.0.0.1:33102). Jun 21 02:33:18.589370 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 33102 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:18.591444 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:18.610221 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 02:33:18.611094 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 02:33:18.616528 systemd-logind[1507]: New session 1 of user core. Jun 21 02:33:18.631112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 02:33:18.633663 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 02:33:18.661036 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 02:33:18.663250 systemd-logind[1507]: New session c1 of user core. Jun 21 02:33:18.777711 systemd[1652]: Queued start job for default target default.target. Jun 21 02:33:18.784733 systemd[1652]: Created slice app.slice - User Application Slice. Jun 21 02:33:18.784763 systemd[1652]: Reached target paths.target - Paths. Jun 21 02:33:18.784800 systemd[1652]: Reached target timers.target - Timers. Jun 21 02:33:18.786003 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 02:33:18.795151 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 02:33:18.795213 systemd[1652]: Reached target sockets.target - Sockets. Jun 21 02:33:18.795250 systemd[1652]: Reached target basic.target - Basic System. Jun 21 02:33:18.795278 systemd[1652]: Reached target default.target - Main User Target. Jun 21 02:33:18.795303 systemd[1652]: Startup finished in 126ms. Jun 21 02:33:18.795441 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 02:33:18.797215 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 02:33:18.855177 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:33106.service - OpenSSH per-connection server daemon (10.0.0.1:33106). Jun 21 02:33:18.913544 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 33106 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:18.915073 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:18.920103 systemd-logind[1507]: New session 2 of user core. Jun 21 02:33:18.939040 systemd[1]: Started session-2.scope - Session 2 of User core. 
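
The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a node bootstrapped this way the file is normally written by kubeadm, so this failure (and the restarts that follow later in the log) is expected until then. Purely to show the shape of the file it is looking for, a sketch with assumed values:

    from pathlib import Path
    from textwrap import dedent

    # Illustrative only: kubeadm normally generates this file. Every value
    # below is an assumption, not taken from this node.
    config = dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        staticPodPath: /etc/kubernetes/manifests
        authentication:
          anonymous:
            enabled: false
    """)

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)
    print(f"wrote {path}")
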
Jun 21 02:33:18.990990 sshd[1665]: Connection closed by 10.0.0.1 port 33106 Jun 21 02:33:18.991453 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:19.006955 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:33106.service: Deactivated successfully. Jun 21 02:33:19.008786 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 02:33:19.011873 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Jun 21 02:33:19.014078 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:33114.service - OpenSSH per-connection server daemon (10.0.0.1:33114). Jun 21 02:33:19.015286 systemd-logind[1507]: Removed session 2. Jun 21 02:33:19.069537 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 33114 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:19.071155 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:19.076732 systemd-logind[1507]: New session 3 of user core. Jun 21 02:33:19.088016 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 02:33:19.137914 sshd[1673]: Connection closed by 10.0.0.1 port 33114 Jun 21 02:33:19.138415 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:19.148439 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:33114.service: Deactivated successfully. Jun 21 02:33:19.151274 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 02:33:19.152339 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Jun 21 02:33:19.156262 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). Jun 21 02:33:19.157086 systemd-logind[1507]: Removed session 3. Jun 21 02:33:19.210652 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:19.212053 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:19.216903 systemd-logind[1507]: New session 4 of user core. Jun 21 02:33:19.232056 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 02:33:19.285191 sshd[1681]: Connection closed by 10.0.0.1 port 33130 Jun 21 02:33:19.285681 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:19.293013 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:33130.service: Deactivated successfully. Jun 21 02:33:19.296220 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 02:33:19.297189 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. Jun 21 02:33:19.299452 systemd-logind[1507]: Removed session 4. Jun 21 02:33:19.302264 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:33144.service - OpenSSH per-connection server daemon (10.0.0.1:33144). Jun 21 02:33:19.354257 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 33144 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:19.355696 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:19.360940 systemd-logind[1507]: New session 5 of user core. Jun 21 02:33:19.371042 systemd[1]: Started session-5.scope - Session 5 of User core. 
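
Each accepted login above is recorded with the client key's OpenSSH fingerprint ("RSA SHA256:cK5A..."): the SHA-256 of the base64-decoded key blob, re-encoded as base64 with the padding stripped. A small sketch that reproduces the format from any public-key line; the host-key path used as input is just an assumption, any authorized_keys entry works the same way:

    import base64
    import hashlib
    from pathlib import Path

    def ssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys / .pub format: "<type> <base64-blob> [comment]"
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        # OpenSSH prints the digest as unpadded base64 behind "SHA256:"
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    line = Path("/etc/ssh/ssh_host_rsa_key.pub").read_text().strip()
    print(ssh_fingerprint(line))
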
Jun 21 02:33:19.433409 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 02:33:19.433701 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 02:33:19.447757 sudo[1690]: pam_unix(sudo:session): session closed for user root Jun 21 02:33:19.450771 sshd[1689]: Connection closed by 10.0.0.1 port 33144 Jun 21 02:33:19.451200 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:19.471134 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:33144.service: Deactivated successfully. Jun 21 02:33:19.473310 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 02:33:19.474266 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Jun 21 02:33:19.477144 systemd-logind[1507]: Removed session 5. Jun 21 02:33:19.480163 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:33148.service - OpenSSH per-connection server daemon (10.0.0.1:33148). Jun 21 02:33:19.535797 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 33148 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:19.537292 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:19.541355 systemd-logind[1507]: New session 6 of user core. Jun 21 02:33:19.548063 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 02:33:19.599445 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 02:33:19.599736 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 02:33:19.676763 sudo[1700]: pam_unix(sudo:session): session closed for user root Jun 21 02:33:19.682154 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 02:33:19.682444 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 02:33:19.692686 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 02:33:19.733514 augenrules[1722]: No rules Jun 21 02:33:19.734786 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 02:33:19.735023 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 02:33:19.736511 sudo[1699]: pam_unix(sudo:session): session closed for user root Jun 21 02:33:19.738984 sshd[1698]: Connection closed by 10.0.0.1 port 33148 Jun 21 02:33:19.738365 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:19.745333 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:33148.service: Deactivated successfully. Jun 21 02:33:19.747017 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 02:33:19.747917 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Jun 21 02:33:19.751085 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:33164.service - OpenSSH per-connection server daemon (10.0.0.1:33164). Jun 21 02:33:19.751957 systemd-logind[1507]: Removed session 6. Jun 21 02:33:19.808469 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 33164 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:33:19.809904 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:33:19.813926 systemd-logind[1507]: New session 7 of user core. Jun 21 02:33:19.826028 systemd[1]: Started session-7.scope - Session 7 of User core. 
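
The two sudo commands above delete the shipped audit rule files and restart audit-rules.service, after which augenrules finds nothing to load ("No rules"). augenrules assembles every *.rules file under /etc/audit/rules.d/ into the active ruleset, so dropping a file there and restarting the service would bring rules back; the single watch rule below is only an example:

    from pathlib import Path

    # Illustrative only: one watch rule (log writes and attribute changes to
    # sshd_config under the key "sshd_config"). augenrules merges all files in
    # this directory when audit-rules.service runs.
    rule = "-w /etc/ssh/sshd_config -p wa -k sshd_config\n"

    path = Path("/etc/audit/rules.d/90-example.rules")
    path.write_text(rule)
    print(f"wrote {path}; reload with: systemctl restart audit-rules")
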
Jun 21 02:33:19.876221 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 02:33:19.876489 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 02:33:20.336166 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 02:33:20.353203 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 02:33:20.657221 dockerd[1754]: time="2025-06-21T02:33:20.657096558Z" level=info msg="Starting up" Jun 21 02:33:20.658105 dockerd[1754]: time="2025-06-21T02:33:20.658069998Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 02:33:20.698826 dockerd[1754]: time="2025-06-21T02:33:20.698780758Z" level=info msg="Loading containers: start." Jun 21 02:33:20.708248 kernel: Initializing XFRM netlink socket Jun 21 02:33:20.919963 systemd-networkd[1433]: docker0: Link UP Jun 21 02:33:20.923225 dockerd[1754]: time="2025-06-21T02:33:20.923092278Z" level=info msg="Loading containers: done." Jun 21 02:33:20.940775 dockerd[1754]: time="2025-06-21T02:33:20.940721438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 02:33:20.940921 dockerd[1754]: time="2025-06-21T02:33:20.940813598Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 02:33:20.940948 dockerd[1754]: time="2025-06-21T02:33:20.940940798Z" level=info msg="Initializing buildkit" Jun 21 02:33:20.963476 dockerd[1754]: time="2025-06-21T02:33:20.963437078Z" level=info msg="Completed buildkit initialization" Jun 21 02:33:20.968330 dockerd[1754]: time="2025-06-21T02:33:20.968291998Z" level=info msg="Daemon has completed initialization" Jun 21 02:33:20.968540 dockerd[1754]: time="2025-06-21T02:33:20.968374478Z" level=info msg="API listen on /run/docker.sock" Jun 21 02:33:20.968559 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 02:33:21.528165 containerd[1524]: time="2025-06-21T02:33:21.528103918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 21 02:33:22.136379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675528383.mount: Deactivated successfully. 
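
dockerd's "API listen on /run/docker.sock" means the Engine API is now being served as plain HTTP over that unix socket. A quick standard-library probe of it, as a sketch; it needs permission on the socket (root or membership in the docker group):

    import json
    import socket

    # HTTP/1.0 so the daemon closes the connection and we can read to EOF.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/run/docker.sock")
        sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := sock.recv(4096):
            raw += chunk

    headers, _, body = raw.partition(b"\r\n\r\n")
    info = json.loads(body)
    print(info["Version"], info["ApiVersion"])  # daemon version and API version
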
Jun 21 02:33:23.002647 containerd[1524]: time="2025-06-21T02:33:23.002558958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.003045 containerd[1524]: time="2025-06-21T02:33:23.002894358Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jun 21 02:33:23.003913 containerd[1524]: time="2025-06-21T02:33:23.003877998Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.006338 containerd[1524]: time="2025-06-21T02:33:23.006272598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.007491 containerd[1524]: time="2025-06-21T02:33:23.007322798Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.47917508s" Jun 21 02:33:23.007491 containerd[1524]: time="2025-06-21T02:33:23.007366238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jun 21 02:33:23.010342 containerd[1524]: time="2025-06-21T02:33:23.010308958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 21 02:33:23.964386 containerd[1524]: time="2025-06-21T02:33:23.964329158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.965073 containerd[1524]: time="2025-06-21T02:33:23.965048718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jun 21 02:33:23.965766 containerd[1524]: time="2025-06-21T02:33:23.965739318Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.968373 containerd[1524]: time="2025-06-21T02:33:23.968346518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:23.969388 containerd[1524]: time="2025-06-21T02:33:23.969356798Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 959.00572ms" Jun 21 02:33:23.969486 containerd[1524]: time="2025-06-21T02:33:23.969461798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jun 21 
02:33:23.970126 containerd[1524]: time="2025-06-21T02:33:23.969962478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 21 02:33:24.247186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 02:33:24.248716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:24.397682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:24.401761 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 02:33:24.440388 kubelet[2028]: E0621 02:33:24.440322 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 02:33:24.443573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 02:33:24.443722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 02:33:24.444040 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108M memory peak. Jun 21 02:33:25.156417 containerd[1524]: time="2025-06-21T02:33:25.156368238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:25.158220 containerd[1524]: time="2025-06-21T02:33:25.158124518Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jun 21 02:33:25.159116 containerd[1524]: time="2025-06-21T02:33:25.159042158Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:25.162814 containerd[1524]: time="2025-06-21T02:33:25.162752318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:25.164299 containerd[1524]: time="2025-06-21T02:33:25.164255998Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.19412324s" Jun 21 02:33:25.164299 containerd[1524]: time="2025-06-21T02:33:25.164298518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jun 21 02:33:25.164893 containerd[1524]: time="2025-06-21T02:33:25.164859398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 21 02:33:26.071350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682051824.mount: Deactivated successfully. 
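
Each pull above logs the bytes transferred ("bytes read"), the resulting image size, and the wall-clock duration, so a rough transfer rate falls out directly. Taking the kube-scheduler pull just logged, and treating "bytes read" as the bytes fetched from the registry (an assumption about that field's meaning):

    # Numbers copied from the kube-scheduler pull entries above.
    bytes_read = 17_125_068      # "stop pulling image ... bytes read=17125068"
    duration_s = 1.19412324      # "... in 1.19412324s"

    rate_mib_s = bytes_read / duration_s / (1024 ** 2)
    print(f"{rate_mib_s:.1f} MiB/s")  # roughly 13.7 MiB/s for this pull
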
Jun 21 02:33:26.314346 containerd[1524]: time="2025-06-21T02:33:26.314304398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:26.314936 containerd[1524]: time="2025-06-21T02:33:26.314900038Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jun 21 02:33:26.315406 containerd[1524]: time="2025-06-21T02:33:26.315380038Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:26.319283 containerd[1524]: time="2025-06-21T02:33:26.319249518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:26.320540 containerd[1524]: time="2025-06-21T02:33:26.320322958Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.15542868s" Jun 21 02:33:26.320540 containerd[1524]: time="2025-06-21T02:33:26.320356718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jun 21 02:33:26.320791 containerd[1524]: time="2025-06-21T02:33:26.320766638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 02:33:26.831191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179161243.mount: Deactivated successfully. 
Jun 21 02:33:27.521234 containerd[1524]: time="2025-06-21T02:33:27.520920838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:27.522192 containerd[1524]: time="2025-06-21T02:33:27.522156598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jun 21 02:33:27.524143 containerd[1524]: time="2025-06-21T02:33:27.524102718Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:27.526865 containerd[1524]: time="2025-06-21T02:33:27.526837958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:27.528109 containerd[1524]: time="2025-06-21T02:33:27.528057478Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.20725272s" Jun 21 02:33:27.528567 containerd[1524]: time="2025-06-21T02:33:27.528548198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jun 21 02:33:27.529263 containerd[1524]: time="2025-06-21T02:33:27.529060478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 02:33:27.951524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328175751.mount: Deactivated successfully. 
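
Every pulled image is recorded under both a tag and a repo digest ("@sha256:..."). The digest is the SHA-256 of the image manifest, so it pins the content exactly even if the tag is later re-pointed. The computation itself, shown with a stand-in manifest rather than coredns's real one (fetching the real manifest would need a registry client):

    import hashlib
    import json

    # Stand-in manifest bytes, purely to show what the digest is computed over.
    manifest_bytes = json.dumps({"schemaVersion": 2, "layers": []}).encode()
    digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
    print(digest)  # a tag can move, but this reference cannot change content
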
Jun 21 02:33:27.956782 containerd[1524]: time="2025-06-21T02:33:27.956735918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 02:33:27.957602 containerd[1524]: time="2025-06-21T02:33:27.957553438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jun 21 02:33:27.958250 containerd[1524]: time="2025-06-21T02:33:27.958222718Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 02:33:27.960861 containerd[1524]: time="2025-06-21T02:33:27.960684238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 02:33:27.961227 containerd[1524]: time="2025-06-21T02:33:27.961203598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 432.10844ms" Jun 21 02:33:27.961305 containerd[1524]: time="2025-06-21T02:33:27.961290118Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 21 02:33:27.961794 containerd[1524]: time="2025-06-21T02:33:27.961767558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 02:33:28.538423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380651016.mount: Deactivated successfully. 
Jun 21 02:33:30.082219 containerd[1524]: time="2025-06-21T02:33:30.082168918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:30.083027 containerd[1524]: time="2025-06-21T02:33:30.082984118Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jun 21 02:33:30.083822 containerd[1524]: time="2025-06-21T02:33:30.083796078Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:30.087571 containerd[1524]: time="2025-06-21T02:33:30.087523238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:30.088638 containerd[1524]: time="2025-06-21T02:33:30.088601998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.12679808s" Jun 21 02:33:30.088693 containerd[1524]: time="2025-06-21T02:33:30.088637558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jun 21 02:33:34.694159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 21 02:33:34.696067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:34.891526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:34.907166 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 02:33:34.941944 kubelet[2188]: E0621 02:33:34.941888 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 02:33:34.944538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 02:33:34.944802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 02:33:34.946911 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.3M memory peak. Jun 21 02:33:36.559263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:36.559790 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.3M memory peak. Jun 21 02:33:36.562799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:36.581774 systemd[1]: Reload requested from client PID 2202 ('systemctl') (unit session-7.scope)... Jun 21 02:33:36.581789 systemd[1]: Reloading... Jun 21 02:33:36.657865 zram_generator::config[2246]: No configuration found. Jun 21 02:33:36.757764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:33:36.841621 systemd[1]: Reloading finished in 259 ms. 
Jun 21 02:33:36.909294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 02:33:36.909369 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 02:33:36.909670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:36.909716 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95M memory peak. Jun 21 02:33:36.911395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:37.015601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:37.018746 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 02:33:37.051033 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:33:37.051033 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 02:33:37.051033 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:33:37.051335 kubelet[2291]: I0621 02:33:37.051069 2291 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 02:33:37.978851 kubelet[2291]: I0621 02:33:37.978792 2291 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 02:33:37.979860 kubelet[2291]: I0621 02:33:37.978827 2291 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 02:33:37.979860 kubelet[2291]: I0621 02:33:37.979222 2291 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 02:33:38.038737 kubelet[2291]: E0621 02:33:38.038689 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:33:38.040493 kubelet[2291]: I0621 02:33:38.040465 2291 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 02:33:38.048925 kubelet[2291]: I0621 02:33:38.048898 2291 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 02:33:38.053192 kubelet[2291]: I0621 02:33:38.053164 2291 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 02:33:38.053982 kubelet[2291]: I0621 02:33:38.053953 2291 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 02:33:38.054131 kubelet[2291]: I0621 02:33:38.054094 2291 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 02:33:38.054496 kubelet[2291]: I0621 02:33:38.054127 2291 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 02:33:38.054594 kubelet[2291]: I0621 02:33:38.054579 2291 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 02:33:38.054623 kubelet[2291]: I0621 02:33:38.054596 2291 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 02:33:38.054932 kubelet[2291]: I0621 02:33:38.054830 2291 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:33:38.058766 kubelet[2291]: I0621 02:33:38.058717 2291 kubelet.go:408] "Attempting to sync node with API server" Jun 21 02:33:38.058766 kubelet[2291]: I0621 02:33:38.058747 2291 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 02:33:38.058766 kubelet[2291]: I0621 02:33:38.058765 2291 kubelet.go:314] "Adding apiserver pod source" Jun 21 02:33:38.059458 kubelet[2291]: I0621 02:33:38.058781 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 02:33:38.063296 kubelet[2291]: W0621 02:33:38.062628 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jun 21 02:33:38.063296 kubelet[2291]: E0621 02:33:38.062680 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:33:38.063524 kubelet[2291]: I0621 02:33:38.063499 2291 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 02:33:38.063782 kubelet[2291]: W0621 02:33:38.063728 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jun 21 02:33:38.063782 kubelet[2291]: E0621 02:33:38.063776 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:33:38.064285 kubelet[2291]: I0621 02:33:38.064261 2291 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 02:33:38.064510 kubelet[2291]: W0621 02:33:38.064491 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 02:33:38.065766 kubelet[2291]: I0621 02:33:38.065452 2291 server.go:1274] "Started kubelet" Jun 21 02:33:38.066551 kubelet[2291]: I0621 02:33:38.066500 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 02:33:38.067608 kubelet[2291]: I0621 02:33:38.067587 2291 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 02:33:38.067686 kubelet[2291]: I0621 02:33:38.067623 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 02:33:38.067780 kubelet[2291]: I0621 02:33:38.067760 2291 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 02:33:38.071699 kubelet[2291]: I0621 02:33:38.068989 2291 server.go:449] "Adding debug handlers to kubelet server" Jun 21 02:33:38.072203 kubelet[2291]: I0621 02:33:38.071776 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 02:33:38.072538 kubelet[2291]: I0621 02:33:38.072517 2291 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 02:33:38.072803 kubelet[2291]: E0621 02:33:38.072780 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 02:33:38.073087 kubelet[2291]: E0621 02:33:38.071972 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184aee1e0f9fe3ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-21 02:33:38.065425358 +0000 UTC m=+1.043836721,LastTimestamp:2025-06-21 02:33:38.065425358 +0000 UTC m=+1.043836721,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 21 02:33:38.073255 kubelet[2291]: E0621 02:33:38.073224 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Jun 21 02:33:38.073357 kubelet[2291]: I0621 02:33:38.073342 2291 reconciler.go:26] "Reconciler: start to sync state" Jun 21 02:33:38.073396 kubelet[2291]: I0621 02:33:38.073376 2291 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 02:33:38.073575 kubelet[2291]: I0621 02:33:38.073547 2291 factory.go:221] Registration of the systemd container factory successfully Jun 21 02:33:38.073734 kubelet[2291]: I0621 02:33:38.073697 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 02:33:38.074048 kubelet[2291]: E0621 02:33:38.073817 2291 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 02:33:38.074174 kubelet[2291]: W0621 02:33:38.073699 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jun 21 02:33:38.074230 kubelet[2291]: E0621 02:33:38.074186 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:33:38.075566 kubelet[2291]: I0621 02:33:38.075541 2291 factory.go:221] Registration of the containerd container factory successfully Jun 21 02:33:38.081309 kubelet[2291]: I0621 02:33:38.081186 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 02:33:38.082325 kubelet[2291]: I0621 02:33:38.082305 2291 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 02:33:38.082768 kubelet[2291]: I0621 02:33:38.082405 2291 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 02:33:38.082768 kubelet[2291]: I0621 02:33:38.082428 2291 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 02:33:38.082768 kubelet[2291]: E0621 02:33:38.082465 2291 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 02:33:38.087263 kubelet[2291]: W0621 02:33:38.087212 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jun 21 02:33:38.087323 kubelet[2291]: E0621 02:33:38.087267 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jun 21 02:33:38.089192 kubelet[2291]: I0621 02:33:38.089170 2291 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 02:33:38.089192 kubelet[2291]: I0621 02:33:38.089188 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 02:33:38.089267 kubelet[2291]: I0621 02:33:38.089206 2291 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:33:38.162173 kubelet[2291]: I0621 02:33:38.162142 2291 policy_none.go:49] "None policy: Start" Jun 21 02:33:38.163122 kubelet[2291]: I0621 02:33:38.163084 2291 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 02:33:38.163122 kubelet[2291]: I0621 02:33:38.163126 2291 state_mem.go:35] "Initializing new in-memory state store" Jun 21 02:33:38.167971 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 02:33:38.173846 kubelet[2291]: E0621 02:33:38.173805 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 02:33:38.180298 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 02:33:38.182688 kubelet[2291]: E0621 02:33:38.182640 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 02:33:38.182893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 21 02:33:38.200594 kubelet[2291]: I0621 02:33:38.200531 2291 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 02:33:38.200767 kubelet[2291]: I0621 02:33:38.200743 2291 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:33:38.200806 kubelet[2291]: I0621 02:33:38.200761 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:33:38.201112 kubelet[2291]: I0621 02:33:38.201007 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:33:38.202366 kubelet[2291]: E0621 02:33:38.202343 2291 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 21 02:33:38.274043 kubelet[2291]: E0621 02:33:38.273943 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Jun 21 02:33:38.302098 kubelet[2291]: I0621 02:33:38.302041 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 02:33:38.302466 kubelet[2291]: E0621 02:33:38.302442 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jun 21 02:33:38.390360 systemd[1]: Created slice kubepods-burstable-pod11d75a9b0f6b95d8a7290e382e1b643b.slice - libcontainer container kubepods-burstable-pod11d75a9b0f6b95d8a7290e382e1b643b.slice. Jun 21 02:33:38.403906 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jun 21 02:33:38.408571 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
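
The three kubepods-burstable slices above correspond to static pods the kubelet found under its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" entry earlier); it runs those Pod specs directly, without the API server. The general shape of such a manifest, with illustrative values only (the image is the pause image pulled earlier in this log):

    from pathlib import Path
    from textwrap import dedent

    # Illustrative static pod manifest; the kubelet watches this directory and
    # starts whatever Pod specs it finds there on its own.
    manifest = dedent("""\
        apiVersion: v1
        kind: Pod
        metadata:
          name: example-static
          namespace: kube-system
        spec:
          hostNetwork: true
          containers:
          - name: example
            image: registry.k8s.io/pause:3.10
    """)

    path = Path("/etc/kubernetes/manifests/example-static.yaml")
    path.write_text(manifest)
    print(f"wrote {path}; the kubelet picks it up without an API server")
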
Jun 21 02:33:38.475647 kubelet[2291]: I0621 02:33:38.475582 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:38.475647 kubelet[2291]: I0621 02:33:38.475620 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:38.475850 kubelet[2291]: I0621 02:33:38.475771 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:38.475850 kubelet[2291]: I0621 02:33:38.475809 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:38.475997 kubelet[2291]: I0621 02:33:38.475927 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:38.475997 kubelet[2291]: I0621 02:33:38.475962 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:38.475997 kubelet[2291]: I0621 02:33:38.475980 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:33:38.476106 kubelet[2291]: I0621 02:33:38.476093 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:38.476186 kubelet[2291]: I0621 02:33:38.476175 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:38.503659 kubelet[2291]: I0621 02:33:38.503640 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 02:33:38.504001 kubelet[2291]: E0621 02:33:38.503975 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jun 21 02:33:38.675238 kubelet[2291]: E0621 02:33:38.675116 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Jun 21 02:33:38.703138 containerd[1524]: time="2025-06-21T02:33:38.703099758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:11d75a9b0f6b95d8a7290e382e1b643b,Namespace:kube-system,Attempt:0,}" Jun 21 02:33:38.707734 containerd[1524]: time="2025-06-21T02:33:38.707672518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jun 21 02:33:38.711364 containerd[1524]: time="2025-06-21T02:33:38.711327238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jun 21 02:33:38.729606 containerd[1524]: time="2025-06-21T02:33:38.729495318Z" level=info msg="connecting to shim 6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978" address="unix:///run/containerd/s/70fb3939de2c12ed1e5d4a23d516c921b8aae400e345b738d4952e94b5f32629" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:33:38.734959 containerd[1524]: time="2025-06-21T02:33:38.734925798Z" level=info msg="connecting to shim 74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae" address="unix:///run/containerd/s/4cd6e4508746ecc3cf57c79e0ecc84ebecdc7759594f92f28f1bce497156d6d7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:33:38.741259 containerd[1524]: time="2025-06-21T02:33:38.741211958Z" level=info msg="connecting to shim 9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e" address="unix:///run/containerd/s/ada39dcc0dd4f539a4f6798f1136085f0002d34a5bccaec056d093f016f84d9f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:33:38.761023 systemd[1]: Started cri-containerd-6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978.scope - libcontainer container 6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978. Jun 21 02:33:38.762140 systemd[1]: Started cri-containerd-74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae.scope - libcontainer container 74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae. Jun 21 02:33:38.765872 systemd[1]: Started cri-containerd-9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e.scope - libcontainer container 9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e. 
Jun 21 02:33:38.803357 containerd[1524]: time="2025-06-21T02:33:38.803300798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e\"" Jun 21 02:33:38.806316 containerd[1524]: time="2025-06-21T02:33:38.806284758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae\"" Jun 21 02:33:38.807844 containerd[1524]: time="2025-06-21T02:33:38.807802278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:11d75a9b0f6b95d8a7290e382e1b643b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978\"" Jun 21 02:33:38.808338 containerd[1524]: time="2025-06-21T02:33:38.808311558Z" level=info msg="CreateContainer within sandbox \"9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 02:33:38.809072 containerd[1524]: time="2025-06-21T02:33:38.808930718Z" level=info msg="CreateContainer within sandbox \"74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 02:33:38.810806 containerd[1524]: time="2025-06-21T02:33:38.810777838Z" level=info msg="CreateContainer within sandbox \"6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 02:33:38.818307 containerd[1524]: time="2025-06-21T02:33:38.818276718Z" level=info msg="Container 629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:33:38.824066 containerd[1524]: time="2025-06-21T02:33:38.824027198Z" level=info msg="Container 4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:33:38.826274 containerd[1524]: time="2025-06-21T02:33:38.826088678Z" level=info msg="Container 073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:33:38.831582 containerd[1524]: time="2025-06-21T02:33:38.831539838Z" level=info msg="CreateContainer within sandbox \"9e7b17c9ab1492517088779b20fcdab8e3f28e03d30f3cb734ec4c3b8cdfa60e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0\"" Jun 21 02:33:38.833315 containerd[1524]: time="2025-06-21T02:33:38.833281158Z" level=info msg="CreateContainer within sandbox \"6e6331f65411e50c9fa3064d9a0126b3723772fad0007425fc924e3756829978\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02\"" Jun 21 02:33:38.834418 containerd[1524]: time="2025-06-21T02:33:38.834389478Z" level=info msg="StartContainer for \"629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0\"" Jun 21 02:33:38.835586 containerd[1524]: time="2025-06-21T02:33:38.835540678Z" level=info msg="connecting to shim 629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0" 
address="unix:///run/containerd/s/ada39dcc0dd4f539a4f6798f1136085f0002d34a5bccaec056d093f016f84d9f" protocol=ttrpc version=3 Jun 21 02:33:38.837181 containerd[1524]: time="2025-06-21T02:33:38.837150158Z" level=info msg="StartContainer for \"4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02\"" Jun 21 02:33:38.838299 containerd[1524]: time="2025-06-21T02:33:38.838269238Z" level=info msg="connecting to shim 4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02" address="unix:///run/containerd/s/70fb3939de2c12ed1e5d4a23d516c921b8aae400e345b738d4952e94b5f32629" protocol=ttrpc version=3 Jun 21 02:33:38.839422 containerd[1524]: time="2025-06-21T02:33:38.839386598Z" level=info msg="CreateContainer within sandbox \"74595602208e12adcb6574253e822f36339730373c7df367a51407bd44b00dae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043\"" Jun 21 02:33:38.840125 containerd[1524]: time="2025-06-21T02:33:38.840097558Z" level=info msg="StartContainer for \"073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043\"" Jun 21 02:33:38.841109 containerd[1524]: time="2025-06-21T02:33:38.841083118Z" level=info msg="connecting to shim 073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043" address="unix:///run/containerd/s/4cd6e4508746ecc3cf57c79e0ecc84ebecdc7759594f92f28f1bce497156d6d7" protocol=ttrpc version=3 Jun 21 02:33:38.862012 systemd[1]: Started cri-containerd-4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02.scope - libcontainer container 4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02. Jun 21 02:33:38.863232 systemd[1]: Started cri-containerd-629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0.scope - libcontainer container 629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0. Jun 21 02:33:38.867438 systemd[1]: Started cri-containerd-073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043.scope - libcontainer container 073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043. 
Jun 21 02:33:38.907479 kubelet[2291]: I0621 02:33:38.907402 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 02:33:38.908353 kubelet[2291]: E0621 02:33:38.908327 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jun 21 02:33:39.072099 containerd[1524]: time="2025-06-21T02:33:39.071827238Z" level=info msg="StartContainer for \"4b967039573aff6dc844f25d46212f4503a1b06d15799e34f40f8295ae3f1b02\" returns successfully" Jun 21 02:33:39.073154 containerd[1524]: time="2025-06-21T02:33:39.073119398Z" level=info msg="StartContainer for \"073ec10a9916d3e4e2ddd06c38d99a04a58d6ad17627af0ff1658d0eefe11043\" returns successfully" Jun 21 02:33:39.074101 containerd[1524]: time="2025-06-21T02:33:39.074026198Z" level=info msg="StartContainer for \"629854a1679f613d5ff3c70579bd4940b87f42b6082869c04d94f50fdfd4d6d0\" returns successfully" Jun 21 02:33:39.710605 kubelet[2291]: I0621 02:33:39.710309 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 02:33:40.420873 kubelet[2291]: E0621 02:33:40.419374 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 21 02:33:40.528628 kubelet[2291]: I0621 02:33:40.526309 2291 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 21 02:33:41.060817 kubelet[2291]: I0621 02:33:41.060772 2291 apiserver.go:52] "Watching apiserver" Jun 21 02:33:41.074094 kubelet[2291]: I0621 02:33:41.074048 2291 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 02:33:42.459918 systemd[1]: Reload requested from client PID 2564 ('systemctl') (unit session-7.scope)... Jun 21 02:33:42.459934 systemd[1]: Reloading... Jun 21 02:33:42.530883 zram_generator::config[2607]: No configuration found. Jun 21 02:33:42.599280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 02:33:42.695655 systemd[1]: Reloading finished in 235 ms. Jun 21 02:33:42.715294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:42.724126 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 02:33:42.724385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:42.724446 systemd[1]: kubelet.service: Consumed 1.501s CPU time, 129.3M memory peak. Jun 21 02:33:42.726064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 02:33:42.867577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 02:33:42.871700 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 02:33:42.912652 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:33:42.912652 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 21 02:33:42.912652 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 02:33:42.913011 kubelet[2649]: I0621 02:33:42.912696 2649 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 02:33:42.918059 kubelet[2649]: I0621 02:33:42.918013 2649 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 02:33:42.918059 kubelet[2649]: I0621 02:33:42.918049 2649 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 02:33:42.918328 kubelet[2649]: I0621 02:33:42.918297 2649 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 02:33:42.919723 kubelet[2649]: I0621 02:33:42.919705 2649 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 02:33:42.923511 kubelet[2649]: I0621 02:33:42.923403 2649 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 02:33:42.927303 kubelet[2649]: I0621 02:33:42.927270 2649 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 02:33:42.929986 kubelet[2649]: I0621 02:33:42.929950 2649 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 02:33:42.930103 kubelet[2649]: I0621 02:33:42.930075 2649 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 02:33:42.930237 kubelet[2649]: I0621 02:33:42.930199 2649 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 02:33:42.930418 kubelet[2649]: I0621 02:33:42.930229 2649 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 02:33:42.930494 kubelet[2649]: I0621 02:33:42.930419 2649 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 02:33:42.930494 kubelet[2649]: I0621 02:33:42.930429 2649 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 02:33:42.930494 kubelet[2649]: I0621 02:33:42.930465 2649 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:33:42.930599 kubelet[2649]: I0621 02:33:42.930585 2649 kubelet.go:408] "Attempting to sync node with API server" Jun 21 02:33:42.930626 kubelet[2649]: I0621 02:33:42.930603 2649 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 02:33:42.930626 kubelet[2649]: I0621 02:33:42.930622 2649 kubelet.go:314] "Adding apiserver pod source" Jun 21 02:33:42.930667 kubelet[2649]: I0621 02:33:42.930637 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 02:33:42.932769 kubelet[2649]: I0621 02:33:42.932720 2649 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 02:33:42.933301 kubelet[2649]: I0621 02:33:42.933283 2649 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 02:33:42.933869 kubelet[2649]: I0621 02:33:42.933711 2649 server.go:1274] "Started kubelet" Jun 21 02:33:42.934755 kubelet[2649]: I0621 02:33:42.934382 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 02:33:42.936650 kubelet[2649]: I0621 02:33:42.935000 2649 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 02:33:42.936650 kubelet[2649]: I0621 02:33:42.935002 2649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 02:33:42.936650 kubelet[2649]: I0621 02:33:42.935386 2649 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" 
Jun 21 02:33:42.939358 kubelet[2649]: E0621 02:33:42.939338 2649 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 02:33:42.939748 kubelet[2649]: I0621 02:33:42.939721 2649 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 02:33:42.941712 kubelet[2649]: E0621 02:33:42.941561 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 02:33:42.942024 kubelet[2649]: I0621 02:33:42.942001 2649 server.go:449] "Adding debug handlers to kubelet server" Jun 21 02:33:42.943106 kubelet[2649]: I0621 02:33:42.943081 2649 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 02:33:42.943682 kubelet[2649]: I0621 02:33:42.943607 2649 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 02:33:42.944046 kubelet[2649]: I0621 02:33:42.944031 2649 reconciler.go:26] "Reconciler: start to sync state" Jun 21 02:33:42.956810 kubelet[2649]: I0621 02:33:42.956775 2649 factory.go:221] Registration of the containerd container factory successfully Jun 21 02:33:42.956810 kubelet[2649]: I0621 02:33:42.956800 2649 factory.go:221] Registration of the systemd container factory successfully Jun 21 02:33:42.956945 kubelet[2649]: I0621 02:33:42.956895 2649 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 02:33:42.965146 kubelet[2649]: I0621 02:33:42.965113 2649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 02:33:42.966041 kubelet[2649]: I0621 02:33:42.965972 2649 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 02:33:42.966113 kubelet[2649]: I0621 02:33:42.966103 2649 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 02:33:42.966140 kubelet[2649]: I0621 02:33:42.966125 2649 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 02:33:42.967067 kubelet[2649]: E0621 02:33:42.966178 2649 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 02:33:42.991518 kubelet[2649]: I0621 02:33:42.991489 2649 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 02:33:42.991869 kubelet[2649]: I0621 02:33:42.991655 2649 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 02:33:42.991869 kubelet[2649]: I0621 02:33:42.991684 2649 state_mem.go:36] "Initialized new in-memory state store" Jun 21 02:33:42.991987 kubelet[2649]: I0621 02:33:42.991969 2649 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 02:33:42.992044 kubelet[2649]: I0621 02:33:42.992022 2649 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 02:33:42.992088 kubelet[2649]: I0621 02:33:42.992081 2649 policy_none.go:49] "None policy: Start" Jun 21 02:33:42.992769 kubelet[2649]: I0621 02:33:42.992749 2649 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 02:33:42.992769 kubelet[2649]: I0621 02:33:42.992773 2649 state_mem.go:35] "Initializing new in-memory state store" Jun 21 02:33:42.992941 kubelet[2649]: I0621 02:33:42.992925 2649 state_mem.go:75] "Updated machine memory state" Jun 21 02:33:42.997324 kubelet[2649]: I0621 02:33:42.997190 2649 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 02:33:42.998030 kubelet[2649]: I0621 02:33:42.997916 2649 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 02:33:42.999273 kubelet[2649]: I0621 02:33:42.998804 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 02:33:42.999515 kubelet[2649]: I0621 02:33:42.999438 2649 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 02:33:43.072481 kubelet[2649]: E0621 02:33:43.072405 2649 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 21 02:33:43.101085 kubelet[2649]: I0621 02:33:43.101032 2649 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 02:33:43.106855 kubelet[2649]: I0621 02:33:43.106762 2649 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jun 21 02:33:43.106855 kubelet[2649]: I0621 02:33:43.106826 2649 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 21 02:33:43.145466 kubelet[2649]: I0621 02:33:43.145419 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:43.145466 kubelet[2649]: I0621 02:33:43.145463 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:43.145608 kubelet[2649]: I0621 02:33:43.145484 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:43.145608 kubelet[2649]: I0621 02:33:43.145503 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:43.145608 kubelet[2649]: I0621 02:33:43.145523 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11d75a9b0f6b95d8a7290e382e1b643b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"11d75a9b0f6b95d8a7290e382e1b643b\") " pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:43.145608 kubelet[2649]: I0621 02:33:43.145547 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:43.145608 kubelet[2649]: I0621 02:33:43.145569 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:43.145710 kubelet[2649]: I0621 02:33:43.145586 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 02:33:43.145710 kubelet[2649]: I0621 02:33:43.145600 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 21 02:33:43.931852 kubelet[2649]: I0621 02:33:43.931784 2649 apiserver.go:52] "Watching apiserver" Jun 21 02:33:43.944676 kubelet[2649]: I0621 02:33:43.944626 2649 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 02:33:43.981880 kubelet[2649]: E0621 02:33:43.981749 2649 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 21 02:33:44.008011 kubelet[2649]: I0621 02:33:44.007931 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.007911958 podStartE2EDuration="1.007911958s" podCreationTimestamp="2025-06-21 02:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:33:44.006650998 +0000 UTC m=+1.131785041" watchObservedRunningTime="2025-06-21 02:33:44.007911958 +0000 UTC m=+1.133046001" Jun 21 02:33:44.008159 kubelet[2649]: I0621 02:33:44.008056 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.008050878 podStartE2EDuration="3.008050878s" podCreationTimestamp="2025-06-21 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:33:43.997305838 +0000 UTC m=+1.122439921" watchObservedRunningTime="2025-06-21 02:33:44.008050878 +0000 UTC m=+1.133184961" Jun 21 02:33:48.172787 kubelet[2649]: I0621 02:33:48.172645 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.172611001 podStartE2EDuration="5.172611001s" podCreationTimestamp="2025-06-21 02:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:33:44.016917678 +0000 UTC m=+1.142051761" watchObservedRunningTime="2025-06-21 02:33:48.172611001 +0000 UTC m=+5.297745084" Jun 21 02:33:49.248426 kubelet[2649]: I0621 02:33:49.248384 2649 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 02:33:49.248824 containerd[1524]: time="2025-06-21T02:33:49.248785449Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 02:33:49.249659 kubelet[2649]: I0621 02:33:49.249005 2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 02:33:50.193857 systemd[1]: Created slice kubepods-besteffort-pod1edb732a_dfea_4b7c_b1f9_afc4d7e77c09.slice - libcontainer container kubepods-besteffort-pod1edb732a_dfea_4b7c_b1f9_afc4d7e77c09.slice. 
Jun 21 02:33:50.195183 kubelet[2649]: I0621 02:33:50.195143 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1edb732a-dfea-4b7c-b1f9-afc4d7e77c09-kube-proxy\") pod \"kube-proxy-zk487\" (UID: \"1edb732a-dfea-4b7c-b1f9-afc4d7e77c09\") " pod="kube-system/kube-proxy-zk487" Jun 21 02:33:50.195262 kubelet[2649]: I0621 02:33:50.195192 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1edb732a-dfea-4b7c-b1f9-afc4d7e77c09-xtables-lock\") pod \"kube-proxy-zk487\" (UID: \"1edb732a-dfea-4b7c-b1f9-afc4d7e77c09\") " pod="kube-system/kube-proxy-zk487" Jun 21 02:33:50.195262 kubelet[2649]: I0621 02:33:50.195214 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246tx\" (UniqueName: \"kubernetes.io/projected/1edb732a-dfea-4b7c-b1f9-afc4d7e77c09-kube-api-access-246tx\") pod \"kube-proxy-zk487\" (UID: \"1edb732a-dfea-4b7c-b1f9-afc4d7e77c09\") " pod="kube-system/kube-proxy-zk487" Jun 21 02:33:50.195262 kubelet[2649]: I0621 02:33:50.195232 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1edb732a-dfea-4b7c-b1f9-afc4d7e77c09-lib-modules\") pod \"kube-proxy-zk487\" (UID: \"1edb732a-dfea-4b7c-b1f9-afc4d7e77c09\") " pod="kube-system/kube-proxy-zk487" Jun 21 02:33:50.313016 systemd[1]: Created slice kubepods-besteffort-pod7247bde3_9d5f_4644_b0e3_d20c93e7ec86.slice - libcontainer container kubepods-besteffort-pod7247bde3_9d5f_4644_b0e3_d20c93e7ec86.slice. Jun 21 02:33:50.496649 kubelet[2649]: I0621 02:33:50.496508 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7247bde3-9d5f-4644-b0e3-d20c93e7ec86-var-lib-calico\") pod \"tigera-operator-6c78c649f6-974kb\" (UID: \"7247bde3-9d5f-4644-b0e3-d20c93e7ec86\") " pod="tigera-operator/tigera-operator-6c78c649f6-974kb" Jun 21 02:33:50.496649 kubelet[2649]: I0621 02:33:50.496572 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6wc\" (UniqueName: \"kubernetes.io/projected/7247bde3-9d5f-4644-b0e3-d20c93e7ec86-kube-api-access-hg6wc\") pod \"tigera-operator-6c78c649f6-974kb\" (UID: \"7247bde3-9d5f-4644-b0e3-d20c93e7ec86\") " pod="tigera-operator/tigera-operator-6c78c649f6-974kb" Jun 21 02:33:50.506199 containerd[1524]: time="2025-06-21T02:33:50.506145868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zk487,Uid:1edb732a-dfea-4b7c-b1f9-afc4d7e77c09,Namespace:kube-system,Attempt:0,}" Jun 21 02:33:50.521346 containerd[1524]: time="2025-06-21T02:33:50.521306901Z" level=info msg="connecting to shim 74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb" address="unix:///run/containerd/s/9ee16b5cb7a807a239cdb20812176f7d4f6ae128a21d317c59c549a17c625723" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:33:50.545013 systemd[1]: Started cri-containerd-74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb.scope - libcontainer container 74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb. 
Jun 21 02:33:50.566138 containerd[1524]: time="2025-06-21T02:33:50.566093563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zk487,Uid:1edb732a-dfea-4b7c-b1f9-afc4d7e77c09,Namespace:kube-system,Attempt:0,} returns sandbox id \"74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb\"" Jun 21 02:33:50.572422 containerd[1524]: time="2025-06-21T02:33:50.572386584Z" level=info msg="CreateContainer within sandbox \"74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 02:33:50.585715 containerd[1524]: time="2025-06-21T02:33:50.584248948Z" level=info msg="Container a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:33:50.587630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295914730.mount: Deactivated successfully. Jun 21 02:33:50.594281 containerd[1524]: time="2025-06-21T02:33:50.594234557Z" level=info msg="CreateContainer within sandbox \"74c9b8da0a5332a68035ad268717b2665dac3414077cd25fd7eb1372c75668bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d\"" Jun 21 02:33:50.594828 containerd[1524]: time="2025-06-21T02:33:50.594787395Z" level=info msg="StartContainer for \"a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d\"" Jun 21 02:33:50.597132 containerd[1524]: time="2025-06-21T02:33:50.597095748Z" level=info msg="connecting to shim a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d" address="unix:///run/containerd/s/9ee16b5cb7a807a239cdb20812176f7d4f6ae128a21d317c59c549a17c625723" protocol=ttrpc version=3 Jun 21 02:33:50.616543 containerd[1524]: time="2025-06-21T02:33:50.616499608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-974kb,Uid:7247bde3-9d5f-4644-b0e3-d20c93e7ec86,Namespace:tigera-operator,Attempt:0,}" Jun 21 02:33:50.618984 systemd[1]: Started cri-containerd-a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d.scope - libcontainer container a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d. Jun 21 02:33:50.632600 containerd[1524]: time="2025-06-21T02:33:50.632547319Z" level=info msg="connecting to shim 26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda" address="unix:///run/containerd/s/03ae9d43d29088a2759429874471b415dd9324d73c16d50d5c9592786a9e2a88" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:33:50.654999 systemd[1]: Started cri-containerd-26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda.scope - libcontainer container 26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda. 
Jun 21 02:33:50.661043 containerd[1524]: time="2025-06-21T02:33:50.661006312Z" level=info msg="StartContainer for \"a84a821b85fd1e815399fe51af501ae583fd6b74fb615d0024bf1785bc49892d\" returns successfully" Jun 21 02:33:50.696540 containerd[1524]: time="2025-06-21T02:33:50.696503202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-974kb,Uid:7247bde3-9d5f-4644-b0e3-d20c93e7ec86,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda\"" Jun 21 02:33:50.698518 containerd[1524]: time="2025-06-21T02:33:50.698480236Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 02:33:51.992879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005166765.mount: Deactivated successfully. Jun 21 02:33:52.520330 containerd[1524]: time="2025-06-21T02:33:52.520263619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:52.520940 containerd[1524]: time="2025-06-21T02:33:52.520663298Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 21 02:33:52.521665 containerd[1524]: time="2025-06-21T02:33:52.521612135Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:52.523935 containerd[1524]: time="2025-06-21T02:33:52.523884649Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:33:52.524586 containerd[1524]: time="2025-06-21T02:33:52.524551047Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 1.826028411s" Jun 21 02:33:52.524586 containerd[1524]: time="2025-06-21T02:33:52.524584087Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 21 02:33:52.527145 containerd[1524]: time="2025-06-21T02:33:52.527112681Z" level=info msg="CreateContainer within sandbox \"26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 02:33:52.533630 containerd[1524]: time="2025-06-21T02:33:52.533594023Z" level=info msg="Container 06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:33:52.538132 containerd[1524]: time="2025-06-21T02:33:52.538093771Z" level=info msg="CreateContainer within sandbox \"26ca4917986289dff52ca723f6cd7939588657c51dfb83c4d4126ac302b5fbda\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f\"" Jun 21 02:33:52.538583 containerd[1524]: time="2025-06-21T02:33:52.538558450Z" level=info msg="StartContainer for \"06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f\"" Jun 21 02:33:52.539587 containerd[1524]: time="2025-06-21T02:33:52.539544567Z" level=info msg="connecting to shim 
06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f" address="unix:///run/containerd/s/03ae9d43d29088a2759429874471b415dd9324d73c16d50d5c9592786a9e2a88" protocol=ttrpc version=3 Jun 21 02:33:52.560973 systemd[1]: Started cri-containerd-06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f.scope - libcontainer container 06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f. Jun 21 02:33:52.588206 containerd[1524]: time="2025-06-21T02:33:52.588159076Z" level=info msg="StartContainer for \"06c68e682d0d71bbba555e7553f2b59c7e7e474590b2db9a99329a06e16c197f\" returns successfully" Jun 21 02:33:52.771867 kubelet[2649]: I0621 02:33:52.771651 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zk487" podStartSLOduration=2.77163558 podStartE2EDuration="2.77163558s" podCreationTimestamp="2025-06-21 02:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:33:50.999973709 +0000 UTC m=+8.125107792" watchObservedRunningTime="2025-06-21 02:33:52.77163558 +0000 UTC m=+9.896769663" Jun 21 02:33:53.703970 kubelet[2649]: I0621 02:33:53.703908 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6c78c649f6-974kb" podStartSLOduration=1.876004613 podStartE2EDuration="3.703803219s" podCreationTimestamp="2025-06-21 02:33:50 +0000 UTC" firstStartedPulling="2025-06-21 02:33:50.697580519 +0000 UTC m=+7.822714602" lastFinishedPulling="2025-06-21 02:33:52.525379125 +0000 UTC m=+9.650513208" observedRunningTime="2025-06-21 02:33:53.013809807 +0000 UTC m=+10.138943890" watchObservedRunningTime="2025-06-21 02:33:53.703803219 +0000 UTC m=+10.828937302" Jun 21 02:33:57.074889 update_engine[1512]: I20250621 02:33:57.074362 1512 update_attempter.cc:509] Updating boot flags... Jun 21 02:33:57.898414 sudo[1734]: pam_unix(sudo:session): session closed for user root Jun 21 02:33:57.901050 sshd[1733]: Connection closed by 10.0.0.1 port 33164 Jun 21 02:33:57.901862 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jun 21 02:33:57.906055 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Jun 21 02:33:57.907118 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:33164.service: Deactivated successfully. Jun 21 02:33:57.909468 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 02:33:57.909659 systemd[1]: session-7.scope: Consumed 8.603s CPU time, 228.3M memory peak. Jun 21 02:33:57.912945 systemd-logind[1507]: Removed session 7. Jun 21 02:34:02.967680 systemd[1]: Created slice kubepods-besteffort-pod46fa1e8b_8186_4b8a_82ce_fdcb95f853ed.slice - libcontainer container kubepods-besteffort-pod46fa1e8b_8186_4b8a_82ce_fdcb95f853ed.slice. 
Jun 21 02:34:03.077190 kubelet[2649]: I0621 02:34:03.077146 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46fa1e8b-8186-4b8a-82ce-fdcb95f853ed-tigera-ca-bundle\") pod \"calico-typha-8d5b74c85-hv9wv\" (UID: \"46fa1e8b-8186-4b8a-82ce-fdcb95f853ed\") " pod="calico-system/calico-typha-8d5b74c85-hv9wv" Jun 21 02:34:03.077190 kubelet[2649]: I0621 02:34:03.077196 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcw2\" (UniqueName: \"kubernetes.io/projected/46fa1e8b-8186-4b8a-82ce-fdcb95f853ed-kube-api-access-pxcw2\") pod \"calico-typha-8d5b74c85-hv9wv\" (UID: \"46fa1e8b-8186-4b8a-82ce-fdcb95f853ed\") " pod="calico-system/calico-typha-8d5b74c85-hv9wv" Jun 21 02:34:03.077582 kubelet[2649]: I0621 02:34:03.077225 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/46fa1e8b-8186-4b8a-82ce-fdcb95f853ed-typha-certs\") pod \"calico-typha-8d5b74c85-hv9wv\" (UID: \"46fa1e8b-8186-4b8a-82ce-fdcb95f853ed\") " pod="calico-system/calico-typha-8d5b74c85-hv9wv" Jun 21 02:34:03.251327 systemd[1]: Created slice kubepods-besteffort-podbbd1de6e_a20a_4d17_97d6_ce01d2e6bca9.slice - libcontainer container kubepods-besteffort-podbbd1de6e_a20a_4d17_97d6_ce01d2e6bca9.slice. Jun 21 02:34:03.292097 containerd[1524]: time="2025-06-21T02:34:03.292043491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d5b74c85-hv9wv,Uid:46fa1e8b-8186-4b8a-82ce-fdcb95f853ed,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:03.325910 containerd[1524]: time="2025-06-21T02:34:03.325870926Z" level=info msg="connecting to shim 8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a" address="unix:///run/containerd/s/74ac7ba8a793b98223d071362262de1af30fcfd88a991477bd859ddabad72653" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:03.373991 systemd[1]: Started cri-containerd-8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a.scope - libcontainer container 8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a. 
Jun 21 02:34:03.379089 kubelet[2649]: I0621 02:34:03.378935 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-flexvol-driver-host\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379089 kubelet[2649]: I0621 02:34:03.378986 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-cni-net-dir\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379089 kubelet[2649]: I0621 02:34:03.379005 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-xtables-lock\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379089 kubelet[2649]: I0621 02:34:03.379024 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf5p\" (UniqueName: \"kubernetes.io/projected/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-kube-api-access-gqf5p\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379382 kubelet[2649]: I0621 02:34:03.379127 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-policysync\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379382 kubelet[2649]: I0621 02:34:03.379356 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-node-certs\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379445 kubelet[2649]: I0621 02:34:03.379387 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-var-run-calico\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379445 kubelet[2649]: I0621 02:34:03.379405 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-cni-bin-dir\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379496 kubelet[2649]: I0621 02:34:03.379459 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-var-lib-calico\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379496 kubelet[2649]: I0621 02:34:03.379478 2649 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-cni-log-dir\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379640 kubelet[2649]: I0621 02:34:03.379609 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-lib-modules\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.379640 kubelet[2649]: I0621 02:34:03.379638 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9-tigera-ca-bundle\") pod \"calico-node-jchj4\" (UID: \"bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9\") " pod="calico-system/calico-node-jchj4" Jun 21 02:34:03.430945 containerd[1524]: time="2025-06-21T02:34:03.430349748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d5b74c85-hv9wv,Uid:46fa1e8b-8186-4b8a-82ce-fdcb95f853ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a\"" Jun 21 02:34:03.437008 kubelet[2649]: E0621 02:34:03.436796 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p5zs" podUID="beef3fba-32a8-4ff1-9b42-beb45fd36a99" Jun 21 02:34:03.437486 containerd[1524]: time="2025-06-21T02:34:03.437447818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 02:34:03.481588 kubelet[2649]: E0621 02:34:03.481364 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.481588 kubelet[2649]: W0621 02:34:03.481397 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.481588 kubelet[2649]: E0621 02:34:03.481423 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.481736 kubelet[2649]: E0621 02:34:03.481603 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.481736 kubelet[2649]: W0621 02:34:03.481612 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.481736 kubelet[2649]: E0621 02:34:03.481636 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.486484 kubelet[2649]: E0621 02:34:03.486415 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.486484 kubelet[2649]: W0621 02:34:03.486459 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.486484 kubelet[2649]: E0621 02:34:03.486480 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.486718 kubelet[2649]: E0621 02:34:03.486686 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.486718 kubelet[2649]: W0621 02:34:03.486713 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.486919 kubelet[2649]: E0621 02:34:03.486730 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.487072 kubelet[2649]: E0621 02:34:03.487053 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.487244 kubelet[2649]: W0621 02:34:03.487128 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.487244 kubelet[2649]: E0621 02:34:03.487158 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.487383 kubelet[2649]: E0621 02:34:03.487370 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.487583 kubelet[2649]: W0621 02:34:03.487448 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.487583 kubelet[2649]: E0621 02:34:03.487490 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.487734 kubelet[2649]: E0621 02:34:03.487720 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.487795 kubelet[2649]: W0621 02:34:03.487783 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.487963 kubelet[2649]: E0621 02:34:03.487889 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.488087 kubelet[2649]: E0621 02:34:03.488074 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.488262 kubelet[2649]: W0621 02:34:03.488140 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.488262 kubelet[2649]: E0621 02:34:03.488172 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.488409 kubelet[2649]: E0621 02:34:03.488395 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.488472 kubelet[2649]: W0621 02:34:03.488460 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.488608 kubelet[2649]: E0621 02:34:03.488584 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.488850 kubelet[2649]: E0621 02:34:03.488767 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.488946 kubelet[2649]: W0621 02:34:03.488930 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.489038 kubelet[2649]: E0621 02:34:03.489011 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.491391 kubelet[2649]: E0621 02:34:03.491352 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.491391 kubelet[2649]: W0621 02:34:03.491369 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.491391 kubelet[2649]: E0621 02:34:03.491380 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.555233 containerd[1524]: time="2025-06-21T02:34:03.555065342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jchj4,Uid:bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:03.571463 containerd[1524]: time="2025-06-21T02:34:03.571401240Z" level=info msg="connecting to shim d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3" address="unix:///run/containerd/s/0df201aa76d113023e74863f5f9fe5b33acd7e23939b717232c26f19aa0d165e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:03.581564 kubelet[2649]: E0621 02:34:03.581535 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.581564 kubelet[2649]: W0621 02:34:03.581559 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.581705 kubelet[2649]: E0621 02:34:03.581579 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.581705 kubelet[2649]: I0621 02:34:03.581666 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/beef3fba-32a8-4ff1-9b42-beb45fd36a99-socket-dir\") pod \"csi-node-driver-2p5zs\" (UID: \"beef3fba-32a8-4ff1-9b42-beb45fd36a99\") " pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:03.582028 kubelet[2649]: E0621 02:34:03.582012 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.582028 kubelet[2649]: W0621 02:34:03.582027 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.582199 kubelet[2649]: E0621 02:34:03.582098 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.582199 kubelet[2649]: I0621 02:34:03.582119 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/beef3fba-32a8-4ff1-9b42-beb45fd36a99-varrun\") pod \"csi-node-driver-2p5zs\" (UID: \"beef3fba-32a8-4ff1-9b42-beb45fd36a99\") " pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:03.582464 kubelet[2649]: E0621 02:34:03.582447 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.582464 kubelet[2649]: W0621 02:34:03.582463 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.582676 kubelet[2649]: E0621 02:34:03.582660 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.582708 kubelet[2649]: I0621 02:34:03.582688 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/beef3fba-32a8-4ff1-9b42-beb45fd36a99-kubelet-dir\") pod \"csi-node-driver-2p5zs\" (UID: \"beef3fba-32a8-4ff1-9b42-beb45fd36a99\") " pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:03.582918 kubelet[2649]: E0621 02:34:03.582903 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.582918 kubelet[2649]: W0621 02:34:03.582916 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.582979 kubelet[2649]: E0621 02:34:03.582927 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.583115 kubelet[2649]: E0621 02:34:03.583102 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.583115 kubelet[2649]: W0621 02:34:03.583113 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.583163 kubelet[2649]: E0621 02:34:03.583121 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.583290 kubelet[2649]: E0621 02:34:03.583279 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.583314 kubelet[2649]: W0621 02:34:03.583291 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.583336 kubelet[2649]: E0621 02:34:03.583311 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.583605 kubelet[2649]: E0621 02:34:03.583588 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.583605 kubelet[2649]: W0621 02:34:03.583603 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.583670 kubelet[2649]: E0621 02:34:03.583618 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.583817 kubelet[2649]: E0621 02:34:03.583804 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.583866 kubelet[2649]: W0621 02:34:03.583817 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.584013 kubelet[2649]: E0621 02:34:03.583938 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.584013 kubelet[2649]: I0621 02:34:03.584001 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qkj\" (UniqueName: \"kubernetes.io/projected/beef3fba-32a8-4ff1-9b42-beb45fd36a99-kube-api-access-p7qkj\") pod \"csi-node-driver-2p5zs\" (UID: \"beef3fba-32a8-4ff1-9b42-beb45fd36a99\") " pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:03.584180 kubelet[2649]: E0621 02:34:03.584164 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.584241 kubelet[2649]: W0621 02:34:03.584230 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.584297 kubelet[2649]: E0621 02:34:03.584285 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.584514 kubelet[2649]: E0621 02:34:03.584500 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.584573 kubelet[2649]: W0621 02:34:03.584562 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.584621 kubelet[2649]: E0621 02:34:03.584611 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.585562 kubelet[2649]: E0621 02:34:03.585546 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.585650 kubelet[2649]: W0621 02:34:03.585636 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.585711 kubelet[2649]: E0621 02:34:03.585700 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.585769 kubelet[2649]: I0621 02:34:03.585758 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/beef3fba-32a8-4ff1-9b42-beb45fd36a99-registration-dir\") pod \"csi-node-driver-2p5zs\" (UID: \"beef3fba-32a8-4ff1-9b42-beb45fd36a99\") " pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:03.586065 kubelet[2649]: E0621 02:34:03.586021 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.586065 kubelet[2649]: W0621 02:34:03.586036 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.586065 kubelet[2649]: E0621 02:34:03.586047 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.586315 kubelet[2649]: E0621 02:34:03.586227 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.586315 kubelet[2649]: W0621 02:34:03.586241 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.586315 kubelet[2649]: E0621 02:34:03.586254 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.586408 kubelet[2649]: E0621 02:34:03.586384 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.586408 kubelet[2649]: W0621 02:34:03.586399 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.586408 kubelet[2649]: E0621 02:34:03.586408 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.586647 kubelet[2649]: E0621 02:34:03.586609 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.586647 kubelet[2649]: W0621 02:34:03.586622 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.586647 kubelet[2649]: E0621 02:34:03.586630 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.593015 systemd[1]: Started cri-containerd-d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3.scope - libcontainer container d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3. 
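The repeated driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplets above come from the kubelet's FlexVolume plugin probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory (presumably laid down for Calico's pod2daemon/CSI socket driver, whose flexvol-driver container is created later in this log), and runs the uds binary with the init argument. Because that executable is reported as not found in $PATH, the call produces empty output, and decoding "" as the expected JSON status reply fails with "unexpected end of JSON input". In this log the messages are noisy but not fatal; sandbox creation, image pulls and container starts proceed below. As a rough illustration only (nothing in this sketch is taken from the node itself), a FlexVolume driver answering that init call is conventionally expected to print a JSON status object on stdout; the minimal Go program below shows the shape of such a reply, with the exact capability fields assumed from the documented FlexVolume convention rather than from this log.

// Minimal sketch of a FlexVolume driver entry point, for illustration only.
// The kubelet invokes the driver binary as "<driver> init" and expects a JSON
// status object on stdout; an empty reply is what yields the
// "unexpected end of JSON input" errors seen in this log.
package main

import (
        "encoding/json"
        "fmt"
        "os"
)

// driverStatus mirrors the documented FlexVolume reply shape (assumed here).
type driverStatus struct {
        Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
}

func main() {
        if len(os.Args) < 2 {
                reply(driverStatus{Status: "Failure", Message: "no command given"})
                os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
                // Advertise that this driver does not implement attach/detach,
                // so the kubelet calls mount/unmount directly on the node.
                reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
                // Any verb the driver does not handle must still answer with valid JSON.
                reply(driverStatus{Status: "Not supported", Message: "unsupported command: " + os.Args[1]})
        }
}

Built and placed at the path the kubelet is probing (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, taken from the log lines above), a binary of this shape would satisfy the init probe and quiet the repeated unmarshal errors; the real Calico-provided driver additionally implements the mount-related verbs.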
Jun 21 02:34:03.624980 containerd[1524]: time="2025-06-21T02:34:03.624937169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jchj4,Uid:bbd1de6e-a20a-4d17-97d6-ce01d2e6bca9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\"" Jun 21 02:34:03.687194 kubelet[2649]: E0621 02:34:03.687082 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.687194 kubelet[2649]: W0621 02:34:03.687116 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.687194 kubelet[2649]: E0621 02:34:03.687137 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.687916 kubelet[2649]: E0621 02:34:03.687897 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.688231 kubelet[2649]: W0621 02:34:03.688012 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.688231 kubelet[2649]: E0621 02:34:03.688130 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.688466 kubelet[2649]: E0621 02:34:03.688445 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.688661 kubelet[2649]: W0621 02:34:03.688542 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.688661 kubelet[2649]: E0621 02:34:03.688568 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.689031 kubelet[2649]: E0621 02:34:03.688916 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.689031 kubelet[2649]: W0621 02:34:03.688948 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.689031 kubelet[2649]: E0621 02:34:03.688991 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.689354 kubelet[2649]: E0621 02:34:03.689338 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.689888 kubelet[2649]: W0621 02:34:03.689699 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.689888 kubelet[2649]: E0621 02:34:03.689755 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.690317 kubelet[2649]: E0621 02:34:03.690303 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.690556 kubelet[2649]: W0621 02:34:03.690367 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.690556 kubelet[2649]: E0621 02:34:03.690409 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.691267 kubelet[2649]: E0621 02:34:03.691089 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.691267 kubelet[2649]: W0621 02:34:03.691105 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.691267 kubelet[2649]: E0621 02:34:03.691228 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.691946 kubelet[2649]: E0621 02:34:03.691820 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.692455 kubelet[2649]: W0621 02:34:03.692344 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.692455 kubelet[2649]: E0621 02:34:03.692401 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.694821 kubelet[2649]: E0621 02:34:03.694805 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.695023 kubelet[2649]: W0621 02:34:03.694931 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.695087 kubelet[2649]: E0621 02:34:03.694986 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.695169 kubelet[2649]: E0621 02:34:03.695157 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.695216 kubelet[2649]: W0621 02:34:03.695206 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.695324 kubelet[2649]: E0621 02:34:03.695296 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.695488 kubelet[2649]: E0621 02:34:03.695476 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.695546 kubelet[2649]: W0621 02:34:03.695535 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.695627 kubelet[2649]: E0621 02:34:03.695605 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.696485 kubelet[2649]: E0621 02:34:03.696389 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.696485 kubelet[2649]: W0621 02:34:03.696406 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.696485 kubelet[2649]: E0621 02:34:03.696441 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.697432 kubelet[2649]: E0621 02:34:03.697388 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.697432 kubelet[2649]: W0621 02:34:03.697402 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.697432 kubelet[2649]: E0621 02:34:03.697512 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.697881 kubelet[2649]: E0621 02:34:03.697830 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.697881 kubelet[2649]: W0621 02:34:03.697873 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.698146 kubelet[2649]: E0621 02:34:03.697944 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.698146 kubelet[2649]: E0621 02:34:03.698007 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.698146 kubelet[2649]: W0621 02:34:03.698015 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.698146 kubelet[2649]: E0621 02:34:03.698042 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.698146 kubelet[2649]: E0621 02:34:03.698132 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.698146 kubelet[2649]: W0621 02:34:03.698140 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.699013 kubelet[2649]: E0621 02:34:03.698282 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.699013 kubelet[2649]: W0621 02:34:03.698288 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.699013 kubelet[2649]: E0621 02:34:03.698292 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.699013 kubelet[2649]: E0621 02:34:03.698308 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.699912 kubelet[2649]: E0621 02:34:03.699896 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.699912 kubelet[2649]: W0621 02:34:03.699907 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700002 kubelet[2649]: E0621 02:34:03.699928 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.700082 kubelet[2649]: E0621 02:34:03.700070 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.700082 kubelet[2649]: W0621 02:34:03.700080 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700134 kubelet[2649]: E0621 02:34:03.700112 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.700220 kubelet[2649]: E0621 02:34:03.700207 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.700220 kubelet[2649]: W0621 02:34:03.700217 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700305 kubelet[2649]: E0621 02:34:03.700290 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.700409 kubelet[2649]: E0621 02:34:03.700397 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.700409 kubelet[2649]: W0621 02:34:03.700407 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700470 kubelet[2649]: E0621 02:34:03.700448 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.700568 kubelet[2649]: E0621 02:34:03.700545 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.700568 kubelet[2649]: W0621 02:34:03.700556 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700918 kubelet[2649]: E0621 02:34:03.700634 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.700918 kubelet[2649]: E0621 02:34:03.700722 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.700918 kubelet[2649]: W0621 02:34:03.700728 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.700918 kubelet[2649]: E0621 02:34:03.700808 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.701369 kubelet[2649]: E0621 02:34:03.701352 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.701369 kubelet[2649]: W0621 02:34:03.701367 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.701488 kubelet[2649]: E0621 02:34:03.701383 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:03.702090 kubelet[2649]: E0621 02:34:03.702064 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.702090 kubelet[2649]: W0621 02:34:03.702080 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.702166 kubelet[2649]: E0621 02:34:03.702094 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:03.717725 kubelet[2649]: E0621 02:34:03.717692 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:03.717725 kubelet[2649]: W0621 02:34:03.717713 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:03.717725 kubelet[2649]: E0621 02:34:03.717732 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:04.341103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872815371.mount: Deactivated successfully. Jun 21 02:34:04.967547 kubelet[2649]: E0621 02:34:04.967453 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p5zs" podUID="beef3fba-32a8-4ff1-9b42-beb45fd36a99" Jun 21 02:34:05.476772 containerd[1524]: time="2025-06-21T02:34:05.476707188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:05.477789 containerd[1524]: time="2025-06-21T02:34:05.477749666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 21 02:34:05.478747 containerd[1524]: time="2025-06-21T02:34:05.478721225Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:05.480792 containerd[1524]: time="2025-06-21T02:34:05.480740943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:05.481457 containerd[1524]: time="2025-06-21T02:34:05.481295662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 2.043813524s" Jun 21 02:34:05.481457 containerd[1524]: time="2025-06-21T02:34:05.481326622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 21 
02:34:05.482136 containerd[1524]: time="2025-06-21T02:34:05.482116221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 02:34:05.502515 containerd[1524]: time="2025-06-21T02:34:05.502445317Z" level=info msg="CreateContainer within sandbox \"8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 02:34:05.511892 containerd[1524]: time="2025-06-21T02:34:05.511011987Z" level=info msg="Container 0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:05.519532 containerd[1524]: time="2025-06-21T02:34:05.519486938Z" level=info msg="CreateContainer within sandbox \"8a2ec7e09d88a26e9722d04b9bceb9b955671c88ecf87849212859fa2834f10a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52\"" Jun 21 02:34:05.520128 containerd[1524]: time="2025-06-21T02:34:05.520086537Z" level=info msg="StartContainer for \"0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52\"" Jun 21 02:34:05.523161 containerd[1524]: time="2025-06-21T02:34:05.523089333Z" level=info msg="connecting to shim 0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52" address="unix:///run/containerd/s/74ac7ba8a793b98223d071362262de1af30fcfd88a991477bd859ddabad72653" protocol=ttrpc version=3 Jun 21 02:34:05.546003 systemd[1]: Started cri-containerd-0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52.scope - libcontainer container 0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52. Jun 21 02:34:05.589350 containerd[1524]: time="2025-06-21T02:34:05.589307696Z" level=info msg="StartContainer for \"0e8349192d34039b45404cfee2f7beb599dde068bd94fb468513e3cd6827cd52\" returns successfully" Jun 21 02:34:06.027955 kubelet[2649]: I0621 02:34:06.027764 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d5b74c85-hv9wv" podStartSLOduration=1.9827013039999999 podStartE2EDuration="4.027746026s" podCreationTimestamp="2025-06-21 02:34:02 +0000 UTC" firstStartedPulling="2025-06-21 02:34:03.436918939 +0000 UTC m=+20.562053062" lastFinishedPulling="2025-06-21 02:34:05.481963701 +0000 UTC m=+22.607097784" observedRunningTime="2025-06-21 02:34:06.027503226 +0000 UTC m=+23.152637309" watchObservedRunningTime="2025-06-21 02:34:06.027746026 +0000 UTC m=+23.152880109" Jun 21 02:34:06.100895 kubelet[2649]: E0621 02:34:06.100863 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.100895 kubelet[2649]: W0621 02:34:06.100886 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.100895 kubelet[2649]: E0621 02:34:06.100904 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.101089 kubelet[2649]: E0621 02:34:06.101074 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101089 kubelet[2649]: W0621 02:34:06.101085 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101134 kubelet[2649]: E0621 02:34:06.101095 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.101286 kubelet[2649]: E0621 02:34:06.101264 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101286 kubelet[2649]: W0621 02:34:06.101276 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101286 kubelet[2649]: E0621 02:34:06.101285 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.101474 kubelet[2649]: E0621 02:34:06.101436 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101474 kubelet[2649]: W0621 02:34:06.101461 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101474 kubelet[2649]: E0621 02:34:06.101470 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.101651 kubelet[2649]: E0621 02:34:06.101638 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101651 kubelet[2649]: W0621 02:34:06.101649 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101702 kubelet[2649]: E0621 02:34:06.101658 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.101797 kubelet[2649]: E0621 02:34:06.101787 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101818 kubelet[2649]: W0621 02:34:06.101797 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101818 kubelet[2649]: E0621 02:34:06.101806 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.101962 kubelet[2649]: E0621 02:34:06.101951 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.101984 kubelet[2649]: W0621 02:34:06.101961 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.101984 kubelet[2649]: E0621 02:34:06.101971 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.102102 kubelet[2649]: E0621 02:34:06.102092 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102122 kubelet[2649]: W0621 02:34:06.102101 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102122 kubelet[2649]: E0621 02:34:06.102109 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.102264 kubelet[2649]: E0621 02:34:06.102253 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102264 kubelet[2649]: W0621 02:34:06.102262 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102315 kubelet[2649]: E0621 02:34:06.102270 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.102393 kubelet[2649]: E0621 02:34:06.102382 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102393 kubelet[2649]: W0621 02:34:06.102392 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102445 kubelet[2649]: E0621 02:34:06.102400 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.102534 kubelet[2649]: E0621 02:34:06.102524 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102534 kubelet[2649]: W0621 02:34:06.102533 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102580 kubelet[2649]: E0621 02:34:06.102541 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.102677 kubelet[2649]: E0621 02:34:06.102666 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102702 kubelet[2649]: W0621 02:34:06.102677 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102702 kubelet[2649]: E0621 02:34:06.102685 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.102859 kubelet[2649]: E0621 02:34:06.102827 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.102859 kubelet[2649]: W0621 02:34:06.102858 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.102912 kubelet[2649]: E0621 02:34:06.102867 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.103003 kubelet[2649]: E0621 02:34:06.102991 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.103003 kubelet[2649]: W0621 02:34:06.103001 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.103044 kubelet[2649]: E0621 02:34:06.103008 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.103136 kubelet[2649]: E0621 02:34:06.103126 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.103156 kubelet[2649]: W0621 02:34:06.103135 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.103156 kubelet[2649]: E0621 02:34:06.103143 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.111517 kubelet[2649]: E0621 02:34:06.111487 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.111517 kubelet[2649]: W0621 02:34:06.111507 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.111517 kubelet[2649]: E0621 02:34:06.111520 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.111715 kubelet[2649]: E0621 02:34:06.111693 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.111715 kubelet[2649]: W0621 02:34:06.111705 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.111769 kubelet[2649]: E0621 02:34:06.111719 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.111885 kubelet[2649]: E0621 02:34:06.111870 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.111885 kubelet[2649]: W0621 02:34:06.111880 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.111927 kubelet[2649]: E0621 02:34:06.111894 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.112066 kubelet[2649]: E0621 02:34:06.112047 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112066 kubelet[2649]: W0621 02:34:06.112058 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.112122 kubelet[2649]: E0621 02:34:06.112072 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.112232 kubelet[2649]: E0621 02:34:06.112219 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112232 kubelet[2649]: W0621 02:34:06.112229 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.112288 kubelet[2649]: E0621 02:34:06.112242 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.112381 kubelet[2649]: E0621 02:34:06.112370 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112381 kubelet[2649]: W0621 02:34:06.112380 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.112445 kubelet[2649]: E0621 02:34:06.112393 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.112559 kubelet[2649]: E0621 02:34:06.112549 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112586 kubelet[2649]: W0621 02:34:06.112559 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.112586 kubelet[2649]: E0621 02:34:06.112572 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.112815 kubelet[2649]: E0621 02:34:06.112797 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112859 kubelet[2649]: W0621 02:34:06.112817 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.112859 kubelet[2649]: E0621 02:34:06.112846 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.112999 kubelet[2649]: E0621 02:34:06.112988 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.112999 kubelet[2649]: W0621 02:34:06.112998 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.113057 kubelet[2649]: E0621 02:34:06.113024 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.113137 kubelet[2649]: E0621 02:34:06.113127 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.113137 kubelet[2649]: W0621 02:34:06.113136 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.113197 kubelet[2649]: E0621 02:34:06.113149 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.113287 kubelet[2649]: E0621 02:34:06.113277 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.113320 kubelet[2649]: W0621 02:34:06.113287 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.113320 kubelet[2649]: E0621 02:34:06.113299 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.113440 kubelet[2649]: E0621 02:34:06.113425 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.113440 kubelet[2649]: W0621 02:34:06.113438 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.113619 kubelet[2649]: E0621 02:34:06.113453 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.113709 kubelet[2649]: E0621 02:34:06.113692 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.113759 kubelet[2649]: W0621 02:34:06.113747 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.113819 kubelet[2649]: E0621 02:34:06.113807 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.113999 kubelet[2649]: E0621 02:34:06.113983 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.113999 kubelet[2649]: W0621 02:34:06.113995 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.114067 kubelet[2649]: E0621 02:34:06.114009 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.114317 kubelet[2649]: E0621 02:34:06.114304 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.114317 kubelet[2649]: W0621 02:34:06.114316 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.114375 kubelet[2649]: E0621 02:34:06.114330 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.114521 kubelet[2649]: E0621 02:34:06.114496 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.114521 kubelet[2649]: W0621 02:34:06.114508 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.114521 kubelet[2649]: E0621 02:34:06.114517 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 02:34:06.114807 kubelet[2649]: E0621 02:34:06.114793 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.114807 kubelet[2649]: W0621 02:34:06.114805 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.114902 kubelet[2649]: E0621 02:34:06.114813 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.119878 kubelet[2649]: E0621 02:34:06.119829 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 02:34:06.119878 kubelet[2649]: W0621 02:34:06.119873 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 02:34:06.119950 kubelet[2649]: E0621 02:34:06.119885 2649 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 02:34:06.336906 containerd[1524]: time="2025-06-21T02:34:06.336767487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:06.337562 containerd[1524]: time="2025-06-21T02:34:06.337482487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 21 02:34:06.338816 containerd[1524]: time="2025-06-21T02:34:06.338779645Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:06.344701 containerd[1524]: time="2025-06-21T02:34:06.344664759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:06.345327 containerd[1524]: time="2025-06-21T02:34:06.345294198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 863.154377ms" Jun 21 02:34:06.345327 containerd[1524]: time="2025-06-21T02:34:06.345322358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 21 02:34:06.347815 containerd[1524]: time="2025-06-21T02:34:06.347783195Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 02:34:06.378072 containerd[1524]: time="2025-06-21T02:34:06.378022122Z" level=info msg="Container e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4: 
CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:06.384181 containerd[1524]: time="2025-06-21T02:34:06.384143636Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\"" Jun 21 02:34:06.384743 containerd[1524]: time="2025-06-21T02:34:06.384711915Z" level=info msg="StartContainer for \"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\"" Jun 21 02:34:06.386043 containerd[1524]: time="2025-06-21T02:34:06.386010553Z" level=info msg="connecting to shim e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4" address="unix:///run/containerd/s/0df201aa76d113023e74863f5f9fe5b33acd7e23939b717232c26f19aa0d165e" protocol=ttrpc version=3 Jun 21 02:34:06.402992 systemd[1]: Started cri-containerd-e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4.scope - libcontainer container e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4. Jun 21 02:34:06.434789 containerd[1524]: time="2025-06-21T02:34:06.434583500Z" level=info msg="StartContainer for \"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\" returns successfully" Jun 21 02:34:06.464009 systemd[1]: cri-containerd-e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4.scope: Deactivated successfully. Jun 21 02:34:06.490183 containerd[1524]: time="2025-06-21T02:34:06.490146999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\" id:\"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\" pid:3328 exited_at:{seconds:1750473246 nanos:481405969}" Jun 21 02:34:06.490911 containerd[1524]: time="2025-06-21T02:34:06.490882399Z" level=info msg="received exit event container_id:\"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\" id:\"e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4\" pid:3328 exited_at:{seconds:1750473246 nanos:481405969}" Jun 21 02:34:06.527331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e396cf2072f03f4fcaec772a54551e93ca42bb958acaa501c98299aaf61bb1e4-rootfs.mount: Deactivated successfully. 
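The repeated "unexpected end of JSON input" kubelet errors above come from FlexVolume probing: the kubelet execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument and parses its stdout as JSON, but the binary is not installed yet (installing it is exactly what the flexvol-driver container above does), so the output is empty. A minimal stand-in sketch of the FlexVolume call convention the kubelet expects, not the real Calico uds driver:

```go
// flexvoldriver.go - a minimal stand-in illustrating the FlexVolume call
// convention: every invocation must print a JSON status object to stdout,
// which is why an empty output above is reported as
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func emit(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		emit(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Advertising attach=false tells the kubelet to skip attach/detach calls.
		emit(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		emit(driverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}
```

Once a driver at that path prints a well-formed status object, the plugin-probe errors stop.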
Jun 21 02:34:06.967575 kubelet[2649]: E0621 02:34:06.967451 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p5zs" podUID="beef3fba-32a8-4ff1-9b42-beb45fd36a99" Jun 21 02:34:07.021069 kubelet[2649]: I0621 02:34:07.021039 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:07.022628 containerd[1524]: time="2025-06-21T02:34:07.021948258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 02:34:08.967415 kubelet[2649]: E0621 02:34:08.966980 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p5zs" podUID="beef3fba-32a8-4ff1-9b42-beb45fd36a99" Jun 21 02:34:09.679436 containerd[1524]: time="2025-06-21T02:34:09.679385039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:09.679902 containerd[1524]: time="2025-06-21T02:34:09.679864838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 21 02:34:09.680732 containerd[1524]: time="2025-06-21T02:34:09.680708438Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:09.682546 containerd[1524]: time="2025-06-21T02:34:09.682507476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:09.683229 containerd[1524]: time="2025-06-21T02:34:09.683194715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 2.660556537s" Jun 21 02:34:09.683317 containerd[1524]: time="2025-06-21T02:34:09.683299355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 21 02:34:09.686029 containerd[1524]: time="2025-06-21T02:34:09.685987433Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 02:34:09.692869 containerd[1524]: time="2025-06-21T02:34:09.692685507Z" level=info msg="Container 435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:09.701512 containerd[1524]: time="2025-06-21T02:34:09.701464779Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\"" Jun 21 02:34:09.704913 containerd[1524]: 
time="2025-06-21T02:34:09.704706056Z" level=info msg="StartContainer for \"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\"" Jun 21 02:34:09.706372 containerd[1524]: time="2025-06-21T02:34:09.706341614Z" level=info msg="connecting to shim 435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604" address="unix:///run/containerd/s/0df201aa76d113023e74863f5f9fe5b33acd7e23939b717232c26f19aa0d165e" protocol=ttrpc version=3 Jun 21 02:34:09.726015 systemd[1]: Started cri-containerd-435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604.scope - libcontainer container 435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604. Jun 21 02:34:09.784022 containerd[1524]: time="2025-06-21T02:34:09.783984464Z" level=info msg="StartContainer for \"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\" returns successfully" Jun 21 02:34:10.269666 systemd[1]: cri-containerd-435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604.scope: Deactivated successfully. Jun 21 02:34:10.270732 systemd[1]: cri-containerd-435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604.scope: Consumed 455ms CPU time, 176.2M memory peak, 3.3M read from disk, 165.8M written to disk. Jun 21 02:34:10.272145 containerd[1524]: time="2025-06-21T02:34:10.272110119Z" level=info msg="received exit event container_id:\"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\" id:\"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\" pid:3389 exited_at:{seconds:1750473250 nanos:271824079}" Jun 21 02:34:10.272332 containerd[1524]: time="2025-06-21T02:34:10.272202919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\" id:\"435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604\" pid:3389 exited_at:{seconds:1750473250 nanos:271824079}" Jun 21 02:34:10.290182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-435a7583b4d3c368387ac33c2670954ed3e9fa9362b367e3ee629017830e2604-rootfs.mount: Deactivated successfully. Jun 21 02:34:10.309451 kubelet[2649]: I0621 02:34:10.309403 2649 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 02:34:10.434293 systemd[1]: Created slice kubepods-burstable-pod7bf5741e_a7ae_4f5e_8671_7b89a6c87403.slice - libcontainer container kubepods-burstable-pod7bf5741e_a7ae_4f5e_8671_7b89a6c87403.slice. 
Jun 21 02:34:10.445299 kubelet[2649]: I0621 02:34:10.445244 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5fd053d8-590f-4c15-99fe-03cf45c58e4e-calico-apiserver-certs\") pod \"calico-apiserver-fb8467986-zzvjw\" (UID: \"5fd053d8-590f-4c15-99fe-03cf45c58e4e\") " pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" Jun 21 02:34:10.445299 kubelet[2649]: I0621 02:34:10.445305 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdbw\" (UniqueName: \"kubernetes.io/projected/7e4612c9-67f5-4c6f-b9f5-c619b405d0fb-kube-api-access-8zdbw\") pod \"coredns-7c65d6cfc9-qpplk\" (UID: \"7e4612c9-67f5-4c6f-b9f5-c619b405d0fb\") " pod="kube-system/coredns-7c65d6cfc9-qpplk" Jun 21 02:34:10.445461 kubelet[2649]: I0621 02:34:10.445332 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljn9\" (UniqueName: \"kubernetes.io/projected/b9a64c74-9e1c-4021-9017-45e1ca7f0ee0-kube-api-access-lljn9\") pod \"calico-apiserver-fb8467986-x5j49\" (UID: \"b9a64c74-9e1c-4021-9017-45e1ca7f0ee0\") " pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" Jun 21 02:34:10.445461 kubelet[2649]: I0621 02:34:10.445354 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9a64c74-9e1c-4021-9017-45e1ca7f0ee0-calico-apiserver-certs\") pod \"calico-apiserver-fb8467986-x5j49\" (UID: \"b9a64c74-9e1c-4021-9017-45e1ca7f0ee0\") " pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" Jun 21 02:34:10.445461 kubelet[2649]: I0621 02:34:10.445379 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-ca-bundle\") pod \"whisker-854c864d55-kcsgm\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " pod="calico-system/whisker-854c864d55-kcsgm" Jun 21 02:34:10.445461 kubelet[2649]: I0621 02:34:10.445404 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkk44\" (UniqueName: \"kubernetes.io/projected/babe5487-2d59-4548-a174-e937458943cc-kube-api-access-hkk44\") pod \"calico-kube-controllers-67c745b8b4-6j8z7\" (UID: \"babe5487-2d59-4548-a174-e937458943cc\") " pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" Jun 21 02:34:10.445461 kubelet[2649]: I0621 02:34:10.445435 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz9vl\" (UniqueName: \"kubernetes.io/projected/5fd053d8-590f-4c15-99fe-03cf45c58e4e-kube-api-access-dz9vl\") pod \"calico-apiserver-fb8467986-zzvjw\" (UID: \"5fd053d8-590f-4c15-99fe-03cf45c58e4e\") " pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" Jun 21 02:34:10.445569 kubelet[2649]: I0621 02:34:10.445466 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf82a03-d334-4a00-9e9a-c652b874cc34-config\") pod \"goldmane-dc7b455cb-dbmg9\" (UID: \"ccf82a03-d334-4a00-9e9a-c652b874cc34\") " pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:10.445569 kubelet[2649]: I0621 02:34:10.445495 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccf82a03-d334-4a00-9e9a-c652b874cc34-goldmane-ca-bundle\") pod \"goldmane-dc7b455cb-dbmg9\" (UID: \"ccf82a03-d334-4a00-9e9a-c652b874cc34\") " pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:10.445569 kubelet[2649]: I0621 02:34:10.445514 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mp6\" (UniqueName: \"kubernetes.io/projected/ccf82a03-d334-4a00-9e9a-c652b874cc34-kube-api-access-d8mp6\") pod \"goldmane-dc7b455cb-dbmg9\" (UID: \"ccf82a03-d334-4a00-9e9a-c652b874cc34\") " pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:10.445569 kubelet[2649]: I0621 02:34:10.445538 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-backend-key-pair\") pod \"whisker-854c864d55-kcsgm\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " pod="calico-system/whisker-854c864d55-kcsgm" Jun 21 02:34:10.445569 kubelet[2649]: I0621 02:34:10.445560 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bf5741e-a7ae-4f5e-8671-7b89a6c87403-config-volume\") pod \"coredns-7c65d6cfc9-wrbxh\" (UID: \"7bf5741e-a7ae-4f5e-8671-7b89a6c87403\") " pod="kube-system/coredns-7c65d6cfc9-wrbxh" Jun 21 02:34:10.445672 kubelet[2649]: I0621 02:34:10.445579 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446mx\" (UniqueName: \"kubernetes.io/projected/7bf5741e-a7ae-4f5e-8671-7b89a6c87403-kube-api-access-446mx\") pod \"coredns-7c65d6cfc9-wrbxh\" (UID: \"7bf5741e-a7ae-4f5e-8671-7b89a6c87403\") " pod="kube-system/coredns-7c65d6cfc9-wrbxh" Jun 21 02:34:10.445672 kubelet[2649]: I0621 02:34:10.445599 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ccf82a03-d334-4a00-9e9a-c652b874cc34-goldmane-key-pair\") pod \"goldmane-dc7b455cb-dbmg9\" (UID: \"ccf82a03-d334-4a00-9e9a-c652b874cc34\") " pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:10.445672 kubelet[2649]: I0621 02:34:10.445618 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9q2\" (UniqueName: \"kubernetes.io/projected/adca60d9-0d17-48e4-af8a-8a78316fc978-kube-api-access-2j9q2\") pod \"whisker-854c864d55-kcsgm\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " pod="calico-system/whisker-854c864d55-kcsgm" Jun 21 02:34:10.446611 kubelet[2649]: I0621 02:34:10.445856 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4612c9-67f5-4c6f-b9f5-c619b405d0fb-config-volume\") pod \"coredns-7c65d6cfc9-qpplk\" (UID: \"7e4612c9-67f5-4c6f-b9f5-c619b405d0fb\") " pod="kube-system/coredns-7c65d6cfc9-qpplk" Jun 21 02:34:10.446611 kubelet[2649]: I0621 02:34:10.445904 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/babe5487-2d59-4548-a174-e937458943cc-tigera-ca-bundle\") pod \"calico-kube-controllers-67c745b8b4-6j8z7\" (UID: \"babe5487-2d59-4548-a174-e937458943cc\") " pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" Jun 21 
02:34:10.451285 systemd[1]: Created slice kubepods-burstable-pod7e4612c9_67f5_4c6f_b9f5_c619b405d0fb.slice - libcontainer container kubepods-burstable-pod7e4612c9_67f5_4c6f_b9f5_c619b405d0fb.slice. Jun 21 02:34:10.458407 systemd[1]: Created slice kubepods-besteffort-podbabe5487_2d59_4548_a174_e937458943cc.slice - libcontainer container kubepods-besteffort-podbabe5487_2d59_4548_a174_e937458943cc.slice. Jun 21 02:34:10.463446 systemd[1]: Created slice kubepods-besteffort-podb9a64c74_9e1c_4021_9017_45e1ca7f0ee0.slice - libcontainer container kubepods-besteffort-podb9a64c74_9e1c_4021_9017_45e1ca7f0ee0.slice. Jun 21 02:34:10.469889 systemd[1]: Created slice kubepods-besteffort-pod5fd053d8_590f_4c15_99fe_03cf45c58e4e.slice - libcontainer container kubepods-besteffort-pod5fd053d8_590f_4c15_99fe_03cf45c58e4e.slice. Jun 21 02:34:10.475754 systemd[1]: Created slice kubepods-besteffort-podccf82a03_d334_4a00_9e9a_c652b874cc34.slice - libcontainer container kubepods-besteffort-podccf82a03_d334_4a00_9e9a_c652b874cc34.slice. Jun 21 02:34:10.482548 systemd[1]: Created slice kubepods-besteffort-podadca60d9_0d17_48e4_af8a_8a78316fc978.slice - libcontainer container kubepods-besteffort-podadca60d9_0d17_48e4_af8a_8a78316fc978.slice. Jun 21 02:34:10.746928 containerd[1524]: time="2025-06-21T02:34:10.746801358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrbxh,Uid:7bf5741e-a7ae-4f5e-8671-7b89a6c87403,Namespace:kube-system,Attempt:0,}" Jun 21 02:34:10.755723 containerd[1524]: time="2025-06-21T02:34:10.755676150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qpplk,Uid:7e4612c9-67f5-4c6f-b9f5-c619b405d0fb,Namespace:kube-system,Attempt:0,}" Jun 21 02:34:10.797580 containerd[1524]: time="2025-06-21T02:34:10.790509281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-854c864d55-kcsgm,Uid:adca60d9-0d17-48e4-af8a-8a78316fc978,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:10.797580 containerd[1524]: time="2025-06-21T02:34:10.791061000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-zzvjw,Uid:5fd053d8-590f-4c15-99fe-03cf45c58e4e,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:34:10.797580 containerd[1524]: time="2025-06-21T02:34:10.791242080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c745b8b4-6j8z7,Uid:babe5487-2d59-4548-a174-e937458943cc,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:10.797580 containerd[1524]: time="2025-06-21T02:34:10.791546040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-dbmg9,Uid:ccf82a03-d334-4a00-9e9a-c652b874cc34,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:10.797580 containerd[1524]: time="2025-06-21T02:34:10.791721160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-x5j49,Uid:b9a64c74-9e1c-4021-9017-45e1ca7f0ee0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:34:10.978152 systemd[1]: Created slice kubepods-besteffort-podbeef3fba_32a8_4ff1_9b42_beb45fd36a99.slice - libcontainer container kubepods-besteffort-podbeef3fba_32a8_4ff1_9b42_beb45fd36a99.slice. 
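The slice names created above follow the kubelet's systemd cgroup driver convention: the pod's QoS class plus its UID, with dashes in the UID rewritten to underscores. An illustrative reconstruction (not the actual kubelet code):

```go
// Illustrative reconstruction of how the kubepods-*-pod<uid>.slice names
// above are derived from a pod's QoS class and UID.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	// Dashes in the UID become underscores because systemd uses '-' as a
	// hierarchy separator in slice names.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-besteffort-podbeef3fba_32a8_4ff1_9b42_beb45fd36a99.slice" above.
	fmt.Println(podSliceName("besteffort", "beef3fba-32a8-4ff1-9b42-beb45fd36a99"))
}
```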
Jun 21 02:34:10.992291 containerd[1524]: time="2025-06-21T02:34:10.988641833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p5zs,Uid:beef3fba-32a8-4ff1-9b42-beb45fd36a99,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:11.049103 containerd[1524]: time="2025-06-21T02:34:11.048958585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 21 02:34:11.157668 containerd[1524]: time="2025-06-21T02:34:11.156000900Z" level=error msg="Failed to destroy network for sandbox \"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.158266 containerd[1524]: time="2025-06-21T02:34:11.158224498Z" level=error msg="Failed to destroy network for sandbox \"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.161015 containerd[1524]: time="2025-06-21T02:34:11.160945656Z" level=error msg="Failed to destroy network for sandbox \"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.162637 containerd[1524]: time="2025-06-21T02:34:11.162592934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qpplk,Uid:7e4612c9-67f5-4c6f-b9f5-c619b405d0fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.165468 kubelet[2649]: E0621 02:34:11.164738 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.165593 containerd[1524]: time="2025-06-21T02:34:11.165072012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrbxh,Uid:7bf5741e-a7ae-4f5e-8671-7b89a6c87403,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.165654 kubelet[2649]: E0621 02:34:11.165482 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 21 02:34:11.165654 kubelet[2649]: E0621 02:34:11.165543 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wrbxh" Jun 21 02:34:11.165654 kubelet[2649]: E0621 02:34:11.165561 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wrbxh" Jun 21 02:34:11.165723 kubelet[2649]: E0621 02:34:11.165599 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wrbxh_kube-system(7bf5741e-a7ae-4f5e-8671-7b89a6c87403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wrbxh_kube-system(7bf5741e-a7ae-4f5e-8671-7b89a6c87403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1fd5f71f7c8ad256e74e687080f995a0bf0cc6163d52df5fe1b0a186cd29d38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wrbxh" podUID="7bf5741e-a7ae-4f5e-8671-7b89a6c87403" Jun 21 02:34:11.166426 containerd[1524]: time="2025-06-21T02:34:11.166372491Z" level=error msg="Failed to destroy network for sandbox \"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.166918 kubelet[2649]: E0621 02:34:11.166879 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qpplk" Jun 21 02:34:11.166988 kubelet[2649]: E0621 02:34:11.166935 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qpplk" Jun 21 02:34:11.167018 kubelet[2649]: E0621 02:34:11.166971 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qpplk_kube-system(7e4612c9-67f5-4c6f-b9f5-c619b405d0fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-qpplk_kube-system(7e4612c9-67f5-4c6f-b9f5-c619b405d0fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d901275da0e72afe03faaf25ae80550d275b4f4f1a10aedea7da9fd64b2fd68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qpplk" podUID="7e4612c9-67f5-4c6f-b9f5-c619b405d0fb" Jun 21 02:34:11.167449 containerd[1524]: time="2025-06-21T02:34:11.167410651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c745b8b4-6j8z7,Uid:babe5487-2d59-4548-a174-e937458943cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.167819 kubelet[2649]: E0621 02:34:11.167559 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.167819 kubelet[2649]: E0621 02:34:11.167592 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" Jun 21 02:34:11.167819 kubelet[2649]: E0621 02:34:11.167622 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" Jun 21 02:34:11.168535 kubelet[2649]: E0621 02:34:11.167661 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67c745b8b4-6j8z7_calico-system(babe5487-2d59-4548-a174-e937458943cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67c745b8b4-6j8z7_calico-system(babe5487-2d59-4548-a174-e937458943cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53b9289b298edcae7cc46380b1eb44c36973e528c14a8b399ece0905c60ccfd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" podUID="babe5487-2d59-4548-a174-e937458943cc" Jun 21 02:34:11.168627 containerd[1524]: time="2025-06-21T02:34:11.168303570Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-fb8467986-x5j49,Uid:b9a64c74-9e1c-4021-9017-45e1ca7f0ee0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.168925 kubelet[2649]: E0621 02:34:11.168886 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.168991 kubelet[2649]: E0621 02:34:11.168935 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" Jun 21 02:34:11.168991 kubelet[2649]: E0621 02:34:11.168951 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" Jun 21 02:34:11.168991 kubelet[2649]: E0621 02:34:11.168978 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fb8467986-x5j49_calico-apiserver(b9a64c74-9e1c-4021-9017-45e1ca7f0ee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fb8467986-x5j49_calico-apiserver(b9a64c74-9e1c-4021-9017-45e1ca7f0ee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94fb03a02ec7aa8c3c33a0a26317109c5dde662017b7da5dfd5933f165269af2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" podUID="b9a64c74-9e1c-4021-9017-45e1ca7f0ee0" Jun 21 02:34:11.172962 containerd[1524]: time="2025-06-21T02:34:11.172899166Z" level=error msg="Failed to destroy network for sandbox \"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.173953 containerd[1524]: time="2025-06-21T02:34:11.173897565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-zzvjw,Uid:5fd053d8-590f-4c15-99fe-03cf45c58e4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.174374 kubelet[2649]: E0621 02:34:11.174156 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.174374 kubelet[2649]: E0621 02:34:11.174206 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" Jun 21 02:34:11.174374 kubelet[2649]: E0621 02:34:11.174221 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" Jun 21 02:34:11.174473 kubelet[2649]: E0621 02:34:11.174269 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fb8467986-zzvjw_calico-apiserver(5fd053d8-590f-4c15-99fe-03cf45c58e4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fb8467986-zzvjw_calico-apiserver(5fd053d8-590f-4c15-99fe-03cf45c58e4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b1e61084548af1948c93a8b76a42a904b5be3176820d0da1b815f77eb22b1b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" podUID="5fd053d8-590f-4c15-99fe-03cf45c58e4e" Jun 21 02:34:11.176704 containerd[1524]: time="2025-06-21T02:34:11.176653483Z" level=error msg="Failed to destroy network for sandbox \"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.177259 containerd[1524]: time="2025-06-21T02:34:11.177234123Z" level=error msg="Failed to destroy network for sandbox \"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.177762 containerd[1524]: time="2025-06-21T02:34:11.177631923Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-854c864d55-kcsgm,Uid:adca60d9-0d17-48e4-af8a-8a78316fc978,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.178034 kubelet[2649]: E0621 02:34:11.177988 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.178106 kubelet[2649]: E0621 02:34:11.178035 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-854c864d55-kcsgm" Jun 21 02:34:11.178106 kubelet[2649]: E0621 02:34:11.178079 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-854c864d55-kcsgm" Jun 21 02:34:11.178374 kubelet[2649]: E0621 02:34:11.178117 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-854c864d55-kcsgm_calico-system(adca60d9-0d17-48e4-af8a-8a78316fc978)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-854c864d55-kcsgm_calico-system(adca60d9-0d17-48e4-af8a-8a78316fc978)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"012c9e76017bc266f2071e2f2ddc6318749531fdd236a66a27960a94870610d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-854c864d55-kcsgm" podUID="adca60d9-0d17-48e4-af8a-8a78316fc978" Jun 21 02:34:11.178555 containerd[1524]: time="2025-06-21T02:34:11.178520322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-dbmg9,Uid:ccf82a03-d334-4a00-9e9a-c652b874cc34,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.178736 containerd[1524]: time="2025-06-21T02:34:11.178598522Z" level=error msg="Failed to destroy network for sandbox \"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.178917 kubelet[2649]: E0621 02:34:11.178682 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.178917 kubelet[2649]: E0621 02:34:11.178711 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:11.178917 kubelet[2649]: E0621 02:34:11.178729 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-dbmg9" Jun 21 02:34:11.178996 kubelet[2649]: E0621 02:34:11.178782 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-dbmg9_calico-system(ccf82a03-d334-4a00-9e9a-c652b874cc34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-dbmg9_calico-system(ccf82a03-d334-4a00-9e9a-c652b874cc34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9f884883078016b8b57554ee1d992c76dea7c014e8449af52fa98cde34d5ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-dbmg9" podUID="ccf82a03-d334-4a00-9e9a-c652b874cc34" Jun 21 02:34:11.179990 containerd[1524]: time="2025-06-21T02:34:11.179945521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p5zs,Uid:beef3fba-32a8-4ff1-9b42-beb45fd36a99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.180202 kubelet[2649]: E0621 02:34:11.180077 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 02:34:11.180202 kubelet[2649]: E0621 02:34:11.180107 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:11.180202 kubelet[2649]: E0621 02:34:11.180121 2649 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2p5zs" Jun 21 02:34:11.180323 kubelet[2649]: E0621 02:34:11.180151 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2p5zs_calico-system(beef3fba-32a8-4ff1-9b42-beb45fd36a99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2p5zs_calico-system(beef3fba-32a8-4ff1-9b42-beb45fd36a99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a18d0461f78fafc88e95ff149dbcaa8aaff2016a28f771c85177264e25957fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2p5zs" podUID="beef3fba-32a8-4ff1-9b42-beb45fd36a99" Jun 21 02:34:11.696198 systemd[1]: run-netns-cni\x2d0886b14b\x2da920\x2dab21\x2d8e3d\x2d80776e883820.mount: Deactivated successfully. Jun 21 02:34:11.696284 systemd[1]: run-netns-cni\x2df536f16d\x2d11bf\x2dce19\x2d6259\x2dfe491a6e8f79.mount: Deactivated successfully. Jun 21 02:34:11.696339 systemd[1]: run-netns-cni\x2da0d4356d\x2dc11f\x2d3a53\x2dc80d\x2d78ddb7a06731.mount: Deactivated successfully. Jun 21 02:34:11.696383 systemd[1]: run-netns-cni\x2d2d5e9aa6\x2dce5d\x2d42fa\x2d77b7\x2de2cf9942690e.mount: Deactivated successfully. Jun 21 02:34:13.624726 kubelet[2649]: I0621 02:34:13.624537 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:18.585562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400539622.mount: Deactivated successfully. 
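All of the sandbox failures above share one root cause: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, a file that calico-node writes only once it is running, and at this point the calico/node image is still being pulled. A small sketch of the same check, useful for confirming when the file appears:

```go
// The same check the failing CNI ADD/DEL calls are making: read
// /var/lib/calico/nodename, which calico-node creates once it is up.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the error seen above until calico-node has started.
		fmt.Printf("%s not ready: %v\n", nodenameFile, err)
		return
	}
	fmt.Printf("CNI will use node name %q\n", string(b))
}
```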
Jun 21 02:34:18.785078 containerd[1524]: time="2025-06-21T02:34:18.785013534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 21 02:34:18.796716 containerd[1524]: time="2025-06-21T02:34:18.796669968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 7.747669623s" Jun 21 02:34:18.796716 containerd[1524]: time="2025-06-21T02:34:18.796713728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 21 02:34:18.804029 containerd[1524]: time="2025-06-21T02:34:18.803976485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:18.804606 containerd[1524]: time="2025-06-21T02:34:18.804583444Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:18.805097 containerd[1524]: time="2025-06-21T02:34:18.805075324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:18.812932 containerd[1524]: time="2025-06-21T02:34:18.812440880Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 02:34:18.819638 containerd[1524]: time="2025-06-21T02:34:18.819597957Z" level=info msg="Container 5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:18.840501 containerd[1524]: time="2025-06-21T02:34:18.840387546Z" level=info msg="CreateContainer within sandbox \"d2287f04fd1fa275d73fdae309973d4f0229c4153d583002cb78a8937d50aac3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\"" Jun 21 02:34:18.841476 containerd[1524]: time="2025-06-21T02:34:18.841447826Z" level=info msg="StartContainer for \"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\"" Jun 21 02:34:18.843746 containerd[1524]: time="2025-06-21T02:34:18.843191065Z" level=info msg="connecting to shim 5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35" address="unix:///run/containerd/s/0df201aa76d113023e74863f5f9fe5b33acd7e23939b717232c26f19aa0d165e" protocol=ttrpc version=3 Jun 21 02:34:18.862718 systemd[1]: Started cri-containerd-5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35.scope - libcontainer container 5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35. 
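The "connecting to shim ... protocol=ttrpc version=3" entries refer to per-sandbox unix sockets under /run/containerd/s/. The sketch below only dials the socket named in the log to confirm it accepts connections; it attempts no ttrpc handshake and is not containerd client code:

```go
// Dial-only check of the shim socket path copied from the log entry above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/run/containerd/s/0df201aa76d113023e74863f5f9fe5b33acd7e23939b717232c26f19aa0d165e"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim socket is accepting connections")
}
```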
Jun 21 02:34:18.917645 containerd[1524]: time="2025-06-21T02:34:18.917600387Z" level=info msg="StartContainer for \"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\" returns successfully" Jun 21 02:34:19.109861 kubelet[2649]: I0621 02:34:19.109729 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jchj4" podStartSLOduration=0.938433773 podStartE2EDuration="16.109706414s" podCreationTimestamp="2025-06-21 02:34:03 +0000 UTC" firstStartedPulling="2025-06-21 02:34:03.626062327 +0000 UTC m=+20.751196410" lastFinishedPulling="2025-06-21 02:34:18.797334968 +0000 UTC m=+35.922469051" observedRunningTime="2025-06-21 02:34:19.108892494 +0000 UTC m=+36.234026577" watchObservedRunningTime="2025-06-21 02:34:19.109706414 +0000 UTC m=+36.234840497" Jun 21 02:34:19.157941 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 21 02:34:19.158072 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 21 02:34:19.299332 kubelet[2649]: I0621 02:34:19.299296 2649 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-ca-bundle\") pod \"adca60d9-0d17-48e4-af8a-8a78316fc978\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " Jun 21 02:34:19.299332 kubelet[2649]: I0621 02:34:19.299338 2649 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-backend-key-pair\") pod \"adca60d9-0d17-48e4-af8a-8a78316fc978\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " Jun 21 02:34:19.299552 kubelet[2649]: I0621 02:34:19.299376 2649 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j9q2\" (UniqueName: \"kubernetes.io/projected/adca60d9-0d17-48e4-af8a-8a78316fc978-kube-api-access-2j9q2\") pod \"adca60d9-0d17-48e4-af8a-8a78316fc978\" (UID: \"adca60d9-0d17-48e4-af8a-8a78316fc978\") " Jun 21 02:34:19.315829 kubelet[2649]: I0621 02:34:19.315762 2649 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "adca60d9-0d17-48e4-af8a-8a78316fc978" (UID: "adca60d9-0d17-48e4-af8a-8a78316fc978"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 02:34:19.317250 kubelet[2649]: I0621 02:34:19.316878 2649 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adca60d9-0d17-48e4-af8a-8a78316fc978-kube-api-access-2j9q2" (OuterVolumeSpecName: "kube-api-access-2j9q2") pod "adca60d9-0d17-48e4-af8a-8a78316fc978" (UID: "adca60d9-0d17-48e4-af8a-8a78316fc978"). InnerVolumeSpecName "kube-api-access-2j9q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 02:34:19.323226 kubelet[2649]: I0621 02:34:19.323170 2649 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "adca60d9-0d17-48e4-af8a-8a78316fc978" (UID: "adca60d9-0d17-48e4-af8a-8a78316fc978"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 21 02:34:19.400877 kubelet[2649]: I0621 02:34:19.400670 2649 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j9q2\" (UniqueName: \"kubernetes.io/projected/adca60d9-0d17-48e4-af8a-8a78316fc978-kube-api-access-2j9q2\") on node \"localhost\" DevicePath \"\"" Jun 21 02:34:19.400877 kubelet[2649]: I0621 02:34:19.400704 2649 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 21 02:34:19.400877 kubelet[2649]: I0621 02:34:19.400714 2649 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adca60d9-0d17-48e4-af8a-8a78316fc978-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 21 02:34:19.589896 systemd[1]: var-lib-kubelet-pods-adca60d9\x2d0d17\x2d48e4\x2daf8a\x2d8a78316fc978-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2j9q2.mount: Deactivated successfully. Jun 21 02:34:19.589988 systemd[1]: var-lib-kubelet-pods-adca60d9\x2d0d17\x2d48e4\x2daf8a\x2d8a78316fc978-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 02:34:20.078210 kubelet[2649]: I0621 02:34:20.078143 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:20.083007 systemd[1]: Removed slice kubepods-besteffort-podadca60d9_0d17_48e4_af8a_8a78316fc978.slice - libcontainer container kubepods-besteffort-podadca60d9_0d17_48e4_af8a_8a78316fc978.slice. Jun 21 02:34:20.126224 systemd[1]: Created slice kubepods-besteffort-pod9fd1041c_22f6_4229_8566_0db87c1812d3.slice - libcontainer container kubepods-besteffort-pod9fd1041c_22f6_4229_8566_0db87c1812d3.slice. 
Jun 21 02:34:20.204882 kubelet[2649]: I0621 02:34:20.204816 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqpg\" (UniqueName: \"kubernetes.io/projected/9fd1041c-22f6-4229-8566-0db87c1812d3-kube-api-access-dgqpg\") pod \"whisker-7dd4d4bf88-fb2gb\" (UID: \"9fd1041c-22f6-4229-8566-0db87c1812d3\") " pod="calico-system/whisker-7dd4d4bf88-fb2gb" Jun 21 02:34:20.205563 kubelet[2649]: I0621 02:34:20.204892 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd1041c-22f6-4229-8566-0db87c1812d3-whisker-backend-key-pair\") pod \"whisker-7dd4d4bf88-fb2gb\" (UID: \"9fd1041c-22f6-4229-8566-0db87c1812d3\") " pod="calico-system/whisker-7dd4d4bf88-fb2gb" Jun 21 02:34:20.205563 kubelet[2649]: I0621 02:34:20.204937 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd1041c-22f6-4229-8566-0db87c1812d3-whisker-ca-bundle\") pod \"whisker-7dd4d4bf88-fb2gb\" (UID: \"9fd1041c-22f6-4229-8566-0db87c1812d3\") " pod="calico-system/whisker-7dd4d4bf88-fb2gb" Jun 21 02:34:20.431046 containerd[1524]: time="2025-06-21T02:34:20.430937361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd4d4bf88-fb2gb,Uid:9fd1041c-22f6-4229-8566-0db87c1812d3,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:20.778216 systemd-networkd[1433]: calieb705240a85: Link UP Jun 21 02:34:20.778463 systemd-networkd[1433]: calieb705240a85: Gained carrier Jun 21 02:34:20.790781 containerd[1524]: 2025-06-21 02:34:20.512 [INFO][3787] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 02:34:20.790781 containerd[1524]: 2025-06-21 02:34:20.596 [INFO][3787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0 whisker-7dd4d4bf88- calico-system 9fd1041c-22f6-4229-8566-0db87c1812d3 901 0 2025-06-21 02:34:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dd4d4bf88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dd4d4bf88-fb2gb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb705240a85 [] [] }} ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-" Jun 21 02:34:20.790781 containerd[1524]: 2025-06-21 02:34:20.596 [INFO][3787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.790781 containerd[1524]: 2025-06-21 02:34:20.732 [INFO][3875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" HandleID="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Workload="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.732 [INFO][3875] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" HandleID="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Workload="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dd4d4bf88-fb2gb", "timestamp":"2025-06-21 02:34:20.732219308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.732 [INFO][3875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.732 [INFO][3875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.732 [INFO][3875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.745 [INFO][3875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" host="localhost" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.751 [INFO][3875] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.755 [INFO][3875] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.757 [INFO][3875] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.759 [INFO][3875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:20.791071 containerd[1524]: 2025-06-21 02:34:20.759 [INFO][3875] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" host="localhost" Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.760 [INFO][3875] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.764 [INFO][3875] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" host="localhost" Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.769 [INFO][3875] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" host="localhost" Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.769 [INFO][3875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" host="localhost" Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.769 [INFO][3875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:20.791280 containerd[1524]: 2025-06-21 02:34:20.769 [INFO][3875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" HandleID="k8s-pod-network.57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Workload="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.791406 containerd[1524]: 2025-06-21 02:34:20.771 [INFO][3787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0", GenerateName:"whisker-7dd4d4bf88-", Namespace:"calico-system", SelfLink:"", UID:"9fd1041c-22f6-4229-8566-0db87c1812d3", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd4d4bf88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dd4d4bf88-fb2gb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb705240a85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:20.791406 containerd[1524]: 2025-06-21 02:34:20.772 [INFO][3787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.791476 containerd[1524]: 2025-06-21 02:34:20.772 [INFO][3787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb705240a85 ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.791476 containerd[1524]: 2025-06-21 02:34:20.778 [INFO][3787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.791518 containerd[1524]: 2025-06-21 02:34:20.778 [INFO][3787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0", GenerateName:"whisker-7dd4d4bf88-", Namespace:"calico-system", SelfLink:"", UID:"9fd1041c-22f6-4229-8566-0db87c1812d3", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd4d4bf88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d", Pod:"whisker-7dd4d4bf88-fb2gb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb705240a85", MAC:"c6:e6:f6:6d:47:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:20.791566 containerd[1524]: 2025-06-21 02:34:20.788 [INFO][3787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" Namespace="calico-system" Pod="whisker-7dd4d4bf88-fb2gb" WorkloadEndpoint="localhost-k8s-whisker--7dd4d4bf88--fb2gb-eth0" Jun 21 02:34:20.895234 containerd[1524]: time="2025-06-21T02:34:20.895134195Z" level=info msg="connecting to shim 57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d" address="unix:///run/containerd/s/7a7d0e1b3f6c546497e6cc11f0c179631f78bc9dbe64b742178fbc50a4cc682b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:20.916522 systemd-networkd[1433]: vxlan.calico: Link UP Jun 21 02:34:20.916528 systemd-networkd[1433]: vxlan.calico: Gained carrier Jun 21 02:34:20.950069 systemd[1]: Started cri-containerd-57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d.scope - libcontainer container 57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d. 
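The endpoint written back to the datastore above carries MAC:"c6:e6:f6:6d:47:b9" for calieb705240a85; like the other cali* MACs that appear later in this log (62:91:20:75:bd:ae, 72:83:b4:6e:4b:02, d2:ad:05:a8:d6:b6) it has the locally-administered bit set, which is what randomly generated veth MACs look like. A quick Go check of that bit, using the address from the log:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC recorded for the host-side veth calieb705240a85 in the endpoint dump above.
	mac, err := net.ParseMAC("c6:e6:f6:6d:47:b9")
	if err != nil {
		panic(err)
	}
	fmt.Println(mac[0]&0x02 != 0) // true: locally-administered bit set
	fmt.Println(mac[0]&0x01 == 0) // true: unicast (group bit clear)
}
```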
Jun 21 02:34:20.960575 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:20.970312 kubelet[2649]: I0621 02:34:20.970155 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adca60d9-0d17-48e4-af8a-8a78316fc978" path="/var/lib/kubelet/pods/adca60d9-0d17-48e4-af8a-8a78316fc978/volumes" Jun 21 02:34:20.979929 containerd[1524]: time="2025-06-21T02:34:20.979883158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd4d4bf88-fb2gb,Uid:9fd1041c-22f6-4229-8566-0db87c1812d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d\"" Jun 21 02:34:20.983309 containerd[1524]: time="2025-06-21T02:34:20.983272156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 02:34:21.891380 containerd[1524]: time="2025-06-21T02:34:21.891308738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:21.891876 containerd[1524]: time="2025-06-21T02:34:21.891848618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 21 02:34:21.892976 containerd[1524]: time="2025-06-21T02:34:21.892864337Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:21.895213 containerd[1524]: time="2025-06-21T02:34:21.895127736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:21.895868 containerd[1524]: time="2025-06-21T02:34:21.895799176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 912.49182ms" Jun 21 02:34:21.895868 containerd[1524]: time="2025-06-21T02:34:21.895845816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 21 02:34:21.898384 containerd[1524]: time="2025-06-21T02:34:21.898330055Z" level=info msg="CreateContainer within sandbox \"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 02:34:21.909308 containerd[1524]: time="2025-06-21T02:34:21.909266610Z" level=info msg="Container 8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:21.916342 containerd[1524]: time="2025-06-21T02:34:21.916303808Z" level=info msg="CreateContainer within sandbox \"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d\"" Jun 21 02:34:21.918040 containerd[1524]: time="2025-06-21T02:34:21.916916007Z" level=info msg="StartContainer for \"8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d\"" Jun 21 02:34:21.918271 containerd[1524]: 
time="2025-06-21T02:34:21.918232287Z" level=info msg="connecting to shim 8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d" address="unix:///run/containerd/s/7a7d0e1b3f6c546497e6cc11f0c179631f78bc9dbe64b742178fbc50a4cc682b" protocol=ttrpc version=3 Jun 21 02:34:21.938008 systemd[1]: Started cri-containerd-8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d.scope - libcontainer container 8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d. Jun 21 02:34:21.968003 containerd[1524]: time="2025-06-21T02:34:21.967952146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qpplk,Uid:7e4612c9-67f5-4c6f-b9f5-c619b405d0fb,Namespace:kube-system,Attempt:0,}" Jun 21 02:34:21.985851 containerd[1524]: time="2025-06-21T02:34:21.985771219Z" level=info msg="StartContainer for \"8c8ce04ffb3257c0813c2b975998c0031d159adf860408eb71bbe4bd661fe14d\" returns successfully" Jun 21 02:34:22.007423 containerd[1524]: time="2025-06-21T02:34:22.007015010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 02:34:22.086787 systemd-networkd[1433]: cali8063eb62f7d: Link UP Jun 21 02:34:22.087289 systemd-networkd[1433]: cali8063eb62f7d: Gained carrier Jun 21 02:34:22.100182 containerd[1524]: 2025-06-21 02:34:22.026 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0 coredns-7c65d6cfc9- kube-system 7e4612c9-67f5-4c6f-b9f5-c619b405d0fb 823 0 2025-06-21 02:33:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qpplk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8063eb62f7d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-" Jun 21 02:34:22.100182 containerd[1524]: 2025-06-21 02:34:22.026 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100182 containerd[1524]: 2025-06-21 02:34:22.049 [INFO][4094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" HandleID="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Workload="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.049 [INFO][4094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" HandleID="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Workload="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qpplk", "timestamp":"2025-06-21 02:34:22.049576833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.049 [INFO][4094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.049 [INFO][4094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.049 [INFO][4094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.058 [INFO][4094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" host="localhost" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.062 [INFO][4094] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.066 [INFO][4094] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.067 [INFO][4094] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.070 [INFO][4094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:22.100376 containerd[1524]: 2025-06-21 02:34:22.070 [INFO][4094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" host="localhost" Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.072 [INFO][4094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27 Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.075 [INFO][4094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" host="localhost" Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.080 [INFO][4094] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" host="localhost" Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.080 [INFO][4094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" host="localhost" Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.080 [INFO][4094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:22.100585 containerd[1524]: 2025-06-21 02:34:22.080 [INFO][4094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" HandleID="k8s-pod-network.b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Workload="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100696 containerd[1524]: 2025-06-21 02:34:22.084 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e4612c9-67f5-4c6f-b9f5-c619b405d0fb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qpplk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8063eb62f7d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:22.100761 containerd[1524]: 2025-06-21 02:34:22.084 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100761 containerd[1524]: 2025-06-21 02:34:22.084 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8063eb62f7d ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100761 containerd[1524]: 2025-06-21 02:34:22.087 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.100821 
containerd[1524]: 2025-06-21 02:34:22.087 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e4612c9-67f5-4c6f-b9f5-c619b405d0fb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27", Pod:"coredns-7c65d6cfc9-qpplk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8063eb62f7d", MAC:"62:91:20:75:bd:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:22.100821 containerd[1524]: 2025-06-21 02:34:22.097 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qpplk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qpplk-eth0" Jun 21 02:34:22.124207 containerd[1524]: time="2025-06-21T02:34:22.124145284Z" level=info msg="connecting to shim b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27" address="unix:///run/containerd/s/4f2fc5f532cf79e9415a81bd5dcdeabaf455e8494d5402b86d37c69328ccb3d2" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:22.150038 systemd[1]: Started cri-containerd-b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27.scope - libcontainer container b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27. 
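The WorkloadEndpointPort entries in the coredns endpoint dumps above print their ports in hex: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (metrics), i.e. the standard CoreDNS service ports. A trivial conversion, just to make the dump easier to read:

```go
package main

import "fmt"

func main() {
	// Ports exactly as they appear (in hex) in the WorkloadEndpointPort dump above.
	ports := map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns -> 53, dns-tcp -> 53, metrics -> 9153
	}
}
```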
Jun 21 02:34:22.160823 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:22.183578 containerd[1524]: time="2025-06-21T02:34:22.183537741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qpplk,Uid:7e4612c9-67f5-4c6f-b9f5-c619b405d0fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27\"" Jun 21 02:34:22.187018 containerd[1524]: time="2025-06-21T02:34:22.186984380Z" level=info msg="CreateContainer within sandbox \"b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 02:34:22.195871 containerd[1524]: time="2025-06-21T02:34:22.195452857Z" level=info msg="Container 55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:22.200164 containerd[1524]: time="2025-06-21T02:34:22.200129495Z" level=info msg="CreateContainer within sandbox \"b3edc344858f2ce40c21aa83f3c73c739ce11d5b930b572904cf707fc79b7e27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8\"" Jun 21 02:34:22.200583 containerd[1524]: time="2025-06-21T02:34:22.200556335Z" level=info msg="StartContainer for \"55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8\"" Jun 21 02:34:22.201445 containerd[1524]: time="2025-06-21T02:34:22.201419454Z" level=info msg="connecting to shim 55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8" address="unix:///run/containerd/s/4f2fc5f532cf79e9415a81bd5dcdeabaf455e8494d5402b86d37c69328ccb3d2" protocol=ttrpc version=3 Jun 21 02:34:22.220995 systemd[1]: Started cri-containerd-55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8.scope - libcontainer container 55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8. Jun 21 02:34:22.259079 containerd[1524]: time="2025-06-21T02:34:22.257596232Z" level=info msg="StartContainer for \"55300dfcf516c2673a4b160b70aa1d2950ea30b6c5e3a3c94774a00f01b155d8\" returns successfully" Jun 21 02:34:22.396993 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jun 21 02:34:22.589036 systemd-networkd[1433]: calieb705240a85: Gained IPv6LL Jun 21 02:34:23.100516 kubelet[2649]: I0621 02:34:23.100460 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qpplk" podStartSLOduration=33.100443746 podStartE2EDuration="33.100443746s" podCreationTimestamp="2025-06-21 02:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:34:23.100363186 +0000 UTC m=+40.225497229" watchObservedRunningTime="2025-06-21 02:34:23.100443746 +0000 UTC m=+40.225577829" Jun 21 02:34:23.243647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350526795.mount: Deactivated successfully. 
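The mount units systemd reports as deactivated (var-lib-containerd-tmpmounts-containerd\x2dmount350526795.mount here, and the kubelet volume mounts earlier) use systemd's unit-name escaping: "/" is written as "-" and reserved bytes as "\xHH". The sketch below is a simplified decoder for reading those names back into paths, assuming that convention; it is not systemd's implementation (systemd-escape --unescape --path does the real thing):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit turns an escaped .mount unit name back into a path:
// "-" stands for "/", and "\xHH" stands for the raw byte 0xHH.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	name = strings.ReplaceAll(name, "-", "/") // safe: the literal "\x2d" contains no "-"
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
		}
		b.WriteByte(name[i])
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount350526795.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount350526795
}
```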
Jun 21 02:34:23.255964 containerd[1524]: time="2025-06-21T02:34:23.255918649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:23.257202 containerd[1524]: time="2025-06-21T02:34:23.257158809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 21 02:34:23.257919 containerd[1524]: time="2025-06-21T02:34:23.257887608Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:23.260454 containerd[1524]: time="2025-06-21T02:34:23.260419448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:23.261472 containerd[1524]: time="2025-06-21T02:34:23.261437127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 1.254384077s" Jun 21 02:34:23.261512 containerd[1524]: time="2025-06-21T02:34:23.261472607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 21 02:34:23.263499 containerd[1524]: time="2025-06-21T02:34:23.263467726Z" level=info msg="CreateContainer within sandbox \"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 02:34:23.269271 containerd[1524]: time="2025-06-21T02:34:23.269041884Z" level=info msg="Container a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:23.277796 containerd[1524]: time="2025-06-21T02:34:23.277750881Z" level=info msg="CreateContainer within sandbox \"57147e0eb7e4987a4a84e74aea13f3ce5e31c43a9f84d8660304ecdf98cd8b5d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba\"" Jun 21 02:34:23.278308 containerd[1524]: time="2025-06-21T02:34:23.278241641Z" level=info msg="StartContainer for \"a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba\"" Jun 21 02:34:23.279477 containerd[1524]: time="2025-06-21T02:34:23.279452841Z" level=info msg="connecting to shim a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba" address="unix:///run/containerd/s/7a7d0e1b3f6c546497e6cc11f0c179631f78bc9dbe64b742178fbc50a4cc682b" protocol=ttrpc version=3 Jun 21 02:34:23.299001 systemd[1]: Started cri-containerd-a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba.scope - libcontainer container a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba. 
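The whisker-backend pull above reports a size of 30829546 bytes in 1.254384077s, and the earlier whisker pull 5974856 bytes in 912.49182ms. A rough throughput calculation from those logged figures follows; it is purely illustrative, and the log does not say whether "size" is the compressed transfer size or the unpacked image size:

```go
package main

import "fmt"

func main() {
	// Figures copied from the PullImage log lines above.
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"whisker:v3.30.1", 5974856, 0.91249182},
		{"whisker-backend:v3.30.1", 30829546, 1.254384077},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20)) // ~6.2 and ~23.4 MiB/s
	}
}
```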
Jun 21 02:34:23.335741 containerd[1524]: time="2025-06-21T02:34:23.335693740Z" level=info msg="StartContainer for \"a848d0fb6692551c080dd19df09ea4701a7dc1cf9d29455c8b23305e87e0d5ba\" returns successfully" Jun 21 02:34:23.548989 systemd-networkd[1433]: cali8063eb62f7d: Gained IPv6LL Jun 21 02:34:23.967645 containerd[1524]: time="2025-06-21T02:34:23.967532669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-zzvjw,Uid:5fd053d8-590f-4c15-99fe-03cf45c58e4e,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:34:23.967962 containerd[1524]: time="2025-06-21T02:34:23.967545989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrbxh,Uid:7bf5741e-a7ae-4f5e-8671-7b89a6c87403,Namespace:kube-system,Attempt:0,}" Jun 21 02:34:23.968503 containerd[1524]: time="2025-06-21T02:34:23.968414669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-dbmg9,Uid:ccf82a03-d334-4a00-9e9a-c652b874cc34,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:23.987077 kubelet[2649]: I0621 02:34:23.987026 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:24.113212 systemd-networkd[1433]: cali050de2c4fe0: Link UP Jun 21 02:34:24.114784 systemd-networkd[1433]: cali050de2c4fe0: Gained carrier Jun 21 02:34:24.118506 kubelet[2649]: I0621 02:34:24.118433 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7dd4d4bf88-fb2gb" podStartSLOduration=1.838983086 podStartE2EDuration="4.118191177s" podCreationTimestamp="2025-06-21 02:34:20 +0000 UTC" firstStartedPulling="2025-06-21 02:34:20.983021516 +0000 UTC m=+38.108155599" lastFinishedPulling="2025-06-21 02:34:23.262229607 +0000 UTC m=+40.387363690" observedRunningTime="2025-06-21 02:34:24.116424457 +0000 UTC m=+41.241558540" watchObservedRunningTime="2025-06-21 02:34:24.118191177 +0000 UTC m=+41.243325260" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.029 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0 coredns-7c65d6cfc9- kube-system 7bf5741e-a7ae-4f5e-8671-7b89a6c87403 820 0 2025-06-21 02:33:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wrbxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali050de2c4fe0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.029 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.062 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" HandleID="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Workload="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.141364 
containerd[1524]: 2025-06-21 02:34:24.062 [INFO][4306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" HandleID="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Workload="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005083e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wrbxh", "timestamp":"2025-06-21 02:34:24.062140756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.062 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.063 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.063 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.076 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.082 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.086 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.088 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.090 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.090 [INFO][4306] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.093 [INFO][4306] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37 Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.097 [INFO][4306] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.105 [INFO][4306] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.105 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" host="localhost" Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.106 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:24.141364 containerd[1524]: 2025-06-21 02:34:24.106 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" HandleID="k8s-pod-network.c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Workload="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.142399 containerd[1524]: 2025-06-21 02:34:24.110 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bf5741e-a7ae-4f5e-8671-7b89a6c87403", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wrbxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali050de2c4fe0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.142399 containerd[1524]: 2025-06-21 02:34:24.110 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.142399 containerd[1524]: 2025-06-21 02:34:24.110 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali050de2c4fe0 ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.142399 containerd[1524]: 2025-06-21 02:34:24.114 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.142399 
containerd[1524]: 2025-06-21 02:34:24.117 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bf5741e-a7ae-4f5e-8671-7b89a6c87403", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37", Pod:"coredns-7c65d6cfc9-wrbxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali050de2c4fe0", MAC:"72:83:b4:6e:4b:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.142399 containerd[1524]: 2025-06-21 02:34:24.133 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrbxh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrbxh-eth0" Jun 21 02:34:24.176387 containerd[1524]: time="2025-06-21T02:34:24.176323037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\" id:\"6bde403383250cb0f30b76da7a93a5bea958a48f10252da97770d09bc7ce9db5\" pid:4325 exited_at:{seconds:1750473264 nanos:176058317}" Jun 21 02:34:24.192004 containerd[1524]: time="2025-06-21T02:34:24.191348952Z" level=info msg="connecting to shim c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37" address="unix:///run/containerd/s/a5ed157b5d4fb9fd13db3b2386b2aaa775765011a38fdf9def1c55a4572a4786" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:24.224450 systemd-networkd[1433]: cali80a49ae9831: Link UP Jun 21 02:34:24.227077 systemd-networkd[1433]: cali80a49ae9831: Gained carrier Jun 21 02:34:24.232124 systemd[1]: Started cri-containerd-c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37.scope - libcontainer container c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37. 
Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.026 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0 goldmane-dc7b455cb- calico-system ccf82a03-d334-4a00-9e9a-c652b874cc34 826 0 2025-06-21 02:34:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:dc7b455cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-dc7b455cb-dbmg9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali80a49ae9831 [] [] }} ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.027 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.063 [INFO][4292] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" HandleID="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Workload="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.063 [INFO][4292] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" HandleID="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Workload="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-dc7b455cb-dbmg9", "timestamp":"2025-06-21 02:34:24.063812475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.064 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.106 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.106 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.177 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.184 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.192 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.194 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.198 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.198 [INFO][4292] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.200 [INFO][4292] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3 Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.203 [INFO][4292] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.214 [INFO][4292] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.214 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" host="localhost" Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.214 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
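Each subsequent sandbox in this log (coredns-7c65d6cfc9-qpplk, coredns-7c65d6cfc9-wrbxh, goldmane-dc7b455cb-dbmg9) runs the same affinity check against 192.168.88.128/26 and receives the next free address, .130 through .132. Enumerating the start of the block with net/netip shows the sequence; this only illustrates the addresses observed here, not Calico's allocation logic:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Base of the node's affine block, per the IPAM traces in this log.
	addr := netip.MustParseAddr("192.168.88.128")
	for i := 0; i < 5; i++ {
		fmt.Println(addr)
		addr = addr.Next()
	}
	// Prints 192.168.88.128 through .132; the four pods brought up in this log
	// received .129–.132, the first addresses after the block base.
}
```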
Jun 21 02:34:24.243438 containerd[1524]: 2025-06-21 02:34:24.214 [INFO][4292] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" HandleID="k8s-pod-network.9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Workload="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.220 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"ccf82a03-d334-4a00-9e9a-c652b874cc34", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-dc7b455cb-dbmg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80a49ae9831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.220 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.220 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80a49ae9831 ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.227 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.228 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"ccf82a03-d334-4a00-9e9a-c652b874cc34", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3", Pod:"goldmane-dc7b455cb-dbmg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali80a49ae9831", MAC:"d2:ad:05:a8:d6:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.244513 containerd[1524]: 2025-06-21 02:34:24.240 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" Namespace="calico-system" Pod="goldmane-dc7b455cb-dbmg9" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--dbmg9-eth0" Jun 21 02:34:24.251302 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:24.268414 containerd[1524]: time="2025-06-21T02:34:24.268375845Z" level=info msg="connecting to shim 9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3" address="unix:///run/containerd/s/d9192fa53dbf709b7a4c966be68e47cda834d79e47e5a50f16eb486ed7c0434e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:24.278099 containerd[1524]: time="2025-06-21T02:34:24.278032962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrbxh,Uid:7bf5741e-a7ae-4f5e-8671-7b89a6c87403,Namespace:kube-system,Attempt:0,} returns sandbox id \"c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37\"" Jun 21 02:34:24.283289 containerd[1524]: time="2025-06-21T02:34:24.283200680Z" level=info msg="CreateContainer within sandbox \"c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 02:34:24.293306 containerd[1524]: time="2025-06-21T02:34:24.293248277Z" level=info msg="Container ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:24.301138 containerd[1524]: time="2025-06-21T02:34:24.301092914Z" level=info msg="CreateContainer within sandbox \"c885c8d77439b33194c3a8b667eae74ff92d6da56c621b92496394e1f2dbbb37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e\"" Jun 21 02:34:24.301890 containerd[1524]: time="2025-06-21T02:34:24.301831674Z" level=info msg="StartContainer for 
\"ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e\"" Jun 21 02:34:24.302724 containerd[1524]: time="2025-06-21T02:34:24.302698313Z" level=info msg="connecting to shim ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e" address="unix:///run/containerd/s/a5ed157b5d4fb9fd13db3b2386b2aaa775765011a38fdf9def1c55a4572a4786" protocol=ttrpc version=3 Jun 21 02:34:24.303389 systemd[1]: Started cri-containerd-9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3.scope - libcontainer container 9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3. Jun 21 02:34:24.307455 containerd[1524]: time="2025-06-21T02:34:24.307392712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\" id:\"c93f439b49eeddf6d9277c4954fcd0046be8acbab9b00e09538b3075fe36a552\" pid:4397 exited_at:{seconds:1750473264 nanos:306997392}" Jun 21 02:34:24.323401 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:24.324587 systemd-networkd[1433]: califf406cafd7d: Link UP Jun 21 02:34:24.325770 systemd-networkd[1433]: califf406cafd7d: Gained carrier Jun 21 02:34:24.328031 systemd[1]: Started cri-containerd-ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e.scope - libcontainer container ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e. Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.027 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0 calico-apiserver-fb8467986- calico-apiserver 5fd053d8-590f-4c15-99fe-03cf45c58e4e 821 0 2025-06-21 02:33:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fb8467986 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fb8467986-zzvjw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf406cafd7d [] [] }} ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.027 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.067 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" HandleID="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Workload="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.067 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" HandleID="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Workload="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fb8467986-zzvjw", "timestamp":"2025-06-21 02:34:24.067184914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.067 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.216 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.217 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.278 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.284 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.293 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.300 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.304 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.304 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.307 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37 Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.311 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.318 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.318 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" host="localhost" Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.318 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:24.342811 containerd[1524]: 2025-06-21 02:34:24.318 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" HandleID="k8s-pod-network.f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Workload="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.320 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0", GenerateName:"calico-apiserver-fb8467986-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fd053d8-590f-4c15-99fe-03cf45c58e4e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb8467986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fb8467986-zzvjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf406cafd7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.320 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.320 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf406cafd7d ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.325 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.326 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0", GenerateName:"calico-apiserver-fb8467986-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fd053d8-590f-4c15-99fe-03cf45c58e4e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb8467986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37", Pod:"calico-apiserver-fb8467986-zzvjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf406cafd7d", MAC:"36:ab:29:02:09:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:24.343331 containerd[1524]: 2025-06-21 02:34:24.339 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-zzvjw" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--zzvjw-eth0" Jun 21 02:34:24.368807 containerd[1524]: time="2025-06-21T02:34:24.368770531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-dbmg9,Uid:ccf82a03-d334-4a00-9e9a-c652b874cc34,Namespace:calico-system,Attempt:0,} returns sandbox id \"9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3\"" Jun 21 02:34:24.369745 containerd[1524]: time="2025-06-21T02:34:24.369710890Z" level=info msg="connecting to shim f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37" address="unix:///run/containerd/s/f07342b4e8d04abd4efddaf44d7f282c16ab3b3bae08fae55a671eb8c419dbce" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:24.373172 containerd[1524]: time="2025-06-21T02:34:24.373145529Z" level=info msg="StartContainer for \"ff089b860297d7a8fa0fd99af23501eb5e7b6c30998efb40d1591f664595d95e\" returns successfully" Jun 21 02:34:24.373879 containerd[1524]: time="2025-06-21T02:34:24.373813809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 02:34:24.419011 systemd[1]: Started cri-containerd-f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37.scope - libcontainer container f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37. 
Jun 21 02:34:24.431124 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:24.450481 containerd[1524]: time="2025-06-21T02:34:24.450438303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-zzvjw,Uid:5fd053d8-590f-4c15-99fe-03cf45c58e4e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37\"" Jun 21 02:34:24.967420 containerd[1524]: time="2025-06-21T02:34:24.967377526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-x5j49,Uid:b9a64c74-9e1c-4021-9017-45e1ca7f0ee0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 02:34:24.967591 containerd[1524]: time="2025-06-21T02:34:24.967395686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c745b8b4-6j8z7,Uid:babe5487-2d59-4548-a174-e937458943cc,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:24.967898 containerd[1524]: time="2025-06-21T02:34:24.967873405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p5zs,Uid:beef3fba-32a8-4ff1-9b42-beb45fd36a99,Namespace:calico-system,Attempt:0,}" Jun 21 02:34:25.041439 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:52634.service - OpenSSH per-connection server daemon (10.0.0.1:52634). Jun 21 02:34:25.104465 systemd-networkd[1433]: cali525ffad5c8f: Link UP Jun 21 02:34:25.105333 systemd-networkd[1433]: cali525ffad5c8f: Gained carrier Jun 21 02:34:25.119579 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 52634 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.009 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0 calico-apiserver-fb8467986- calico-apiserver b9a64c74-9e1c-4021-9017-45e1ca7f0ee0 825 0 2025-06-21 02:33:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fb8467986 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fb8467986-x5j49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali525ffad5c8f [] [] }} ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.009 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.054 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" HandleID="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Workload="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.054 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" HandleID="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Workload="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fb8467986-x5j49", "timestamp":"2025-06-21 02:34:25.054433137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.054 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.054 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.054 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.072 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.077 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.081 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.082 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.084 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.084 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.086 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.089 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.094 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.095 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" host="localhost" Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.095 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:25.120087 containerd[1524]: 2025-06-21 02:34:25.095 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" HandleID="k8s-pod-network.799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Workload="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.098 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0", GenerateName:"calico-apiserver-fb8467986-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9a64c74-9e1c-4021-9017-45e1ca7f0ee0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb8467986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fb8467986-x5j49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali525ffad5c8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.099 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.099 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali525ffad5c8f ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.105 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.106 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0", GenerateName:"calico-apiserver-fb8467986-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9a64c74-9e1c-4021-9017-45e1ca7f0ee0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 33, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb8467986", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb", Pod:"calico-apiserver-fb8467986-x5j49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali525ffad5c8f", MAC:"b6:77:50:ca:10:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.121879 containerd[1524]: 2025-06-21 02:34:25.116 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" Namespace="calico-apiserver" Pod="calico-apiserver-fb8467986-x5j49" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb8467986--x5j49-eth0" Jun 21 02:34:25.123458 kubelet[2649]: I0621 02:34:25.123375 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wrbxh" podStartSLOduration=35.123356915 podStartE2EDuration="35.123356915s" podCreationTimestamp="2025-06-21 02:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 02:34:25.121990875 +0000 UTC m=+42.247124998" watchObservedRunningTime="2025-06-21 02:34:25.123356915 +0000 UTC m=+42.248490998" Jun 21 02:34:25.124388 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:25.136951 systemd-logind[1507]: New session 8 of user core. Jun 21 02:34:25.141049 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 02:34:25.170401 containerd[1524]: time="2025-06-21T02:34:25.169779460Z" level=info msg="connecting to shim 799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb" address="unix:///run/containerd/s/87d86c9e2279ee352c110ffc5a52df174e772f307bb0908c395932e989891a81" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:25.203066 systemd[1]: Started cri-containerd-799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb.scope - libcontainer container 799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb. 
Jun 21 02:34:25.238309 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:25.240120 systemd-networkd[1433]: cali9a2fbac5590: Link UP Jun 21 02:34:25.240284 systemd-networkd[1433]: cali9a2fbac5590: Gained carrier Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.016 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0 calico-kube-controllers-67c745b8b4- calico-system babe5487-2d59-4548-a174-e937458943cc 824 0 2025-06-21 02:34:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67c745b8b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67c745b8b4-6j8z7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9a2fbac5590 [] [] }} ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.016 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.060 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" HandleID="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Workload="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.060 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" HandleID="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Workload="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059ec90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67c745b8b4-6j8z7", "timestamp":"2025-06-21 02:34:25.060775335 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.060 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.095 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.095 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.173 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.185 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.192 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.197 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.200 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.201 [INFO][4626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.205 [INFO][4626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00 Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.214 [INFO][4626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.227 [INFO][4626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.227 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" host="localhost" Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.227 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:25.264219 containerd[1524]: 2025-06-21 02:34:25.227 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" HandleID="k8s-pod-network.4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Workload="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.233 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0", GenerateName:"calico-kube-controllers-67c745b8b4-", Namespace:"calico-system", SelfLink:"", UID:"babe5487-2d59-4548-a174-e937458943cc", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c745b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67c745b8b4-6j8z7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a2fbac5590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.234 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.234 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a2fbac5590 ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.238 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.239 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0", GenerateName:"calico-kube-controllers-67c745b8b4-", Namespace:"calico-system", SelfLink:"", UID:"babe5487-2d59-4548-a174-e937458943cc", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c745b8b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00", Pod:"calico-kube-controllers-67c745b8b4-6j8z7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a2fbac5590", MAC:"ee:c2:c5:40:bf:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.264970 containerd[1524]: 2025-06-21 02:34:25.258 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" Namespace="calico-system" Pod="calico-kube-controllers-67c745b8b4-6j8z7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67c745b8b4--6j8z7-eth0" Jun 21 02:34:25.299192 containerd[1524]: time="2025-06-21T02:34:25.299044418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb8467986-x5j49,Uid:b9a64c74-9e1c-4021-9017-45e1ca7f0ee0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb\"" Jun 21 02:34:25.345394 systemd-networkd[1433]: cali4d921d179e1: Link UP Jun 21 02:34:25.345544 systemd-networkd[1433]: cali4d921d179e1: Gained carrier Jun 21 02:34:25.347789 containerd[1524]: time="2025-06-21T02:34:25.347745523Z" level=info msg="connecting to shim 4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00" address="unix:///run/containerd/s/a51e70fbaac4fc549b739c66eaf28eaaf401c596b612dda8fefae4fed2a47947" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.030 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2p5zs-eth0 csi-node-driver- calico-system beef3fba-32a8-4ff1-9b42-beb45fd36a99 698 0 2025-06-21 02:34:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:896496fb5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2p5zs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4d921d179e1 [] [] }} ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.031 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.067 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" HandleID="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Workload="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.067 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" HandleID="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Workload="localhost-k8s-csi--node--driver--2p5zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005844e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2p5zs", "timestamp":"2025-06-21 02:34:25.067263773 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.067 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.229 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.229 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.279 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.288 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.297 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.301 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.311 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.311 [INFO][4635] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.314 [INFO][4635] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158 Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.324 [INFO][4635] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.339 [INFO][4635] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.339 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" host="localhost" Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.340 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 02:34:25.371404 containerd[1524]: 2025-06-21 02:34:25.340 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" HandleID="k8s-pod-network.ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Workload="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.343 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p5zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beef3fba-32a8-4ff1-9b42-beb45fd36a99", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2p5zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d921d179e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.343 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.343 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d921d179e1 ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.345 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.354 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p5zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beef3fba-32a8-4ff1-9b42-beb45fd36a99", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 2, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158", Pod:"csi-node-driver-2p5zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d921d179e1", MAC:"66:0b:46:bc:a5:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 02:34:25.371992 containerd[1524]: 2025-06-21 02:34:25.368 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" Namespace="calico-system" Pod="csi-node-driver-2p5zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p5zs-eth0" Jun 21 02:34:25.388045 systemd[1]: Started cri-containerd-4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00.scope - libcontainer container 4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00. Jun 21 02:34:25.422669 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:25.425467 containerd[1524]: time="2025-06-21T02:34:25.425413818Z" level=info msg="connecting to shim ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158" address="unix:///run/containerd/s/61725774a5482ad86a493709201b7cd97f3dd648219b25a071d7a1f9a21a7a9d" namespace=k8s.io protocol=ttrpc version=3 Jun 21 02:34:25.455330 containerd[1524]: time="2025-06-21T02:34:25.455287288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c745b8b4-6j8z7,Uid:babe5487-2d59-4548-a174-e937458943cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00\"" Jun 21 02:34:25.458023 systemd[1]: Started cri-containerd-ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158.scope - libcontainer container ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158. 
Jun 21 02:34:25.476274 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 02:34:25.497034 containerd[1524]: time="2025-06-21T02:34:25.496770875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p5zs,Uid:beef3fba-32a8-4ff1-9b42-beb45fd36a99,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158\"" Jun 21 02:34:25.521162 sshd[4656]: Connection closed by 10.0.0.1 port 52634 Jun 21 02:34:25.521955 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:25.525873 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:52634.service: Deactivated successfully. Jun 21 02:34:25.529317 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 02:34:25.530793 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Jun 21 02:34:25.532561 systemd-logind[1507]: Removed session 8. Jun 21 02:34:25.981979 systemd-networkd[1433]: cali050de2c4fe0: Gained IPv6LL Jun 21 02:34:25.982762 systemd-networkd[1433]: cali80a49ae9831: Gained IPv6LL Jun 21 02:34:26.045112 systemd-networkd[1433]: califf406cafd7d: Gained IPv6LL Jun 21 02:34:26.125514 containerd[1524]: time="2025-06-21T02:34:26.125462955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 21 02:34:26.128518 containerd[1524]: time="2025-06-21T02:34:26.128453314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:26.131820 containerd[1524]: time="2025-06-21T02:34:26.131780433Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:26.132688 containerd[1524]: time="2025-06-21T02:34:26.132648313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 1.758308864s" Jun 21 02:34:26.132688 containerd[1524]: time="2025-06-21T02:34:26.132683553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 21 02:34:26.133547 containerd[1524]: time="2025-06-21T02:34:26.133510273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:26.134227 containerd[1524]: time="2025-06-21T02:34:26.134033113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 02:34:26.135794 containerd[1524]: time="2025-06-21T02:34:26.135745792Z" level=info msg="CreateContainer within sandbox \"9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 02:34:26.146262 containerd[1524]: time="2025-06-21T02:34:26.146192429Z" level=info msg="Container 2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:26.154086 containerd[1524]: 
time="2025-06-21T02:34:26.154023027Z" level=info msg="CreateContainer within sandbox \"9285142551077e29fbd9b74e4f5c505303597d7c3573c612af0a6886e5016fb3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\"" Jun 21 02:34:26.154991 containerd[1524]: time="2025-06-21T02:34:26.154911066Z" level=info msg="StartContainer for \"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\"" Jun 21 02:34:26.156718 containerd[1524]: time="2025-06-21T02:34:26.156678466Z" level=info msg="connecting to shim 2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765" address="unix:///run/containerd/s/d9192fa53dbf709b7a4c966be68e47cda834d79e47e5a50f16eb486ed7c0434e" protocol=ttrpc version=3 Jun 21 02:34:26.178021 systemd[1]: Started cri-containerd-2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765.scope - libcontainer container 2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765. Jun 21 02:34:26.223575 containerd[1524]: time="2025-06-21T02:34:26.223503166Z" level=info msg="StartContainer for \"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" returns successfully" Jun 21 02:34:26.557325 systemd-networkd[1433]: cali525ffad5c8f: Gained IPv6LL Jun 21 02:34:26.685077 systemd-networkd[1433]: cali9a2fbac5590: Gained IPv6LL Jun 21 02:34:27.005037 systemd-networkd[1433]: cali4d921d179e1: Gained IPv6LL Jun 21 02:34:27.137215 kubelet[2649]: I0621 02:34:27.137164 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-dc7b455cb-dbmg9" podStartSLOduration=22.376925149 podStartE2EDuration="24.137133493s" podCreationTimestamp="2025-06-21 02:34:03 +0000 UTC" firstStartedPulling="2025-06-21 02:34:24.373510689 +0000 UTC m=+41.498644732" lastFinishedPulling="2025-06-21 02:34:26.133719033 +0000 UTC m=+43.258853076" observedRunningTime="2025-06-21 02:34:27.137047333 +0000 UTC m=+44.262181416" watchObservedRunningTime="2025-06-21 02:34:27.137133493 +0000 UTC m=+44.262267576" Jun 21 02:34:27.236146 containerd[1524]: time="2025-06-21T02:34:27.236103585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" id:\"40e0c745b78067f855f4e08f645e83efe774270ef4f7a95af808251987954dc3\" pid:4892 exit_status:1 exited_at:{seconds:1750473267 nanos:235299785}" Jun 21 02:34:27.621981 containerd[1524]: time="2025-06-21T02:34:27.621927756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:27.622518 containerd[1524]: time="2025-06-21T02:34:27.622486316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 21 02:34:27.623380 containerd[1524]: time="2025-06-21T02:34:27.623349516Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:27.625126 containerd[1524]: time="2025-06-21T02:34:27.625071475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:27.626073 containerd[1524]: time="2025-06-21T02:34:27.625987475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id 
\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 1.491920442s" Jun 21 02:34:27.626073 containerd[1524]: time="2025-06-21T02:34:27.626021235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 21 02:34:27.627047 containerd[1524]: time="2025-06-21T02:34:27.627018875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 02:34:27.631636 containerd[1524]: time="2025-06-21T02:34:27.631598593Z" level=info msg="CreateContainer within sandbox \"f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:34:27.639854 containerd[1524]: time="2025-06-21T02:34:27.639060111Z" level=info msg="Container 04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:27.645666 containerd[1524]: time="2025-06-21T02:34:27.645621230Z" level=info msg="CreateContainer within sandbox \"f1b0c2aa5e1519a9615c5d908b5ed3de4a3c831937d95fe14364d2e94c353f37\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832\"" Jun 21 02:34:27.646430 containerd[1524]: time="2025-06-21T02:34:27.646381149Z" level=info msg="StartContainer for \"04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832\"" Jun 21 02:34:27.648679 containerd[1524]: time="2025-06-21T02:34:27.648607589Z" level=info msg="connecting to shim 04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832" address="unix:///run/containerd/s/f07342b4e8d04abd4efddaf44d7f282c16ab3b3bae08fae55a671eb8c419dbce" protocol=ttrpc version=3 Jun 21 02:34:27.676068 systemd[1]: Started cri-containerd-04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832.scope - libcontainer container 04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832. 
Jun 21 02:34:27.760093 containerd[1524]: time="2025-06-21T02:34:27.760055437Z" level=info msg="StartContainer for \"04aac735179080cb3269a7442ddf2961ed7e1a04b1917a0160afdba773b94832\" returns successfully" Jun 21 02:34:27.885053 containerd[1524]: time="2025-06-21T02:34:27.884947402Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:27.885909 containerd[1524]: time="2025-06-21T02:34:27.885622682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 21 02:34:27.887951 containerd[1524]: time="2025-06-21T02:34:27.887914841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 260.862446ms" Jun 21 02:34:27.888009 containerd[1524]: time="2025-06-21T02:34:27.887947801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 21 02:34:27.890009 containerd[1524]: time="2025-06-21T02:34:27.889985281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 02:34:27.891696 containerd[1524]: time="2025-06-21T02:34:27.891648000Z" level=info msg="CreateContainer within sandbox \"799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 02:34:27.898993 containerd[1524]: time="2025-06-21T02:34:27.898952118Z" level=info msg="Container 93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:27.906660 containerd[1524]: time="2025-06-21T02:34:27.906613676Z" level=info msg="CreateContainer within sandbox \"799b53019a7b90bb798ebf3c91837e2f1f9bcb8fa5c2dbc46941f2cd89ac48cb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f\"" Jun 21 02:34:27.907277 containerd[1524]: time="2025-06-21T02:34:27.907195636Z" level=info msg="StartContainer for \"93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f\"" Jun 21 02:34:27.908744 containerd[1524]: time="2025-06-21T02:34:27.908669115Z" level=info msg="connecting to shim 93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f" address="unix:///run/containerd/s/87d86c9e2279ee352c110ffc5a52df174e772f307bb0908c395932e989891a81" protocol=ttrpc version=3 Jun 21 02:34:27.948882 systemd[1]: Started cri-containerd-93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f.scope - libcontainer container 93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f. 
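The RunPodSandbox, CreateContainer and StartContainer entries above are containerd's CRI service acting on requests from the kubelet over its gRPC socket. The sketch below queries that same endpoint, assuming the conventional socket path /run/containerd/containerd.sock and the k8s.io/cri-api Go bindings; it lists pod sandboxes as an illustration rather than reproducing the kubelet's exact call sequence.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Containerd's CRI gRPC endpoint; root-only on a real node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// List pod sandboxes, i.e. the objects the "RunPodSandbox ... returns
	// sandbox id" log entries refer to.
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatalf("ListPodSandbox: %v", err)
	}
	for _, sb := range resp.GetItems() {
		log.Printf("%s %s/%s", sb.GetId(), sb.GetMetadata().GetNamespace(), sb.GetMetadata().GetName())
	}
}
```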
Jun 21 02:34:27.998390 containerd[1524]: time="2025-06-21T02:34:27.998317530Z" level=info msg="StartContainer for \"93abda6dc8fd7bdc060ecb9ecca5241796b338bdb43d3d6f70f88cb26fa0185f\" returns successfully" Jun 21 02:34:28.139690 kubelet[2649]: I0621 02:34:28.139369 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fb8467986-zzvjw" podStartSLOduration=25.9640946 podStartE2EDuration="29.139352893s" podCreationTimestamp="2025-06-21 02:33:59 +0000 UTC" firstStartedPulling="2025-06-21 02:34:24.451512262 +0000 UTC m=+41.576646305" lastFinishedPulling="2025-06-21 02:34:27.626770515 +0000 UTC m=+44.751904598" observedRunningTime="2025-06-21 02:34:28.139146973 +0000 UTC m=+45.264281056" watchObservedRunningTime="2025-06-21 02:34:28.139352893 +0000 UTC m=+45.264486976" Jun 21 02:34:28.170526 kubelet[2649]: I0621 02:34:28.169592 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fb8467986-x5j49" podStartSLOduration=26.584534781 podStartE2EDuration="29.169574885s" podCreationTimestamp="2025-06-21 02:33:59 +0000 UTC" firstStartedPulling="2025-06-21 02:34:25.303730217 +0000 UTC m=+42.428864300" lastFinishedPulling="2025-06-21 02:34:27.888770361 +0000 UTC m=+45.013904404" observedRunningTime="2025-06-21 02:34:28.167791085 +0000 UTC m=+45.292925168" watchObservedRunningTime="2025-06-21 02:34:28.169574885 +0000 UTC m=+45.294708968" Jun 21 02:34:28.214282 containerd[1524]: time="2025-06-21T02:34:28.214211153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" id:\"1145959b01709bc8f698debb3de18d0a726fa0aca38f9c4a9b28959bfdb8b640\" pid:4999 exit_status:1 exited_at:{seconds:1750473268 nanos:213822873}" Jun 21 02:34:29.137389 kubelet[2649]: I0621 02:34:29.137351 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:29.284361 containerd[1524]: time="2025-06-21T02:34:29.284306794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:29.285273 containerd[1524]: time="2025-06-21T02:34:29.285239674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 21 02:34:29.286065 containerd[1524]: time="2025-06-21T02:34:29.286037954Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:29.288091 containerd[1524]: time="2025-06-21T02:34:29.288057713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:29.288678 containerd[1524]: time="2025-06-21T02:34:29.288651153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 1.398636872s" Jun 21 02:34:29.288711 containerd[1524]: time="2025-06-21T02:34:29.288682273Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 21 02:34:29.289772 containerd[1524]: time="2025-06-21T02:34:29.289734313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 02:34:29.299904 containerd[1524]: time="2025-06-21T02:34:29.299864630Z" level=info msg="CreateContainer within sandbox \"4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 02:34:29.308361 containerd[1524]: time="2025-06-21T02:34:29.308076668Z" level=info msg="Container 88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:29.311744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910831543.mount: Deactivated successfully. Jun 21 02:34:29.316239 containerd[1524]: time="2025-06-21T02:34:29.316193346Z" level=info msg="CreateContainer within sandbox \"4238f972001926afdc2609d52a297416a1b475bc908cc3a9bab1d00b3829cf00\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\"" Jun 21 02:34:29.318482 containerd[1524]: time="2025-06-21T02:34:29.317380706Z" level=info msg="StartContainer for \"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\"" Jun 21 02:34:29.319851 containerd[1524]: time="2025-06-21T02:34:29.319762945Z" level=info msg="connecting to shim 88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66" address="unix:///run/containerd/s/a51e70fbaac4fc549b739c66eaf28eaaf401c596b612dda8fefae4fed2a47947" protocol=ttrpc version=3 Jun 21 02:34:29.358037 systemd[1]: Started cri-containerd-88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66.scope - libcontainer container 88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66. 
Jun 21 02:34:29.418401 containerd[1524]: time="2025-06-21T02:34:29.417900761Z" level=info msg="StartContainer for \"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\" returns successfully" Jun 21 02:34:30.153134 kubelet[2649]: I0621 02:34:30.153060 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67c745b8b4-6j8z7" podStartSLOduration=23.320428556 podStartE2EDuration="27.153042421s" podCreationTimestamp="2025-06-21 02:34:03 +0000 UTC" firstStartedPulling="2025-06-21 02:34:25.456939248 +0000 UTC m=+42.582073331" lastFinishedPulling="2025-06-21 02:34:29.289553113 +0000 UTC m=+46.414687196" observedRunningTime="2025-06-21 02:34:30.152828901 +0000 UTC m=+47.277962984" watchObservedRunningTime="2025-06-21 02:34:30.153042421 +0000 UTC m=+47.278176504" Jun 21 02:34:30.372968 containerd[1524]: time="2025-06-21T02:34:30.372918850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:30.373430 containerd[1524]: time="2025-06-21T02:34:30.372973850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 21 02:34:30.374368 containerd[1524]: time="2025-06-21T02:34:30.374340289Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:30.376503 containerd[1524]: time="2025-06-21T02:34:30.376474049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:30.376946 containerd[1524]: time="2025-06-21T02:34:30.376918969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 1.087154096s" Jun 21 02:34:30.376989 containerd[1524]: time="2025-06-21T02:34:30.376955049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 21 02:34:30.380878 containerd[1524]: time="2025-06-21T02:34:30.380844768Z" level=info msg="CreateContainer within sandbox \"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 02:34:30.393373 containerd[1524]: time="2025-06-21T02:34:30.393315125Z" level=info msg="Container 3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:30.398793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073336926.mount: Deactivated successfully. 
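The pod_startup_latency_tracker entries above each carry a creation timestamp, an image-pull window (firstStartedPulling to lastFinishedPulling), an observedRunningTime and two derived durations. The logged numbers match podStartSLOduration = podStartE2EDuration minus the pull window exactly, and podStartE2EDuration tracks observedRunningTime minus creationTimestamp to within a fraction of a millisecond (the tracker takes its own clock readings). A small check using the calico-kube-controllers entry above; all timestamps are copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses the timestamp format kubelet prints in these log entries.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values from the entry for calico-kube-controllers-67c745b8b4-6j8z7.
	created := mustParse("2025-06-21 02:34:03 +0000 UTC")
	firstPull := mustParse("2025-06-21 02:34:25.456939248 +0000 UTC")
	lastPull := mustParse("2025-06-21 02:34:29.289553113 +0000 UTC")
	running := mustParse("2025-06-21 02:34:30.152828901 +0000 UTC")

	e2e := running.Sub(created)          // ≈ podStartE2EDuration=27.153042421s (sub-millisecond difference)
	slo := e2e - lastPull.Sub(firstPull) // ≈ podStartSLOduration=23.320428556s (image pull time excluded)
	fmt.Println("e2e:", e2e, "slo (excl. image pull):", slo)
}
```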
Jun 21 02:34:30.404206 containerd[1524]: time="2025-06-21T02:34:30.404105642Z" level=info msg="CreateContainer within sandbox \"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b\"" Jun 21 02:34:30.405126 containerd[1524]: time="2025-06-21T02:34:30.404690482Z" level=info msg="StartContainer for \"3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b\"" Jun 21 02:34:30.406490 containerd[1524]: time="2025-06-21T02:34:30.406458362Z" level=info msg="connecting to shim 3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b" address="unix:///run/containerd/s/61725774a5482ad86a493709201b7cd97f3dd648219b25a071d7a1f9a21a7a9d" protocol=ttrpc version=3 Jun 21 02:34:30.425009 systemd[1]: Started cri-containerd-3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b.scope - libcontainer container 3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b. Jun 21 02:34:30.500788 containerd[1524]: time="2025-06-21T02:34:30.500747260Z" level=info msg="StartContainer for \"3a0bb6fbb35c46f207315da7cd3f1d1c340f5e9e97586993dcb151625d39dd2b\" returns successfully" Jun 21 02:34:30.502717 containerd[1524]: time="2025-06-21T02:34:30.502684820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 21 02:34:30.535972 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:52646.service - OpenSSH per-connection server daemon (10.0.0.1:52646). Jun 21 02:34:30.612112 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 52646 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:30.614192 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:30.620504 systemd-logind[1507]: New session 9 of user core. Jun 21 02:34:30.628083 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 02:34:30.811152 sshd[5105]: Connection closed by 10.0.0.1 port 52646 Jun 21 02:34:30.812180 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:30.816508 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:52646.service: Deactivated successfully. Jun 21 02:34:30.819025 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 02:34:30.820387 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Jun 21 02:34:30.822228 systemd-logind[1507]: Removed session 9. 
Jun 21 02:34:31.181577 containerd[1524]: time="2025-06-21T02:34:31.181360704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\" id:\"1afa868728f4f4eabbff9cdf323b969c0247361c3226ed13877d0feffc176706\" pid:5135 exited_at:{seconds:1750473271 nanos:180845704}" Jun 21 02:34:32.537116 containerd[1524]: time="2025-06-21T02:34:32.537055896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:32.537808 containerd[1524]: time="2025-06-21T02:34:32.537733696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 21 02:34:32.539862 containerd[1524]: time="2025-06-21T02:34:32.539233535Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:32.541482 containerd[1524]: time="2025-06-21T02:34:32.541451015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 02:34:32.542356 containerd[1524]: time="2025-06-21T02:34:32.542324975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 2.039608036s" Jun 21 02:34:32.542356 containerd[1524]: time="2025-06-21T02:34:32.542356815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\"" Jun 21 02:34:32.544934 containerd[1524]: time="2025-06-21T02:34:32.544891974Z" level=info msg="CreateContainer within sandbox \"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 21 02:34:32.557543 containerd[1524]: time="2025-06-21T02:34:32.557493372Z" level=info msg="Container 46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0: CDI devices from CRI Config.CDIDevices: []" Jun 21 02:34:32.573892 containerd[1524]: time="2025-06-21T02:34:32.573829608Z" level=info msg="CreateContainer within sandbox \"ddc014918dbf262a8cfbf753af40e2f7ba9cc533a3ab8eab4e94affd6051a158\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0\"" Jun 21 02:34:32.575669 containerd[1524]: time="2025-06-21T02:34:32.575380688Z" level=info msg="StartContainer for \"46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0\"" Jun 21 02:34:32.578384 containerd[1524]: time="2025-06-21T02:34:32.577758607Z" level=info msg="connecting to shim 46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0" address="unix:///run/containerd/s/61725774a5482ad86a493709201b7cd97f3dd648219b25a071d7a1f9a21a7a9d" protocol=ttrpc version=3 Jun 21 02:34:32.600209 systemd[1]: Started cri-containerd-46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0.scope - 
libcontainer container 46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0. Jun 21 02:34:32.650497 containerd[1524]: time="2025-06-21T02:34:32.648828313Z" level=info msg="StartContainer for \"46a46f2d5698978d9db81f8f099e4ba9aa1f2924df32c7d499f1bace517ce8f0\" returns successfully" Jun 21 02:34:33.035917 kubelet[2649]: I0621 02:34:33.035869 2649 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 21 02:34:33.041601 kubelet[2649]: I0621 02:34:33.041574 2649 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 21 02:34:33.170163 kubelet[2649]: I0621 02:34:33.169907 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2p5zs" podStartSLOduration=23.126118348 podStartE2EDuration="30.169887688s" podCreationTimestamp="2025-06-21 02:34:03 +0000 UTC" firstStartedPulling="2025-06-21 02:34:25.499479794 +0000 UTC m=+42.624613877" lastFinishedPulling="2025-06-21 02:34:32.543249134 +0000 UTC m=+49.668383217" observedRunningTime="2025-06-21 02:34:33.169745129 +0000 UTC m=+50.294879212" watchObservedRunningTime="2025-06-21 02:34:33.169887688 +0000 UTC m=+50.295021811" Jun 21 02:34:35.828592 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:34934.service - OpenSSH per-connection server daemon (10.0.0.1:34934). Jun 21 02:34:35.887080 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 34934 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:35.888477 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:35.892428 systemd-logind[1507]: New session 10 of user core. Jun 21 02:34:35.904003 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 02:34:36.112402 sshd[5194]: Connection closed by 10.0.0.1 port 34934 Jun 21 02:34:36.112291 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:36.126079 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:34934.service: Deactivated successfully. Jun 21 02:34:36.128958 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 02:34:36.130420 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Jun 21 02:34:36.133096 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:34936.service - OpenSSH per-connection server daemon (10.0.0.1:34936). Jun 21 02:34:36.133688 systemd-logind[1507]: Removed session 10. Jun 21 02:34:36.187654 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 34936 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:36.190063 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:36.196048 systemd-logind[1507]: New session 11 of user core. Jun 21 02:34:36.206053 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 02:34:36.478235 sshd[5211]: Connection closed by 10.0.0.1 port 34936 Jun 21 02:34:36.479446 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:36.487810 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:34936.service: Deactivated successfully. Jun 21 02:34:36.491197 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 02:34:36.492292 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. 
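The csi_plugin.go entries above show the kubelet validating and registering the csi.tigera.io driver through its socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. Below is a hedged sketch of probing that endpoint's Identity service, assuming the CSI spec Go bindings (github.com/container-storage-interface/spec/lib/go/csi); it illustrates what lives behind the socket rather than the kubelet's actual registration path.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Driver socket path as registered in the kubelet log above.
	const target = "unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

	conn, err := grpc.Dial(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CSI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the driver's Identity service for its name and vendor version.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatalf("GetPluginInfo: %v", err)
	}
	log.Printf("driver %s, vendor version %s", info.GetName(), info.GetVendorVersion())
}
```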
Jun 21 02:34:36.497131 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:34952.service - OpenSSH per-connection server daemon (10.0.0.1:34952). Jun 21 02:34:36.499600 systemd-logind[1507]: Removed session 11. Jun 21 02:34:36.547017 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 34952 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:36.548225 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:36.552689 systemd-logind[1507]: New session 12 of user core. Jun 21 02:34:36.563990 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 02:34:36.698902 sshd[5229]: Connection closed by 10.0.0.1 port 34952 Jun 21 02:34:36.699219 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:36.702768 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:34952.service: Deactivated successfully. Jun 21 02:34:36.704569 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 02:34:36.705272 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Jun 21 02:34:36.706275 systemd-logind[1507]: Removed session 12. Jun 21 02:34:41.714476 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:34962.service - OpenSSH per-connection server daemon (10.0.0.1:34962). Jun 21 02:34:41.779700 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:41.780980 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:41.785066 systemd-logind[1507]: New session 13 of user core. Jun 21 02:34:41.794974 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 02:34:41.908698 containerd[1524]: time="2025-06-21T02:34:41.908658220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" id:\"3ed1aad944a45a8d20ce5fc957676ab2162b0a2770b13e6f6eb540e3fab317a7\" pid:5264 exited_at:{seconds:1750473281 nanos:908364220}" Jun 21 02:34:41.950924 sshd[5251]: Connection closed by 10.0.0.1 port 34962 Jun 21 02:34:41.951578 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:41.959693 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:34962.service: Deactivated successfully. Jun 21 02:34:41.961776 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 02:34:41.962736 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Jun 21 02:34:41.966659 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:34978.service - OpenSSH per-connection server daemon (10.0.0.1:34978). Jun 21 02:34:41.967574 systemd-logind[1507]: Removed session 13. Jun 21 02:34:42.032031 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 34978 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:42.033377 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:42.037823 systemd-logind[1507]: New session 14 of user core. Jun 21 02:34:42.049046 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 02:34:42.270453 sshd[5291]: Connection closed by 10.0.0.1 port 34978 Jun 21 02:34:42.271114 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:42.284410 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:34978.service: Deactivated successfully. Jun 21 02:34:42.286365 systemd[1]: session-14.scope: Deactivated successfully. 
Jun 21 02:34:42.287199 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Jun 21 02:34:42.290005 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:34994.service - OpenSSH per-connection server daemon (10.0.0.1:34994). Jun 21 02:34:42.290859 systemd-logind[1507]: Removed session 14. Jun 21 02:34:42.342887 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 34994 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:42.345404 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:42.349658 systemd-logind[1507]: New session 15 of user core. Jun 21 02:34:42.360013 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 02:34:44.075772 sshd[5304]: Connection closed by 10.0.0.1 port 34994 Jun 21 02:34:44.076799 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:44.089757 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:34994.service: Deactivated successfully. Jun 21 02:34:44.091366 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 02:34:44.091568 systemd[1]: session-15.scope: Consumed 571ms CPU time, 75.4M memory peak. Jun 21 02:34:44.092117 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Jun 21 02:34:44.095577 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:42596.service - OpenSSH per-connection server daemon (10.0.0.1:42596). Jun 21 02:34:44.096742 systemd-logind[1507]: Removed session 15. Jun 21 02:34:44.158515 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 42596 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:44.159706 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:44.163547 systemd-logind[1507]: New session 16 of user core. Jun 21 02:34:44.171992 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 02:34:44.527027 sshd[5329]: Connection closed by 10.0.0.1 port 42596 Jun 21 02:34:44.528490 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:44.537119 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:42596.service: Deactivated successfully. Jun 21 02:34:44.539377 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 02:34:44.540538 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Jun 21 02:34:44.543872 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Jun 21 02:34:44.545033 systemd-logind[1507]: Removed session 16. Jun 21 02:34:44.600571 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:44.601819 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:44.605884 systemd-logind[1507]: New session 17 of user core. Jun 21 02:34:44.616007 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 02:34:44.773718 sshd[5342]: Connection closed by 10.0.0.1 port 42608 Jun 21 02:34:44.773566 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:44.777207 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:42608.service: Deactivated successfully. Jun 21 02:34:44.779161 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 02:34:44.780200 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Jun 21 02:34:44.781593 systemd-logind[1507]: Removed session 17. 
Jun 21 02:34:46.470874 containerd[1524]: time="2025-06-21T02:34:46.470815220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" id:\"e1dba7b98b5106b4b17b70e57ee68e3e35f384df1a30f6b60f434d72a2db9fcd\" pid:5368 exited_at:{seconds:1750473286 nanos:470362940}" Jun 21 02:34:49.790441 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:42624.service - OpenSSH per-connection server daemon (10.0.0.1:42624). Jun 21 02:34:49.846939 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 42624 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:49.848370 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:49.854973 systemd-logind[1507]: New session 18 of user core. Jun 21 02:34:49.861143 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 02:34:50.008536 sshd[5385]: Connection closed by 10.0.0.1 port 42624 Jun 21 02:34:50.008873 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:50.012577 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:42624.service: Deactivated successfully. Jun 21 02:34:50.014427 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 02:34:50.015485 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Jun 21 02:34:50.016559 systemd-logind[1507]: Removed session 18. Jun 21 02:34:51.593672 kubelet[2649]: I0621 02:34:51.593622 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 02:34:52.818912 containerd[1524]: time="2025-06-21T02:34:52.818818868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\" id:\"bd1173831b2897ab2ab4f976ab397b6eddb33140d21dbd0b35943e0d0ce9ae1d\" pid:5416 exited_at:{seconds:1750473292 nanos:818601947}" Jun 21 02:34:54.055410 containerd[1524]: time="2025-06-21T02:34:54.055364981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a10cbc715eeeb9b25ef5cc8d7f894583c6c538bd9edbd21a79172dbccb40f35\" id:\"e414b94acd46eb746c9bdeb52373e1b006ee23493e28f9581b71929964fd46ed\" pid:5438 exited_at:{seconds:1750473294 nanos:54958219}" Jun 21 02:34:55.020098 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742). Jun 21 02:34:55.081398 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:34:55.082713 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:34:55.086715 systemd-logind[1507]: New session 19 of user core. Jun 21 02:34:55.097009 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 02:34:55.212887 sshd[5453]: Connection closed by 10.0.0.1 port 54742 Jun 21 02:34:55.213183 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jun 21 02:34:55.216626 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:54742.service: Deactivated successfully. Jun 21 02:34:55.218322 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 02:34:55.219008 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Jun 21 02:34:55.220056 systemd-logind[1507]: Removed session 19. 
Jun 21 02:34:57.202719 containerd[1524]: time="2025-06-21T02:34:57.202600831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88b57b170c438c9d2c7f3f2c064686bb98c29c896af0824384b13d19bb636c66\" id:\"3af54f2ee5be7b4d0c11c643f9f5290533d70f8966435349f907fe04292a4c4b\" pid:5479 exited_at:{seconds:1750473297 nanos:202089907}" Jun 21 02:35:00.229287 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:54746.service - OpenSSH per-connection server daemon (10.0.0.1:54746). Jun 21 02:35:00.298286 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:35:00.299768 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:35:00.304063 systemd-logind[1507]: New session 20 of user core. Jun 21 02:35:00.317007 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 02:35:00.511439 sshd[5492]: Connection closed by 10.0.0.1 port 54746 Jun 21 02:35:00.511764 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Jun 21 02:35:00.515014 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:54746.service: Deactivated successfully. Jun 21 02:35:00.516742 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 02:35:00.519396 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Jun 21 02:35:00.520557 systemd-logind[1507]: Removed session 20. Jun 21 02:35:05.523328 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Jun 21 02:35:05.579709 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:35:05.582253 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:35:05.587574 systemd-logind[1507]: New session 21 of user core. Jun 21 02:35:05.596006 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 02:35:05.779992 sshd[5515]: Connection closed by 10.0.0.1 port 45170 Jun 21 02:35:05.780630 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Jun 21 02:35:05.784469 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:45170.service: Deactivated successfully. Jun 21 02:35:05.786414 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 02:35:05.790624 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. Jun 21 02:35:05.791898 systemd-logind[1507]: Removed session 21. Jun 21 02:35:10.796799 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:45178.service - OpenSSH per-connection server daemon (10.0.0.1:45178). Jun 21 02:35:10.859342 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:cK5ARV3AJBHTmh81JhwZP4PCHdHkiRblNCYNaKoXxA8 Jun 21 02:35:10.860768 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 02:35:10.864716 systemd-logind[1507]: New session 22 of user core. Jun 21 02:35:10.874999 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 02:35:11.024848 sshd[5531]: Connection closed by 10.0.0.1 port 45178 Jun 21 02:35:11.025368 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jun 21 02:35:11.028732 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:45178.service: Deactivated successfully. Jun 21 02:35:11.030605 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 02:35:11.031268 systemd-logind[1507]: Session 22 logged out. 
Waiting for processes to exit. Jun 21 02:35:11.032774 systemd-logind[1507]: Removed session 22. Jun 21 02:35:11.894267 containerd[1524]: time="2025-06-21T02:35:11.894226441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bb18a33d7edda7102472c477aa9c5d03b7418016f2b286b0f78e7b8b8aec765\" id:\"41dd12442051077a4c912c62deeb0974d3ab9e6f8f1ae50f2983a2c6d3fd1165\" pid:5555 exited_at:{seconds:1750473311 nanos:893963400}"