Jul 15 23:31:59.784555 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 23:31:59.784575 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:31:59.784585 kernel: KASLR enabled
Jul 15 23:31:59.784591 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:31:59.784596 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Jul 15 23:31:59.784602 kernel: random: crng init done
Jul 15 23:31:59.784609 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 15 23:31:59.784614 kernel: secureboot: Secure boot enabled
Jul 15 23:31:59.784620 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:31:59.784627 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 15 23:31:59.784634 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 23:31:59.784639 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784645 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784651 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784658 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784665 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784672 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784678 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784684 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784690 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:31:59.784696 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 23:31:59.784702 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:31:59.784709 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:31:59.784715 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 15 23:31:59.784721 kernel: Zone ranges:
Jul 15 23:31:59.784728 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:31:59.784734 kernel: DMA32 empty
Jul 15 23:31:59.784740 kernel: Normal empty
Jul 15 23:31:59.784746 kernel: Device empty
Jul 15 23:31:59.784752 kernel: Movable zone start for each node
Jul 15 23:31:59.784757 kernel: Early memory node ranges
Jul 15 23:31:59.784764 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 15 23:31:59.784770 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 15 23:31:59.784776 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 15 23:31:59.784789 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 15 23:31:59.784795 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 15 23:31:59.784801 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 15 23:31:59.784809 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 15 23:31:59.784815 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 15 23:31:59.784821 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 23:31:59.784830 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:31:59.784837 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 23:31:59.784843 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 15 23:31:59.784850 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:31:59.784858 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:31:59.784864 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:31:59.784870 kernel: psci: Trusted OS migration not required
Jul 15 23:31:59.784877 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:31:59.784883 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 23:31:59.784890 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:31:59.784896 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:31:59.784903 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 23:31:59.784909 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:31:59.784917 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:31:59.784931 kernel: CPU features: detected: Spectre-v4
Jul 15 23:31:59.784938 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:31:59.784945 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:31:59.784952 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:31:59.784958 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 23:31:59.784965 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:31:59.784971 kernel: alternatives: applying boot alternatives
Jul 15 23:31:59.784979 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:31:59.784986 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:31:59.784993 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:31:59.785001 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:31:59.785008 kernel: Fallback order for Node 0: 0
Jul 15 23:31:59.785014 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 23:31:59.785020 kernel: Policy zone: DMA
Jul 15 23:31:59.785027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:31:59.785033 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 23:31:59.785040 kernel: software IO TLB: area num 4.
Jul 15 23:31:59.785046 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 23:31:59.785053 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 15 23:31:59.785059 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:31:59.785066 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:31:59.785073 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:31:59.785081 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:31:59.785087 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:31:59.785094 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:31:59.785100 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:31:59.785107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:31:59.785114 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:31:59.785120 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:31:59.785127 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:31:59.785133 kernel: GICv3: 256 SPIs implemented
Jul 15 23:31:59.785140 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:31:59.785146 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:31:59.785154 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 23:31:59.785160 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:31:59.785167 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 23:31:59.785173 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 23:31:59.785180 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:31:59.785187 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:31:59.785193 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 23:31:59.785200 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 23:31:59.785206 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:31:59.785213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:31:59.785219 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 23:31:59.785226 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 23:31:59.785234 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 23:31:59.785241 kernel: arm-pv: using stolen time PV
Jul 15 23:31:59.785247 kernel: Console: colour dummy device 80x25
Jul 15 23:31:59.785254 kernel: ACPI: Core revision 20240827
Jul 15 23:31:59.785261 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 23:31:59.785268 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:31:59.785275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:31:59.785281 kernel: landlock: Up and running.
Jul 15 23:31:59.785288 kernel: SELinux: Initializing.
Jul 15 23:31:59.785296 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:31:59.785303 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:31:59.785309 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:31:59.785316 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:31:59.785323 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:31:59.785329 kernel: Remapping and enabling EFI services.
Jul 15 23:31:59.785336 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:31:59.785342 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:31:59.785349 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 23:31:59.785357 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 23:31:59.785367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:31:59.785374 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 23:31:59.785383 kernel: Detected PIPT I-cache on CPU2
Jul 15 23:31:59.785390 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 23:31:59.785396 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 23:31:59.785403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:31:59.785410 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 23:31:59.785417 kernel: Detected PIPT I-cache on CPU3
Jul 15 23:31:59.785425 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 23:31:59.785432 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 23:31:59.785439 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:31:59.785445 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 23:31:59.785452 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:31:59.785459 kernel: SMP: Total of 4 processors activated.
Jul 15 23:31:59.785466 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:31:59.785472 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:31:59.785479 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 23:31:59.785487 kernel: CPU features: detected: Common not Private translations
Jul 15 23:31:59.785494 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:31:59.785501 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 23:31:59.785508 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 23:31:59.785515 kernel: CPU features: detected: LSE atomic instructions
Jul 15 23:31:59.785521 kernel: CPU features: detected: Privileged Access Never
Jul 15 23:31:59.785528 kernel: CPU features: detected: RAS Extension Support
Jul 15 23:31:59.785535 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 23:31:59.785542 kernel: alternatives: applying system-wide alternatives
Jul 15 23:31:59.785550 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 23:31:59.785557 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved)
Jul 15 23:31:59.785564 kernel: devtmpfs: initialized
Jul 15 23:31:59.785571 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:31:59.785578 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:31:59.785585 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 23:31:59.785592 kernel: 0 pages in range for non-PLT usage
Jul 15 23:31:59.785599 kernel: 508432 pages in range for PLT usage
Jul 15 23:31:59.785605 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:31:59.785613 kernel: SMBIOS 3.0.0 present.
Jul 15 23:31:59.785620 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 23:31:59.785627 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:31:59.785634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:31:59.785641 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:31:59.785648 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:31:59.785655 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:31:59.785662 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:31:59.785668 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 15 23:31:59.785677 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:31:59.785684 kernel: cpuidle: using governor menu
Jul 15 23:31:59.785690 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:31:59.785697 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:31:59.785704 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:31:59.785711 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:31:59.785718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:31:59.785725 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:31:59.785731 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:31:59.785740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:31:59.785746 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:31:59.785753 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:31:59.785760 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:31:59.785767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:31:59.785774 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:31:59.785785 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:31:59.785792 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:31:59.785799 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:31:59.785807 kernel: ACPI: Interpreter enabled
Jul 15 23:31:59.785814 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:31:59.785821 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:31:59.785828 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:31:59.785834 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:31:59.785841 kernel: ACPI: CPU2 has been hot-added
Jul 15 23:31:59.785848 kernel: ACPI: CPU3 has been hot-added
Jul 15 23:31:59.785855 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:31:59.785861 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:31:59.785870 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:31:59.786015 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:31:59.786092 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:31:59.786175 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:31:59.786232 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 23:31:59.786287 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 23:31:59.786296 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 23:31:59.786306 kernel: PCI host bridge to bus 0000:00
Jul 15 23:31:59.786369 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 23:31:59.786421 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:31:59.786471 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 23:31:59.786520 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:31:59.786592 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:31:59.786658 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:31:59.786719 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 23:31:59.786776 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 23:31:59.786846 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 23:31:59.786904 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 23:31:59.787021 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 23:31:59.787081 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 23:31:59.787134 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 23:31:59.787189 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:31:59.787239 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 23:31:59.787248 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:31:59.787255 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:31:59.787262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:31:59.787268 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:31:59.787275 kernel: iommu: Default domain type: Translated
Jul 15 23:31:59.787282 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:31:59.787291 kernel: efivars: Registered efivars operations
Jul 15 23:31:59.787297 kernel: vgaarb: loaded
Jul 15 23:31:59.787304 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:31:59.787311 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:31:59.787318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:31:59.787325 kernel: pnp: PnP ACPI init
Jul 15 23:31:59.787388 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 23:31:59.787398 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:31:59.787407 kernel: NET: Registered PF_INET protocol family
Jul 15 23:31:59.787414 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:31:59.787421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:31:59.787428 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:31:59.787435 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:31:59.787442 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:31:59.787449 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:31:59.787455 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:31:59.787463 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:31:59.787470 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:31:59.787477 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:31:59.787484 kernel: kvm [1]: HYP mode not available
Jul 15 23:31:59.787491 kernel: Initialise system trusted keyrings
Jul 15 23:31:59.787497 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:31:59.787505 kernel: Key type asymmetric registered
Jul 15 23:31:59.787511 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:31:59.787518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:31:59.787525 kernel: io scheduler mq-deadline registered
Jul 15 23:31:59.787533 kernel: io scheduler kyber registered
Jul 15 23:31:59.787540 kernel: io scheduler bfq registered
Jul 15 23:31:59.787547 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:31:59.787554 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:31:59.787561 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:31:59.787618 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 23:31:59.787627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:31:59.787635 kernel: thunder_xcv, ver 1.0
Jul 15 23:31:59.787642 kernel: thunder_bgx, ver 1.0
Jul 15 23:31:59.787651 kernel: nicpf, ver 1.0
Jul 15 23:31:59.787658 kernel: nicvf, ver 1.0
Jul 15 23:31:59.787728 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:31:59.787794 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:31:59 UTC (1752622319)
Jul 15 23:31:59.787804 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:31:59.787811 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:31:59.787818 kernel: watchdog: NMI not fully supported
Jul 15 23:31:59.787825 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:31:59.787834 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:31:59.787841 kernel: Segment Routing with IPv6
Jul 15 23:31:59.787848 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:31:59.787854 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:31:59.787861 kernel: Key type dns_resolver registered
Jul 15 23:31:59.787868 kernel: registered taskstats version 1
Jul 15 23:31:59.787875 kernel: Loading compiled-in X.509 certificates
Jul 15 23:31:59.787882 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:31:59.787889 kernel: Demotion targets for Node 0: null
Jul 15 23:31:59.787896 kernel: Key type .fscrypt registered
Jul 15 23:31:59.787903 kernel: Key type fscrypt-provisioning registered
Jul 15 23:31:59.787910 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:31:59.787917 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:31:59.787932 kernel: ima: No architecture policies found
Jul 15 23:31:59.787949 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:31:59.787956 kernel: clk: Disabling unused clocks
Jul 15 23:31:59.787963 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:31:59.787970 kernel: Warning: unable to open an initial console.
Jul 15 23:31:59.787979 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:31:59.787986 kernel: Run /init as init process
Jul 15 23:31:59.787992 kernel: with arguments:
Jul 15 23:31:59.787999 kernel: /init
Jul 15 23:31:59.788005 kernel: with environment:
Jul 15 23:31:59.788012 kernel: HOME=/
Jul 15 23:31:59.788019 kernel: TERM=linux
Jul 15 23:31:59.788025 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:31:59.788033 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:31:59.788044 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:31:59.788052 systemd[1]: Detected virtualization kvm.
Jul 15 23:31:59.788059 systemd[1]: Detected architecture arm64.
Jul 15 23:31:59.788066 systemd[1]: Running in initrd.
Jul 15 23:31:59.788073 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:31:59.788081 systemd[1]: Hostname set to .
Jul 15 23:31:59.788088 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:31:59.788096 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:31:59.788104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:31:59.788111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:31:59.788119 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:31:59.788127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:31:59.788134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:31:59.788142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:31:59.788152 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:31:59.788159 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:31:59.788167 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:31:59.788174 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:31:59.788182 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:31:59.788189 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:31:59.788196 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:31:59.788204 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:31:59.788212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:31:59.788220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:31:59.788227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:31:59.788235 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:31:59.788242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:31:59.788250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:31:59.788257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:31:59.788265 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:31:59.788272 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:31:59.788281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:31:59.788289 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:31:59.788296 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:31:59.788304 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:31:59.788311 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:31:59.788318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:31:59.788326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:31:59.788333 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:31:59.788342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:31:59.788350 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:31:59.788357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:31:59.788380 systemd-journald[245]: Collecting audit messages is disabled.
Jul 15 23:31:59.788400 systemd-journald[245]: Journal started
Jul 15 23:31:59.788418 systemd-journald[245]: Runtime Journal (/run/log/journal/d308b6b76fd34f5c9440aadaba28fb5d) is 6M, max 48.5M, 42.4M free.
Jul 15 23:31:59.782439 systemd-modules-load[246]: Inserted module 'overlay'
Jul 15 23:31:59.791967 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:31:59.792142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:31:59.794884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:31:59.796284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:31:59.798157 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:31:59.802528 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:31:59.806490 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 15 23:31:59.807164 kernel: Bridge firewalling registered
Jul 15 23:31:59.809069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:31:59.810021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:31:59.812827 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:31:59.813658 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:31:59.818130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:31:59.819479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:31:59.830048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:31:59.831669 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:31:59.838001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:31:59.842097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:31:59.846614 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:31:59.881209 systemd-resolved[291]: Positive Trust Anchors:
Jul 15 23:31:59.881225 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:31:59.881256 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:31:59.885917 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 15 23:31:59.886775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:31:59.887948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:31:59.917953 kernel: SCSI subsystem initialized
Jul 15 23:31:59.922945 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:31:59.929968 kernel: iscsi: registered transport (tcp)
Jul 15 23:31:59.941959 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:31:59.941977 kernel: QLogic iSCSI HBA Driver
Jul 15 23:31:59.957007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:31:59.973746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:31:59.974991 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:32:00.018769 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:32:00.020642 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:32:00.087957 kernel: raid6: neonx8 gen() 15650 MB/s Jul 15 23:32:00.104958 kernel: raid6: neonx4 gen() 15792 MB/s Jul 15 23:32:00.121951 kernel: raid6: neonx2 gen() 13215 MB/s Jul 15 23:32:00.138952 kernel: raid6: neonx1 gen() 10409 MB/s Jul 15 23:32:00.155952 kernel: raid6: int64x8 gen() 6890 MB/s Jul 15 23:32:00.172951 kernel: raid6: int64x4 gen() 7344 MB/s Jul 15 23:32:00.189944 kernel: raid6: int64x2 gen() 6096 MB/s Jul 15 23:32:00.206947 kernel: raid6: int64x1 gen() 5049 MB/s Jul 15 23:32:00.206960 kernel: raid6: using algorithm neonx4 gen() 15792 MB/s Jul 15 23:32:00.223954 kernel: raid6: .... xor() 12327 MB/s, rmw enabled Jul 15 23:32:00.223985 kernel: raid6: using neon recovery algorithm Jul 15 23:32:00.228949 kernel: xor: measuring software checksum speed Jul 15 23:32:00.228968 kernel: 8regs : 21636 MB/sec Jul 15 23:32:00.228981 kernel: 32regs : 19552 MB/sec Jul 15 23:32:00.230245 kernel: arm64_neon : 28118 MB/sec Jul 15 23:32:00.230260 kernel: xor: using function: arm64_neon (28118 MB/sec) Jul 15 23:32:00.290961 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:32:00.296558 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:32:00.298674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:32:00.334297 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jul 15 23:32:00.338329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:32:00.339817 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:32:00.366892 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Jul 15 23:32:00.386047 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:32:00.387824 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:32:00.434103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 15 23:32:00.437237 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:32:00.479955 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 15 23:32:00.486868 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 23:32:00.489965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:32:00.489998 kernel: GPT:9289727 != 19775487 Jul 15 23:32:00.490008 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:32:00.490516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:32:00.492255 kernel: GPT:9289727 != 19775487 Jul 15 23:32:00.492271 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 23:32:00.492285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:32:00.490626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:32:00.494191 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:32:00.495917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:32:00.519119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:32:00.531898 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 23:32:00.534020 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:32:00.541870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 23:32:00.549272 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 23:32:00.550141 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 23:32:00.558032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 15 23:32:00.558901 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:32:00.560377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:32:00.561827 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:32:00.564170 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:32:00.565588 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:32:00.582205 disk-uuid[587]: Primary Header is updated. Jul 15 23:32:00.582205 disk-uuid[587]: Secondary Entries is updated. Jul 15 23:32:00.582205 disk-uuid[587]: Secondary Header is updated. Jul 15 23:32:00.586152 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:32:00.588120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:32:01.594493 disk-uuid[590]: The operation has completed successfully. Jul 15 23:32:01.595642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:32:01.621828 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:32:01.622749 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:32:01.646745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:32:01.674708 sh[609]: Success Jul 15 23:32:01.687945 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:32:01.688109 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:32:01.688123 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:32:01.701227 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 23:32:01.720273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:32:01.722795 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 15 23:32:01.736102 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:32:01.741289 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:32:01.741315 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (621) Jul 15 23:32:01.742510 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 15 23:32:01.743278 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:32:01.743292 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:32:01.747198 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:32:01.748215 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:32:01.749172 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:32:01.749881 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:32:01.752377 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:32:01.771617 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652) Jul 15 23:32:01.771667 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:32:01.771692 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:32:01.772955 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:32:01.777946 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:32:01.780061 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:32:01.782687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 15 23:32:01.850159 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:32:01.854174 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:32:01.897905 systemd-networkd[796]: lo: Link UP Jul 15 23:32:01.897916 systemd-networkd[796]: lo: Gained carrier Jul 15 23:32:01.898577 systemd-networkd[796]: Enumeration completed Jul 15 23:32:01.898982 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:32:01.898985 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:32:01.899332 systemd-networkd[796]: eth0: Link UP Jul 15 23:32:01.899334 systemd-networkd[796]: eth0: Gained carrier Jul 15 23:32:01.899341 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:32:01.901255 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:32:01.903956 systemd[1]: Reached target network.target - Network. 
Jul 15 23:32:01.917985 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:32:01.948158 ignition[693]: Ignition 2.21.0 Jul 15 23:32:01.948173 ignition[693]: Stage: fetch-offline Jul 15 23:32:01.948217 ignition[693]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:01.948225 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:01.948408 ignition[693]: parsed url from cmdline: "" Jul 15 23:32:01.948411 ignition[693]: no config URL provided Jul 15 23:32:01.948416 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:32:01.948422 ignition[693]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:32:01.948442 ignition[693]: op(1): [started] loading QEMU firmware config module Jul 15 23:32:01.948446 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 23:32:01.955587 ignition[693]: op(1): [finished] loading QEMU firmware config module Jul 15 23:32:01.994798 ignition[693]: parsing config with SHA512: 86a0d7e6ca2791690b1288b387015eab650734361f09e092e0841e2ca524a9fba3cdd09062f11c9ff29e152c3538f671fcd3c338da8bbc0492b72ced7fbef4e5 Jul 15 23:32:02.001017 unknown[693]: fetched base config from "system" Jul 15 23:32:02.001028 unknown[693]: fetched user config from "qemu" Jul 15 23:32:02.001391 ignition[693]: fetch-offline: fetch-offline passed Jul 15 23:32:02.001446 ignition[693]: Ignition finished successfully Jul 15 23:32:02.003748 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:32:02.005048 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 23:32:02.005790 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 15 23:32:02.028983 ignition[813]: Ignition 2.21.0 Jul 15 23:32:02.028996 ignition[813]: Stage: kargs Jul 15 23:32:02.029225 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:02.029238 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:02.031036 ignition[813]: kargs: kargs passed Jul 15 23:32:02.031094 ignition[813]: Ignition finished successfully Jul 15 23:32:02.034854 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:32:02.036729 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:32:02.067911 ignition[821]: Ignition 2.21.0 Jul 15 23:32:02.067948 ignition[821]: Stage: disks Jul 15 23:32:02.068074 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:02.068082 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:02.070043 ignition[821]: disks: disks passed Jul 15 23:32:02.070099 ignition[821]: Ignition finished successfully Jul 15 23:32:02.071768 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:32:02.072779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:32:02.073984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:32:02.075463 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:32:02.077002 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:32:02.078656 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:32:02.081404 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:32:02.106806 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 23:32:02.111406 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:32:02.114515 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 15 23:32:02.186999 kernel: EXT4-fs (vda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 15 23:32:02.187787 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:32:02.189118 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:32:02.191528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:32:02.193106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:32:02.193876 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:32:02.193942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:32:02.193968 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:32:02.204056 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:32:02.206310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:32:02.208937 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 15 23:32:02.210955 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:32:02.210978 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:32:02.210989 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:32:02.213009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:32:02.257661 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:32:02.261269 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:32:02.265162 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:32:02.269108 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:32:02.343259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:32:02.345471 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:32:02.346855 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:32:02.367949 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:32:02.384003 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:32:02.387779 ignition[953]: INFO : Ignition 2.21.0 Jul 15 23:32:02.387779 ignition[953]: INFO : Stage: mount Jul 15 23:32:02.388955 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:02.388955 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:02.388955 ignition[953]: INFO : mount: mount passed Jul 15 23:32:02.388955 ignition[953]: INFO : Ignition finished successfully Jul 15 23:32:02.390417 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:32:02.392634 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:32:02.870634 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:32:02.872105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 15 23:32:02.902949 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 15 23:32:02.903960 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:32:02.903987 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:32:02.905069 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:32:02.907458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:32:02.933933 ignition[982]: INFO : Ignition 2.21.0 Jul 15 23:32:02.933933 ignition[982]: INFO : Stage: files Jul 15 23:32:02.935178 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:02.935178 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:02.937006 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:32:02.937739 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:32:02.937739 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:32:02.940245 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:32:02.941207 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:32:02.941207 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:32:02.940812 unknown[982]: wrote ssh authorized keys file for user: core Jul 15 23:32:02.943862 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 23:32:02.945324 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 15 23:32:03.018163 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Jul 15 23:32:03.036062 systemd-networkd[796]: eth0: Gained IPv6LL Jul 15 23:32:03.233070 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 23:32:03.233070 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:32:03.236030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 
15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 23:32:03.245811 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 15 23:32:03.629434 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 23:32:04.044894 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 23:32:04.044894 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 23:32:04.048714 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:32:04.051908 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:32:04.051908 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 23:32:04.051908 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 15 23:32:04.056672 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:32:04.056672 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:32:04.056672 ignition[982]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jul 15 23:32:04.056672 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 23:32:04.071282 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:32:04.074945 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:32:04.077774 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 23:32:04.077774 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:32:04.077774 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:32:04.077774 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:32:04.077774 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:32:04.077774 ignition[982]: INFO : files: files passed Jul 15 23:32:04.077774 ignition[982]: INFO : Ignition finished successfully Jul 15 23:32:04.078722 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:32:04.081356 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:32:04.083272 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:32:04.097804 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:32:04.097921 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 15 23:32:04.100560 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 23:32:04.103291 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:32:04.103291 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:32:04.105782 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:32:04.106774 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:32:04.108103 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:32:04.110477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:32:04.137125 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:32:04.137255 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:32:04.138880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:32:04.140206 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:32:04.141494 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:32:04.142273 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:32:04.166840 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:32:04.169017 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:32:04.188386 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:32:04.189404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:32:04.190799 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 15 23:32:04.192069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:32:04.192194 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:32:04.194091 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:32:04.195493 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:32:04.196648 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:32:04.197912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:32:04.199398 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:32:04.200773 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:32:04.202255 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:32:04.203574 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:32:04.204958 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:32:04.206402 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:32:04.207684 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:32:04.208898 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:32:04.209035 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:32:04.210770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:32:04.212178 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:32:04.213689 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:32:04.217001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:32:04.217909 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:32:04.218047 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 15 23:32:04.220100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:32:04.220217 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:32:04.221743 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:32:04.222853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:32:04.222988 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:32:04.224553 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:32:04.225606 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:32:04.226828 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:32:04.226911 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:32:04.228629 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:32:04.228701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:32:04.229808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:32:04.229935 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:32:04.231138 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:32:04.231235 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:32:04.233025 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:32:04.234572 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:32:04.235226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:32:04.235334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:32:04.236806 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:32:04.236896 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 15 23:32:04.241386 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:32:04.248046 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:32:04.256913 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:32:04.260792 ignition[1037]: INFO : Ignition 2.21.0 Jul 15 23:32:04.260792 ignition[1037]: INFO : Stage: umount Jul 15 23:32:04.263013 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:32:04.263013 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:32:04.263013 ignition[1037]: INFO : umount: umount passed Jul 15 23:32:04.263013 ignition[1037]: INFO : Ignition finished successfully Jul 15 23:32:04.264891 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:32:04.265037 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:32:04.266068 systemd[1]: Stopped target network.target - Network. Jul 15 23:32:04.267251 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:32:04.267304 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:32:04.268671 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:32:04.268711 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:32:04.270045 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:32:04.270090 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:32:04.271279 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:32:04.271314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:32:04.272759 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:32:04.274116 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:32:04.281945 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 15 23:32:04.282065 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:32:04.285345 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:32:04.285550 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:32:04.285635 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:32:04.288380 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:32:04.288846 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:32:04.290343 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:32:04.290377 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:32:04.292639 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:32:04.293545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:32:04.293592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:32:04.295539 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:32:04.295577 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:32:04.298310 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:32:04.298349 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:32:04.300316 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:32:04.300361 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:32:04.303504 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:32:04.307046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:32:04.307115 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jul 15 23:32:04.307393 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:32:04.307468 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:32:04.309757 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:32:04.309858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:32:04.317595 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:32:04.317716 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:32:04.323563 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:32:04.323691 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:32:04.325283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:32:04.325315 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:32:04.326624 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:32:04.326653 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:32:04.327948 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:32:04.327988 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:32:04.329946 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:32:04.329987 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:32:04.331880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:32:04.331923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:32:04.334789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:32:04.336116 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:32:04.336163 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 15 23:32:04.338639 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:32:04.338680 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:32:04.341074 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 23:32:04.341111 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:32:04.346048 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:32:04.346098 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:32:04.347595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:32:04.347634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:32:04.350536 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 23:32:04.350580 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 23:32:04.350608 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:32:04.350636 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:32:04.350894 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:32:04.351018 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:32:04.352582 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:32:04.354551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:32:04.368245 systemd[1]: Switching root. Jul 15 23:32:04.394827 systemd-journald[245]: Journal stopped Jul 15 23:32:05.180209 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jul 15 23:32:05.180256 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:32:05.180274 kernel: SELinux: policy capability open_perms=1 Jul 15 23:32:05.180283 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:32:05.180294 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:32:05.180307 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:32:05.180317 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:32:05.180326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:32:05.180337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:32:05.180347 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:32:05.180357 systemd[1]: Successfully loaded SELinux policy in 51.064ms. Jul 15 23:32:05.180376 kernel: audit: type=1403 audit(1752622324.565:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:32:05.180391 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.586ms. Jul 15 23:32:05.180401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:32:05.180412 systemd[1]: Detected virtualization kvm. Jul 15 23:32:05.180423 systemd[1]: Detected architecture arm64. Jul 15 23:32:05.180432 systemd[1]: Detected first boot. Jul 15 23:32:05.180442 systemd[1]: Initializing machine ID from VM UUID. Jul 15 23:32:05.180452 zram_generator::config[1085]: No configuration found. Jul 15 23:32:05.180463 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:32:05.180473 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:32:05.180485 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 15 23:32:05.180495 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:32:05.180512 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:32:05.180523 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:32:05.180532 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:32:05.180543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:32:05.180552 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:32:05.180562 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:32:05.180574 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:32:05.180584 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:32:05.180594 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:32:05.180605 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:32:05.180616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:32:05.180627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:32:05.180637 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:32:05.180647 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:32:05.180657 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:32:05.180669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:32:05.180679 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 15 23:32:05.180690 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:32:05.180702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:32:05.180713 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:32:05.180723 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:32:05.180733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:32:05.180743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:32:05.180754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:32:05.180772 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:32:05.180784 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:32:05.180794 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:32:05.180804 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:32:05.180814 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:32:05.180824 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:32:05.180834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:32:05.180844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:32:05.180856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:32:05.180866 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:32:05.180876 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:32:05.180886 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:32:05.180896 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 15 23:32:05.180906 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:32:05.180916 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:32:05.180940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:32:05.180954 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:32:05.180966 systemd[1]: Reached target machines.target - Containers. Jul 15 23:32:05.180976 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:32:05.180986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:32:05.180996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:32:05.181006 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:32:05.181016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:32:05.181026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:32:05.181036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:32:05.181048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:32:05.181057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:32:05.181067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:32:05.181077 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:32:05.181087 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:32:05.181097 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 15 23:32:05.181106 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:32:05.181117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:32:05.181128 kernel: loop: module loaded Jul 15 23:32:05.181138 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:32:05.181148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:32:05.181158 kernel: fuse: init (API version 7.41) Jul 15 23:32:05.181167 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:32:05.181177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:32:05.181188 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:32:05.181197 kernel: ACPI: bus type drm_connector registered Jul 15 23:32:05.181206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:32:05.181216 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:32:05.181228 systemd[1]: Stopped verity-setup.service. Jul 15 23:32:05.181237 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:32:05.181247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:32:05.181257 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:32:05.181268 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:32:05.181278 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:32:05.181288 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:32:05.181298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 15 23:32:05.181310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:32:05.181342 systemd-journald[1150]: Collecting audit messages is disabled. Jul 15 23:32:05.181364 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:32:05.181374 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:32:05.181385 systemd-journald[1150]: Journal started Jul 15 23:32:05.181406 systemd-journald[1150]: Runtime Journal (/run/log/journal/d308b6b76fd34f5c9440aadaba28fb5d) is 6M, max 48.5M, 42.4M free. Jul 15 23:32:04.959821 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:32:04.982906 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 23:32:04.983275 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:32:05.185174 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:32:05.185902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:32:05.186094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:32:05.187196 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:32:05.187341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:32:05.188480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:32:05.188637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:32:05.189799 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:32:05.189987 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:32:05.191052 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:32:05.191195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:32:05.192343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 15 23:32:05.193400 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:32:05.194577 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:32:05.195733 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:32:05.208133 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:32:05.210238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:32:05.211870 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 23:32:05.212779 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:32:05.212808 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:32:05.214508 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:32:05.221159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:32:05.222574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:32:05.223712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:32:05.225544 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:32:05.226524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:32:05.229100 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:32:05.229991 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 23:32:05.231079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:32:05.234195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:32:05.236154 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:32:05.240021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:32:05.241806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:32:05.243028 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:32:05.244207 systemd-journald[1150]: Time spent on flushing to /var/log/journal/d308b6b76fd34f5c9440aadaba28fb5d is 13.525ms for 888 entries. Jul 15 23:32:05.244207 systemd-journald[1150]: System Journal (/var/log/journal/d308b6b76fd34f5c9440aadaba28fb5d) is 8M, max 195.6M, 187.6M free. Jul 15 23:32:05.263996 systemd-journald[1150]: Received client request to flush runtime journal. Jul 15 23:32:05.264076 kernel: loop0: detected capacity change from 0 to 107312 Jul 15 23:32:05.264090 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:32:05.247607 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:32:05.255158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:32:05.264288 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:32:05.267053 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:32:05.268515 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:32:05.279638 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jul 15 23:32:05.279655 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. 
Jul 15 23:32:05.284497 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:32:05.285591 kernel: loop1: detected capacity change from 0 to 138376 Jul 15 23:32:05.292423 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:32:05.295424 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:32:05.311941 kernel: loop2: detected capacity change from 0 to 207008 Jul 15 23:32:05.328166 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:32:05.330318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:32:05.335614 kernel: loop3: detected capacity change from 0 to 107312 Jul 15 23:32:05.340958 kernel: loop4: detected capacity change from 0 to 138376 Jul 15 23:32:05.348956 kernel: loop5: detected capacity change from 0 to 207008 Jul 15 23:32:05.353934 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 23:32:05.354339 (sd-merge)[1226]: Merged extensions into '/usr'. Jul 15 23:32:05.354413 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jul 15 23:32:05.354423 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jul 15 23:32:05.360309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:32:05.364048 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:32:05.364062 systemd[1]: Reloading... Jul 15 23:32:05.414968 zram_generator::config[1252]: No configuration found. Jul 15 23:32:05.495013 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 15 23:32:05.504728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:32:05.568981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:32:05.569346 systemd[1]: Reloading finished in 204 ms. Jul 15 23:32:05.605776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:32:05.608965 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:32:05.618669 systemd[1]: Starting ensure-sysext.service... Jul 15 23:32:05.620516 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:32:05.636539 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:32:05.636567 systemd[1]: Reloading... Jul 15 23:32:05.641357 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:32:05.641389 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:32:05.641594 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:32:05.641787 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:32:05.642424 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:32:05.642634 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jul 15 23:32:05.642681 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jul 15 23:32:05.645523 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 15 23:32:05.645536 systemd-tmpfiles[1289]: Skipping /boot Jul 15 23:32:05.654789 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:32:05.654804 systemd-tmpfiles[1289]: Skipping /boot Jul 15 23:32:05.677974 zram_generator::config[1316]: No configuration found. Jul 15 23:32:05.753153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:32:05.815868 systemd[1]: Reloading finished in 179 ms. Jul 15 23:32:05.838967 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:32:05.845034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:32:05.857069 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:32:05.859332 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:32:05.863066 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:32:05.865728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:32:05.870466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:32:05.875702 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:32:05.887324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:32:05.889498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:32:05.891726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:32:05.894811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 15 23:32:05.896482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:32:05.896599 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:32:05.898786 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:32:05.900734 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:32:05.902421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:32:05.902629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:32:05.904020 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:32:05.904193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:32:05.905610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:32:05.905800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:32:05.914226 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:32:05.917799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:32:05.919512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:32:05.921380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:32:05.930766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:32:05.932166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 15 23:32:05.932290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:32:05.933373 systemd-udevd[1362]: Using default interface naming scheme 'v255'. Jul 15 23:32:05.935286 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:32:05.936156 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:32:05.937574 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:32:05.939191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:32:05.939357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:32:05.940698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:32:05.940850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:32:05.944111 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:32:05.945779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:32:05.945952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:32:05.951471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:32:05.952939 augenrules[1398]: No rules Jul 15 23:32:05.952979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:32:05.959858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:32:05.963119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 15 23:32:05.964972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:32:05.965847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:32:05.965972 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:32:05.966092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:32:05.967010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:32:05.968663 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:32:05.968873 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:32:05.970217 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:32:05.971497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:32:05.971648 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:32:05.973370 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:32:05.973520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:32:05.974681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:32:05.974840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:32:05.984359 systemd[1]: Finished ensure-sysext.service. Jul 15 23:32:06.008206 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:32:06.008734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 15 23:32:06.019717 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:32:06.020524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:32:06.020579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:32:06.022500 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 23:32:06.030325 systemd-resolved[1356]: Positive Trust Anchors: Jul 15 23:32:06.030340 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:32:06.030371 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:32:06.038728 systemd-resolved[1356]: Defaulting to hostname 'linux'. Jul 15 23:32:06.044985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:32:06.045915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:32:06.080568 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 23:32:06.105902 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 23:32:06.107016 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:32:06.108042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 15 23:32:06.109148 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:32:06.110624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:32:06.111809 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:32:06.111847 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:32:06.112580 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:32:06.113842 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:32:06.114867 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:32:06.115846 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:32:06.117590 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:32:06.118507 systemd-networkd[1442]: lo: Link UP Jul 15 23:32:06.118511 systemd-networkd[1442]: lo: Gained carrier Jul 15 23:32:06.119285 systemd-networkd[1442]: Enumeration completed Jul 15 23:32:06.119753 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:32:06.122264 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:32:06.122273 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:32:06.122683 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:32:06.123053 systemd-networkd[1442]: eth0: Link UP Jul 15 23:32:06.123170 systemd-networkd[1442]: eth0: Gained carrier Jul 15 23:32:06.123184 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 15 23:32:06.124089 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:32:06.125298 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:32:06.128804 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:32:06.130498 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:32:06.132328 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:32:06.133366 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:32:06.136988 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:32:06.138145 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Jul 15 23:32:06.139543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 23:32:06.139618 systemd-timesyncd[1443]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 23:32:06.139660 systemd-timesyncd[1443]: Initial clock synchronization to Tue 2025-07-15 23:32:06.388285 UTC. Jul 15 23:32:06.140657 systemd[1]: Reached target network.target - Network. Jul 15 23:32:06.141339 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:32:06.142008 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:32:06.142697 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:32:06.142726 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:32:06.143733 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:32:06.146162 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:32:06.152196 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 15 23:32:06.154349 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:32:06.156405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:32:06.157374 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:32:06.169158 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:32:06.171102 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:32:06.175096 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:32:06.177185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:32:06.179080 extend-filesystems[1460]: Found /dev/vda6 Jul 15 23:32:06.180182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:32:06.183716 jq[1459]: false Jul 15 23:32:06.185618 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:32:06.188131 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:32:06.191770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:32:06.193660 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:32:06.194087 extend-filesystems[1460]: Found /dev/vda9 Jul 15 23:32:06.194123 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:32:06.195135 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 15 23:32:06.199177 extend-filesystems[1460]: Checking size of /dev/vda9 Jul 15 23:32:06.200110 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:32:06.204779 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:32:06.206039 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:32:06.206216 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:32:06.206450 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:32:06.206609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:32:06.220547 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:32:06.232903 jq[1487]: true Jul 15 23:32:06.236098 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:32:06.236316 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:32:06.239449 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:32:06.243093 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:32:06.259224 tar[1490]: linux-arm64/LICENSE Jul 15 23:32:06.261132 tar[1490]: linux-arm64/helm Jul 15 23:32:06.261996 extend-filesystems[1460]: Resized partition /dev/vda9 Jul 15 23:32:06.262650 dbus-daemon[1457]: [system] SELinux support is enabled Jul 15 23:32:06.262828 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:32:06.266151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 15 23:32:06.266187 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:32:06.267635 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:32:06.267660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:32:06.269255 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:32:06.273672 jq[1503]: true Jul 15 23:32:06.289995 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 23:32:06.318108 update_engine[1484]: I20250715 23:32:06.317917 1484 main.cc:92] Flatcar Update Engine starting Jul 15 23:32:06.320284 update_engine[1484]: I20250715 23:32:06.319852 1484 update_check_scheduler.cc:74] Next update check in 6m25s Jul 15 23:32:06.319998 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:32:06.322616 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:32:06.324947 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 23:32:06.336802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:32:06.338213 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 23:32:06.338213 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:32:06.338213 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 23:32:06.338187 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:32:06.345719 extend-filesystems[1460]: Resized filesystem in /dev/vda9 Jul 15 23:32:06.338428 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 15 23:32:06.355938 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:32:06.358988 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:32:06.361564 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 23:32:06.421720 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 23:32:06.421979 systemd-logind[1476]: New seat seat0. Jul 15 23:32:06.423010 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:32:06.506537 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:32:06.518381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:32:06.542096 containerd[1498]: time="2025-07-15T23:32:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:32:06.543279 containerd[1498]: time="2025-07-15T23:32:06.543198440Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:32:06.553970 containerd[1498]: time="2025-07-15T23:32:06.553898880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.92µs" Jul 15 23:32:06.553970 containerd[1498]: time="2025-07-15T23:32:06.553960480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:32:06.553970 containerd[1498]: time="2025-07-15T23:32:06.553979720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:32:06.554185 containerd[1498]: time="2025-07-15T23:32:06.554143640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 
23:32:06.554185 containerd[1498]: time="2025-07-15T23:32:06.554165640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:32:06.554285 containerd[1498]: time="2025-07-15T23:32:06.554188640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554285 containerd[1498]: time="2025-07-15T23:32:06.554236800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554285 containerd[1498]: time="2025-07-15T23:32:06.554247560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554491 containerd[1498]: time="2025-07-15T23:32:06.554468960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554491 containerd[1498]: time="2025-07-15T23:32:06.554490040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554542 containerd[1498]: time="2025-07-15T23:32:06.554502720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554542 containerd[1498]: time="2025-07-15T23:32:06.554511080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554629 containerd[1498]: time="2025-07-15T23:32:06.554579280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554785 containerd[1498]: time="2025-07-15T23:32:06.554753400Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554819 containerd[1498]: time="2025-07-15T23:32:06.554799560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:32:06.554819 containerd[1498]: time="2025-07-15T23:32:06.554814920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:32:06.554858 containerd[1498]: time="2025-07-15T23:32:06.554846480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:32:06.556311 containerd[1498]: time="2025-07-15T23:32:06.555988920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:32:06.556311 containerd[1498]: time="2025-07-15T23:32:06.556103360Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:32:06.560112 containerd[1498]: time="2025-07-15T23:32:06.560077160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:32:06.560239 containerd[1498]: time="2025-07-15T23:32:06.560223520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:32:06.560400 containerd[1498]: time="2025-07-15T23:32:06.560381840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:32:06.560484 containerd[1498]: time="2025-07-15T23:32:06.560469440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:32:06.560597 containerd[1498]: time="2025-07-15T23:32:06.560579880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:32:06.560654 
containerd[1498]: time="2025-07-15T23:32:06.560641920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:32:06.560703 containerd[1498]: time="2025-07-15T23:32:06.560691560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:32:06.560772 containerd[1498]: time="2025-07-15T23:32:06.560742280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:32:06.560872 containerd[1498]: time="2025-07-15T23:32:06.560855640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:32:06.560957 containerd[1498]: time="2025-07-15T23:32:06.560922920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:32:06.561015 containerd[1498]: time="2025-07-15T23:32:06.560995200Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561226920Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561356240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561380040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561395720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561407520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: 
time="2025-07-15T23:32:06.561418800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561428920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561446880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561457200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561468720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561479680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561490640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561671600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:32:06.561721 containerd[1498]: time="2025-07-15T23:32:06.561685600Z" level=info msg="Start snapshots syncer" Jul 15 23:32:06.562059 containerd[1498]: time="2025-07-15T23:32:06.562036960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:32:06.562496 containerd[1498]: time="2025-07-15T23:32:06.562431600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:32:06.562646 containerd[1498]: time="2025-07-15T23:32:06.562628600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:32:06.562791 containerd[1498]: time="2025-07-15T23:32:06.562772160Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:32:06.563011 containerd[1498]: time="2025-07-15T23:32:06.562987560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:32:06.563102 containerd[1498]: time="2025-07-15T23:32:06.563087880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:32:06.563156 containerd[1498]: time="2025-07-15T23:32:06.563142320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:32:06.563220 containerd[1498]: time="2025-07-15T23:32:06.563207320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:32:06.563271 containerd[1498]: time="2025-07-15T23:32:06.563258880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:32:06.563319 containerd[1498]: time="2025-07-15T23:32:06.563308080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:32:06.563367 containerd[1498]: time="2025-07-15T23:32:06.563355080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:32:06.563432 containerd[1498]: time="2025-07-15T23:32:06.563418600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:32:06.563486 containerd[1498]: time="2025-07-15T23:32:06.563473280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:32:06.563546 containerd[1498]: time="2025-07-15T23:32:06.563532920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:32:06.563637 containerd[1498]: time="2025-07-15T23:32:06.563622600Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:32:06.563694 containerd[1498]: time="2025-07-15T23:32:06.563679960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:32:06.563740 containerd[1498]: time="2025-07-15T23:32:06.563727240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:32:06.563800 containerd[1498]: time="2025-07-15T23:32:06.563787080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:32:06.563844 containerd[1498]: time="2025-07-15T23:32:06.563833600Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:32:06.563891 containerd[1498]: time="2025-07-15T23:32:06.563878800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:32:06.563980 containerd[1498]: time="2025-07-15T23:32:06.563964320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:32:06.564096 containerd[1498]: time="2025-07-15T23:32:06.564085720Z" level=info msg="runtime interface created" Jul 15 23:32:06.564138 containerd[1498]: time="2025-07-15T23:32:06.564126520Z" level=info msg="created NRI interface" Jul 15 23:32:06.564199 containerd[1498]: time="2025-07-15T23:32:06.564185760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:32:06.564255 containerd[1498]: time="2025-07-15T23:32:06.564242760Z" level=info msg="Connect containerd service" Jul 15 23:32:06.564335 containerd[1498]: time="2025-07-15T23:32:06.564320920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:32:06.565615 
containerd[1498]: time="2025-07-15T23:32:06.565173240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:32:06.716537 containerd[1498]: time="2025-07-15T23:32:06.716456240Z" level=info msg="Start subscribing containerd event" Jul 15 23:32:06.716537 containerd[1498]: time="2025-07-15T23:32:06.716542440Z" level=info msg="Start recovering state" Jul 15 23:32:06.716667 containerd[1498]: time="2025-07-15T23:32:06.716646920Z" level=info msg="Start event monitor" Jul 15 23:32:06.716707 containerd[1498]: time="2025-07-15T23:32:06.716668480Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:32:06.716707 containerd[1498]: time="2025-07-15T23:32:06.716676920Z" level=info msg="Start streaming server" Jul 15 23:32:06.716707 containerd[1498]: time="2025-07-15T23:32:06.716687120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:32:06.716707 containerd[1498]: time="2025-07-15T23:32:06.716696000Z" level=info msg="runtime interface starting up..." Jul 15 23:32:06.716707 containerd[1498]: time="2025-07-15T23:32:06.716701760Z" level=info msg="starting plugins..." Jul 15 23:32:06.716814 containerd[1498]: time="2025-07-15T23:32:06.716715680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:32:06.717194 containerd[1498]: time="2025-07-15T23:32:06.717159880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:32:06.717266 containerd[1498]: time="2025-07-15T23:32:06.717242680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:32:06.717416 containerd[1498]: time="2025-07-15T23:32:06.717402440Z" level=info msg="containerd successfully booted in 0.175675s" Jul 15 23:32:06.717514 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 23:32:06.776701 tar[1490]: linux-arm64/README.md Jul 15 23:32:06.793364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:32:07.227514 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:32:07.247052 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:32:07.249598 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:32:07.271614 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:32:07.271825 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:32:07.274501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:32:07.297216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:32:07.299882 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:32:07.301854 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 23:32:07.302998 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:32:07.645271 systemd-networkd[1442]: eth0: Gained IPv6LL Jul 15 23:32:07.647905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:32:07.649428 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:32:07.651710 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 23:32:07.682392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:32:07.684525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:32:07.698316 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 23:32:07.698543 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 23:32:07.699893 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 15 23:32:07.707988 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:32:08.269612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:32:08.271075 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:32:08.273517 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:32:08.274031 systemd[1]: Startup finished in 2.066s (kernel) + 4.941s (initrd) + 3.765s (userspace) = 10.773s. Jul 15 23:32:08.698031 kubelet[1617]: E0715 23:32:08.697903 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:32:08.700299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:32:08.700453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:32:08.700813 systemd[1]: kubelet.service: Consumed 812ms CPU time, 257.7M memory peak. Jul 15 23:32:12.906377 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:32:12.907462 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Jul 15 23:32:12.968666 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:32:12.970538 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:32:12.978340 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:32:12.979290 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 15 23:32:12.985642 systemd-logind[1476]: New session 1 of user core. Jul 15 23:32:13.014037 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:32:13.016505 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:32:13.035840 (systemd)[1634]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:32:13.038093 systemd-logind[1476]: New session c1 of user core. Jul 15 23:32:13.143479 systemd[1634]: Queued start job for default target default.target. Jul 15 23:32:13.158892 systemd[1634]: Created slice app.slice - User Application Slice. Jul 15 23:32:13.158921 systemd[1634]: Reached target paths.target - Paths. Jul 15 23:32:13.158982 systemd[1634]: Reached target timers.target - Timers. Jul 15 23:32:13.160253 systemd[1634]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:32:13.169158 systemd[1634]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:32:13.169220 systemd[1634]: Reached target sockets.target - Sockets. Jul 15 23:32:13.169260 systemd[1634]: Reached target basic.target - Basic System. Jul 15 23:32:13.169288 systemd[1634]: Reached target default.target - Main User Target. Jul 15 23:32:13.169314 systemd[1634]: Startup finished in 125ms. Jul 15 23:32:13.169544 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:32:13.172036 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:32:13.234391 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:55084.service - OpenSSH per-connection server daemon (10.0.0.1:55084). Jul 15 23:32:13.289106 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 55084 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:32:13.290363 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:32:13.294569 systemd-logind[1476]: New session 2 of user core. 
Jul 15 23:32:13.303130 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 23:32:13.357397 sshd[1647]: Connection closed by 10.0.0.1 port 55084
Jul 15 23:32:13.356479 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
Jul 15 23:32:13.376250 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:55084.service: Deactivated successfully.
Jul 15 23:32:13.378990 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 23:32:13.381108 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit.
Jul 15 23:32:13.385529 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:55116.service - OpenSSH per-connection server daemon (10.0.0.1:55116).
Jul 15 23:32:13.386035 systemd-logind[1476]: Removed session 2.
Jul 15 23:32:13.434207 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 55116 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:32:13.435396 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:32:13.441870 systemd-logind[1476]: New session 3 of user core.
Jul 15 23:32:13.457192 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 23:32:13.506232 sshd[1655]: Connection closed by 10.0.0.1 port 55116
Jul 15 23:32:13.506662 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jul 15 23:32:13.523824 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:55116.service: Deactivated successfully.
Jul 15 23:32:13.525580 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:32:13.527479 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:32:13.529853 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:55124.service - OpenSSH per-connection server daemon (10.0.0.1:55124).
Jul 15 23:32:13.530359 systemd-logind[1476]: Removed session 3.
Jul 15 23:32:13.579675 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 55124 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:32:13.580994 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:32:13.585281 systemd-logind[1476]: New session 4 of user core.
Jul 15 23:32:13.592120 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:32:13.644800 sshd[1663]: Connection closed by 10.0.0.1 port 55124
Jul 15 23:32:13.644726 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Jul 15 23:32:13.655060 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:55124.service: Deactivated successfully.
Jul 15 23:32:13.656556 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:32:13.658343 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:32:13.660609 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:55140.service - OpenSSH per-connection server daemon (10.0.0.1:55140).
Jul 15 23:32:13.661515 systemd-logind[1476]: Removed session 4.
Jul 15 23:32:13.715546 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 55140 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:32:13.716723 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:32:13.721177 systemd-logind[1476]: New session 5 of user core.
Jul 15 23:32:13.732102 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:32:13.794791 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:32:13.795078 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:32:13.810670 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jul 15 23:32:13.812376 sshd[1671]: Connection closed by 10.0.0.1 port 55140
Jul 15 23:32:13.812731 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Jul 15 23:32:13.826082 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:55140.service: Deactivated successfully.
Jul 15 23:32:13.829287 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 23:32:13.830039 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit.
Jul 15 23:32:13.832375 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146).
Jul 15 23:32:13.833477 systemd-logind[1476]: Removed session 5.
Jul 15 23:32:13.884040 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:32:13.885332 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:32:13.889898 systemd-logind[1476]: New session 6 of user core.
Jul 15 23:32:13.896173 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 23:32:13.948276 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:32:13.948527 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:32:14.022347 sudo[1682]: pam_unix(sudo:session): session closed for user root
Jul 15 23:32:14.027833 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:32:14.028135 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:32:14.036005 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:32:14.074596 augenrules[1704]: No rules
Jul 15 23:32:14.075961 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:32:14.076204 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:32:14.077133 sudo[1681]: pam_unix(sudo:session): session closed for user root
Jul 15 23:32:14.078455 sshd[1680]: Connection closed by 10.0.0.1 port 55146
Jul 15 23:32:14.078806 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Jul 15 23:32:14.091264 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:55146.service: Deactivated successfully.
Jul 15 23:32:14.093503 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:32:14.094303 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:32:14.098021 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:55150.service - OpenSSH per-connection server daemon (10.0.0.1:55150).
Jul 15 23:32:14.098836 systemd-logind[1476]: Removed session 6.
Jul 15 23:32:14.151779 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 55150 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:32:14.153041 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:32:14.158173 systemd-logind[1476]: New session 7 of user core.
Jul 15 23:32:14.169125 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:32:14.220023 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:32:14.220671 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:32:14.605591 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:32:14.620282 (dockerd)[1737]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:32:14.900027 dockerd[1737]: time="2025-07-15T23:32:14.899898852Z" level=info msg="Starting up"
Jul 15 23:32:14.901699 dockerd[1737]: time="2025-07-15T23:32:14.901665968Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:32:14.937077 systemd[1]: var-lib-docker-metacopy\x2dcheck158090679-merged.mount: Deactivated successfully.
Jul 15 23:32:14.950659 dockerd[1737]: time="2025-07-15T23:32:14.950613773Z" level=info msg="Loading containers: start."
Jul 15 23:32:14.957985 kernel: Initializing XFRM netlink socket
Jul 15 23:32:15.151758 systemd-networkd[1442]: docker0: Link UP
Jul 15 23:32:15.156016 dockerd[1737]: time="2025-07-15T23:32:15.155927852Z" level=info msg="Loading containers: done."
Jul 15 23:32:15.170239 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3706055996-merged.mount: Deactivated successfully.
Jul 15 23:32:15.172825 dockerd[1737]: time="2025-07-15T23:32:15.172457949Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:32:15.172825 dockerd[1737]: time="2025-07-15T23:32:15.172552105Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:32:15.172825 dockerd[1737]: time="2025-07-15T23:32:15.172654871Z" level=info msg="Initializing buildkit"
Jul 15 23:32:15.197322 dockerd[1737]: time="2025-07-15T23:32:15.197285518Z" level=info msg="Completed buildkit initialization"
Jul 15 23:32:15.202148 dockerd[1737]: time="2025-07-15T23:32:15.202109956Z" level=info msg="Daemon has completed initialization"
Jul 15 23:32:15.202419 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:32:15.203056 dockerd[1737]: time="2025-07-15T23:32:15.202882563Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:32:15.825092 containerd[1498]: time="2025-07-15T23:32:15.825042964Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Jul 15 23:32:16.477713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537524305.mount: Deactivated successfully.
Jul 15 23:32:17.409335 containerd[1498]: time="2025-07-15T23:32:17.409288462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:17.409800 containerd[1498]: time="2025-07-15T23:32:17.409765734Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783"
Jul 15 23:32:17.410652 containerd[1498]: time="2025-07-15T23:32:17.410599015Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:17.413644 containerd[1498]: time="2025-07-15T23:32:17.413591696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:17.414223 containerd[1498]: time="2025-07-15T23:32:17.414187772Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.589099313s"
Jul 15 23:32:17.414223 containerd[1498]: time="2025-07-15T23:32:17.414222534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Jul 15 23:32:17.415074 containerd[1498]: time="2025-07-15T23:32:17.415001252Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Jul 15 23:32:18.633146 containerd[1498]: time="2025-07-15T23:32:18.633090618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:18.634082 containerd[1498]: time="2025-07-15T23:32:18.634046934Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698"
Jul 15 23:32:18.635236 containerd[1498]: time="2025-07-15T23:32:18.635202224Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:18.637648 containerd[1498]: time="2025-07-15T23:32:18.637597970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:18.638902 containerd[1498]: time="2025-07-15T23:32:18.638856795Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.223825708s"
Jul 15 23:32:18.638902 containerd[1498]: time="2025-07-15T23:32:18.638897283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Jul 15 23:32:18.639551 containerd[1498]: time="2025-07-15T23:32:18.639444734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Jul 15 23:32:18.951308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:32:18.952903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:32:19.079703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:19.083239 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:32:19.116605 kubelet[2011]: E0715 23:32:19.116529 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:32:19.119654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:32:19.119791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:32:19.120404 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.2M memory peak.
Jul 15 23:32:19.815994 containerd[1498]: time="2025-07-15T23:32:19.815946039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:19.817017 containerd[1498]: time="2025-07-15T23:32:19.816940468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140"
Jul 15 23:32:19.817784 containerd[1498]: time="2025-07-15T23:32:19.817732598Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:19.821163 containerd[1498]: time="2025-07-15T23:32:19.821095165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:19.822275 containerd[1498]: time="2025-07-15T23:32:19.822146388Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.182672292s"
Jul 15 23:32:19.822275 containerd[1498]: time="2025-07-15T23:32:19.822177743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Jul 15 23:32:19.822670 containerd[1498]: time="2025-07-15T23:32:19.822632895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Jul 15 23:32:20.824121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895284001.mount: Deactivated successfully.
Jul 15 23:32:21.029960 containerd[1498]: time="2025-07-15T23:32:21.029774028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:21.030407 containerd[1498]: time="2025-07-15T23:32:21.030379316Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407"
Jul 15 23:32:21.031102 containerd[1498]: time="2025-07-15T23:32:21.031057191Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:21.033940 containerd[1498]: time="2025-07-15T23:32:21.033880423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:21.034611 containerd[1498]: time="2025-07-15T23:32:21.034478236Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.211810777s"
Jul 15 23:32:21.034611 containerd[1498]: time="2025-07-15T23:32:21.034507013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Jul 15 23:32:21.035037 containerd[1498]: time="2025-07-15T23:32:21.035012907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:32:21.667560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515655848.mount: Deactivated successfully.
Jul 15 23:32:22.389550 containerd[1498]: time="2025-07-15T23:32:22.388554541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:22.391644 containerd[1498]: time="2025-07-15T23:32:22.391599101Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 15 23:32:22.392575 containerd[1498]: time="2025-07-15T23:32:22.392532561Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:22.424791 containerd[1498]: time="2025-07-15T23:32:22.424736467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:22.426164 containerd[1498]: time="2025-07-15T23:32:22.426122136Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.391079051s"
Jul 15 23:32:22.426319 containerd[1498]: time="2025-07-15T23:32:22.426248946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 23:32:22.426899 containerd[1498]: time="2025-07-15T23:32:22.426858613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:32:22.909539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999796535.mount: Deactivated successfully.
Jul 15 23:32:22.913964 containerd[1498]: time="2025-07-15T23:32:22.913921296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:32:22.914384 containerd[1498]: time="2025-07-15T23:32:22.914350851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 15 23:32:22.915080 containerd[1498]: time="2025-07-15T23:32:22.915049731Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:32:22.917047 containerd[1498]: time="2025-07-15T23:32:22.917016106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:32:22.917793 containerd[1498]: time="2025-07-15T23:32:22.917771141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 490.794434ms"
Jul 15 23:32:22.917850 containerd[1498]: time="2025-07-15T23:32:22.917799820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 23:32:22.918383 containerd[1498]: time="2025-07-15T23:32:22.918290912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 15 23:32:23.461988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61586211.mount: Deactivated successfully.
Jul 15 23:32:24.991424 containerd[1498]: time="2025-07-15T23:32:24.991378291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:24.992100 containerd[1498]: time="2025-07-15T23:32:24.991799840Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 15 23:32:24.993793 containerd[1498]: time="2025-07-15T23:32:24.993173398Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:24.996050 containerd[1498]: time="2025-07-15T23:32:24.996009198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:32:24.997145 containerd[1498]: time="2025-07-15T23:32:24.997119634Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.078800091s"
Jul 15 23:32:24.997223 containerd[1498]: time="2025-07-15T23:32:24.997149931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 15 23:32:29.272284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 23:32:29.273700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:32:29.434518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:29.438751 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:32:29.475332 kubelet[2172]: E0715 23:32:29.475276 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:32:29.477658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:32:29.477785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:32:29.478258 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak.
Jul 15 23:32:30.257150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:30.257292 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak.
Jul 15 23:32:30.260181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:32:30.281164 systemd[1]: Reload requested from client PID 2187 ('systemctl') (unit session-7.scope)...
Jul 15 23:32:30.281180 systemd[1]: Reloading...
Jul 15 23:32:30.353954 zram_generator::config[2230]: No configuration found.
Jul 15 23:32:30.481493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:32:30.569212 systemd[1]: Reloading finished in 287 ms.
Jul 15 23:32:30.616808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:30.620250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:32:30.620702 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 23:32:30.621969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:30.622017 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak.
Jul 15 23:32:30.623728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:32:30.745221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:32:30.748959 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:32:30.790371 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:32:30.790371 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:32:30.790371 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:32:30.791565 kubelet[2278]: I0715 23:32:30.791504 2278 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:32:31.733618 kubelet[2278]: I0715 23:32:31.733573 2278 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 15 23:32:31.733618 kubelet[2278]: I0715 23:32:31.733605 2278 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:32:31.733885 kubelet[2278]: I0715 23:32:31.733863 2278 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 15 23:32:31.766972 kubelet[2278]: E0715 23:32:31.766910 2278 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:32:31.768858 kubelet[2278]: I0715 23:32:31.768825 2278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:32:31.773561 kubelet[2278]: I0715 23:32:31.773535 2278 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:32:31.778950 kubelet[2278]: I0715 23:32:31.778063 2278 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:32:31.778950 kubelet[2278]: I0715 23:32:31.778807 2278 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:32:31.779362 kubelet[2278]: I0715 23:32:31.778849 2278 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:32:31.779465 kubelet[2278]: I0715 23:32:31.779441 2278 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:32:31.779465 kubelet[2278]: I0715 23:32:31.779459 2278 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 23:32:31.779743 kubelet[2278]: I0715 23:32:31.779726 2278 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:32:31.785291 kubelet[2278]: I0715 23:32:31.785245 2278 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 23:32:31.785291 kubelet[2278]: I0715 23:32:31.785283 2278 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:32:31.785373 kubelet[2278]: I0715 23:32:31.785318 2278 kubelet.go:352] "Adding apiserver pod source"
Jul 15 23:32:31.785373 kubelet[2278]: I0715 23:32:31.785328 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:32:31.789278 kubelet[2278]: I0715 23:32:31.788794 2278 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:32:31.789635 kubelet[2278]: I0715 23:32:31.789593 2278 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:32:31.789755 kubelet[2278]: W0715 23:32:31.789737 2278 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:32:31.790374 kubelet[2278]: W0715 23:32:31.790326 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:31.790576 kubelet[2278]: E0715 23:32:31.790541 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:31.790873 kubelet[2278]: I0715 23:32:31.790739 2278 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:32:31.790974 kubelet[2278]: I0715 23:32:31.790964 2278 server.go:1287] "Started kubelet" Jul 15 23:32:31.791247 kubelet[2278]: I0715 23:32:31.791187 2278 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:32:31.792164 kubelet[2278]: I0715 23:32:31.792139 2278 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:32:31.792445 kubelet[2278]: W0715 23:32:31.792408 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:31.792530 kubelet[2278]: E0715 23:32:31.792513 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:31.794833 kubelet[2278]: I0715 23:32:31.794800 2278 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Jul 15 23:32:31.795015 kubelet[2278]: I0715 23:32:31.794912 2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:32:31.795295 kubelet[2278]: I0715 23:32:31.795270 2278 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:32:31.797774 kubelet[2278]: I0715 23:32:31.796314 2278 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:32:31.797774 kubelet[2278]: E0715 23:32:31.796638 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:32:31.797774 kubelet[2278]: I0715 23:32:31.796665 2278 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:32:31.797774 kubelet[2278]: I0715 23:32:31.796839 2278 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:32:31.797774 kubelet[2278]: I0715 23:32:31.796884 2278 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:32:31.797774 kubelet[2278]: W0715 23:32:31.797226 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:31.797774 kubelet[2278]: E0715 23:32:31.797263 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:31.797774 kubelet[2278]: E0715 23:32:31.797475 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Jul 15 23:32:31.799803 kubelet[2278]: I0715 23:32:31.799243 2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:32:31.800577 kubelet[2278]: I0715 23:32:31.800393 2278 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:32:31.800577 kubelet[2278]: I0715 23:32:31.800409 2278 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:32:31.805150 kubelet[2278]: E0715 23:32:31.802867 2278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185290be3fe82c5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:32:31.790910554 +0000 UTC m=+1.038916419,LastTimestamp:2025-07-15 23:32:31.790910554 +0000 UTC m=+1.038916419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:32:31.805267 kubelet[2278]: E0715 23:32:31.805236 2278 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:32:31.812100 kubelet[2278]: I0715 23:32:31.812067 2278 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:32:31.812100 kubelet[2278]: I0715 23:32:31.812089 2278 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:32:31.812100 kubelet[2278]: I0715 23:32:31.812107 2278 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:32:31.813975 kubelet[2278]: I0715 23:32:31.813588 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:32:31.816054 kubelet[2278]: I0715 23:32:31.816026 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:32:31.816108 kubelet[2278]: I0715 23:32:31.816069 2278 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:32:31.816108 kubelet[2278]: I0715 23:32:31.816092 2278 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:32:31.816108 kubelet[2278]: I0715 23:32:31.816101 2278 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:32:31.816337 kubelet[2278]: E0715 23:32:31.816303 2278 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:32:31.816864 kubelet[2278]: W0715 23:32:31.816818 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:31.816893 kubelet[2278]: E0715 23:32:31.816875 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:31.896997 kubelet[2278]: E0715 23:32:31.896954 2278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:32:31.917196 kubelet[2278]: E0715 23:32:31.917166 2278 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:32:31.918888 kubelet[2278]: I0715 23:32:31.918855 2278 policy_none.go:49] "None policy: Start" Jul 15 23:32:31.918962 kubelet[2278]: I0715 23:32:31.918898 2278 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:32:31.918962 kubelet[2278]: I0715 23:32:31.918914 2278 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:32:31.925249 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:32:31.936617 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 15 23:32:31.939536 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:32:31.948833 kubelet[2278]: I0715 23:32:31.948786 2278 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:32:31.949098 kubelet[2278]: I0715 23:32:31.949058 2278 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:32:31.949159 kubelet[2278]: I0715 23:32:31.949082 2278 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:32:31.949586 kubelet[2278]: I0715 23:32:31.949546 2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:32:31.951219 kubelet[2278]: E0715 23:32:31.951195 2278 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:32:31.951284 kubelet[2278]: E0715 23:32:31.951240 2278 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 23:32:31.998302 kubelet[2278]: E0715 23:32:31.998198 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Jul 15 23:32:32.050320 kubelet[2278]: I0715 23:32:32.050289 2278 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:32:32.052525 kubelet[2278]: E0715 23:32:32.052482 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 15 23:32:32.125787 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - 
libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. Jul 15 23:32:32.134750 kubelet[2278]: E0715 23:32:32.134708 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.136617 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Jul 15 23:32:32.148129 kubelet[2278]: E0715 23:32:32.148095 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.151288 systemd[1]: Created slice kubepods-burstable-pod8458ef5f3e0816c9cfc51210c1216f5f.slice - libcontainer container kubepods-burstable-pod8458ef5f3e0816c9cfc51210c1216f5f.slice. Jul 15 23:32:32.152915 kubelet[2278]: E0715 23:32:32.152893 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.200736 kubelet[2278]: I0715 23:32:32.199182 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:32.200736 kubelet[2278]: I0715 23:32:32.199227 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:32.200736 kubelet[2278]: I0715 
23:32:32.199249 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:32.200736 kubelet[2278]: I0715 23:32:32.199269 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:32.200736 kubelet[2278]: I0715 23:32:32.199300 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:32.200985 kubelet[2278]: I0715 23:32:32.199317 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:32.200985 kubelet[2278]: I0715 23:32:32.199345 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:32.200985 kubelet[2278]: I0715 
23:32:32.199402 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:32.200985 kubelet[2278]: I0715 23:32:32.199431 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:32.254397 kubelet[2278]: I0715 23:32:32.254277 2278 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:32:32.254693 kubelet[2278]: E0715 23:32:32.254655 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 15 23:32:32.399412 kubelet[2278]: E0715 23:32:32.399367 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Jul 15 23:32:32.436435 containerd[1498]: time="2025-07-15T23:32:32.436397888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Jul 15 23:32:32.449629 containerd[1498]: time="2025-07-15T23:32:32.449582665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Jul 15 23:32:32.453108 containerd[1498]: 
time="2025-07-15T23:32:32.453052330Z" level=info msg="connecting to shim cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce" address="unix:///run/containerd/s/91723b15c22b1d0ed21b115af5d26b027051600ad3b5686f9bc0667f771f5057" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:32.454733 containerd[1498]: time="2025-07-15T23:32:32.454274237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8458ef5f3e0816c9cfc51210c1216f5f,Namespace:kube-system,Attempt:0,}" Jul 15 23:32:32.476802 containerd[1498]: time="2025-07-15T23:32:32.476655593Z" level=info msg="connecting to shim 47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518" address="unix:///run/containerd/s/a02e2e0026ef0ef842cb7592d486acc32c35671cb5556f6edff0a3b00df28e49" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:32.481138 systemd[1]: Started cri-containerd-cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce.scope - libcontainer container cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce. Jul 15 23:32:32.489158 containerd[1498]: time="2025-07-15T23:32:32.489105279Z" level=info msg="connecting to shim 79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e" address="unix:///run/containerd/s/98b789b770519137e82a23eae0374649f210a3bbe9af662ea6fe21c02695c443" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:32.504247 systemd[1]: Started cri-containerd-47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518.scope - libcontainer container 47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518. Jul 15 23:32:32.510426 systemd[1]: Started cri-containerd-79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e.scope - libcontainer container 79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e. 
Jul 15 23:32:32.528474 containerd[1498]: time="2025-07-15T23:32:32.528430235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce\"" Jul 15 23:32:32.531915 containerd[1498]: time="2025-07-15T23:32:32.531879398Z" level=info msg="CreateContainer within sandbox \"cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:32:32.541659 containerd[1498]: time="2025-07-15T23:32:32.541614811Z" level=info msg="Container 026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:32.549237 containerd[1498]: time="2025-07-15T23:32:32.549186639Z" level=info msg="CreateContainer within sandbox \"cdd90843390dcb4fca757540ec5d79a584624617452a2d403a5f2a2f71b03fce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16\"" Jul 15 23:32:32.550830 containerd[1498]: time="2025-07-15T23:32:32.550643085Z" level=info msg="StartContainer for \"026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16\"" Jul 15 23:32:32.551556 containerd[1498]: time="2025-07-15T23:32:32.551530423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518\"" Jul 15 23:32:32.553779 containerd[1498]: time="2025-07-15T23:32:32.553751031Z" level=info msg="connecting to shim 026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16" address="unix:///run/containerd/s/91723b15c22b1d0ed21b115af5d26b027051600ad3b5686f9bc0667f771f5057" protocol=ttrpc version=3 Jul 15 23:32:32.555573 
containerd[1498]: time="2025-07-15T23:32:32.555548573Z" level=info msg="CreateContainer within sandbox \"47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:32:32.562019 containerd[1498]: time="2025-07-15T23:32:32.561990395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8458ef5f3e0816c9cfc51210c1216f5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e\"" Jul 15 23:32:32.564953 containerd[1498]: time="2025-07-15T23:32:32.564915300Z" level=info msg="Container ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:32.565965 containerd[1498]: time="2025-07-15T23:32:32.565943634Z" level=info msg="CreateContainer within sandbox \"79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:32:32.572273 systemd[1]: Started cri-containerd-026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16.scope - libcontainer container 026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16. 
Jul 15 23:32:32.573850 containerd[1498]: time="2025-07-15T23:32:32.573604320Z" level=info msg="CreateContainer within sandbox \"47859876446d76765bcbf98386d8fc63f2025028b6c8feef9429f4a2b3a33518\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a\"" Jul 15 23:32:32.574275 containerd[1498]: time="2025-07-15T23:32:32.574249751Z" level=info msg="StartContainer for \"ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a\"" Jul 15 23:32:32.575455 containerd[1498]: time="2025-07-15T23:32:32.575414075Z" level=info msg="connecting to shim ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a" address="unix:///run/containerd/s/a02e2e0026ef0ef842cb7592d486acc32c35671cb5556f6edff0a3b00df28e49" protocol=ttrpc version=3 Jul 15 23:32:32.576936 containerd[1498]: time="2025-07-15T23:32:32.576896549Z" level=info msg="Container 50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:32.583349 containerd[1498]: time="2025-07-15T23:32:32.583319631Z" level=info msg="CreateContainer within sandbox \"79ebc310c26bdbb8c27490378ccb1b36470fd1c56560f5849888393633e5761e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd\"" Jul 15 23:32:32.584779 containerd[1498]: time="2025-07-15T23:32:32.584750328Z" level=info msg="StartContainer for \"50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd\"" Jul 15 23:32:32.585820 containerd[1498]: time="2025-07-15T23:32:32.585786631Z" level=info msg="connecting to shim 50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd" address="unix:///run/containerd/s/98b789b770519137e82a23eae0374649f210a3bbe9af662ea6fe21c02695c443" protocol=ttrpc version=3 Jul 15 23:32:32.597225 systemd[1]: Started 
cri-containerd-ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a.scope - libcontainer container ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a. Jul 15 23:32:32.598107 kubelet[2278]: W0715 23:32:32.597556 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:32.598107 kubelet[2278]: E0715 23:32:32.597622 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:32.612446 systemd[1]: Started cri-containerd-50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd.scope - libcontainer container 50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd. 
Jul 15 23:32:32.620441 containerd[1498]: time="2025-07-15T23:32:32.620129254Z" level=info msg="StartContainer for \"026f250dbce65a9f881b95c2718f5b9e5b606725ce558090536b8f63132d5f16\" returns successfully" Jul 15 23:32:32.656414 kubelet[2278]: I0715 23:32:32.656367 2278 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:32:32.657392 kubelet[2278]: E0715 23:32:32.657249 2278 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 15 23:32:32.665016 containerd[1498]: time="2025-07-15T23:32:32.660588901Z" level=info msg="StartContainer for \"ca08d143b2c31de213d676198a16201d8ab001787b5227e4204a509631222a6a\" returns successfully" Jul 15 23:32:32.701083 containerd[1498]: time="2025-07-15T23:32:32.700782855Z" level=info msg="StartContainer for \"50dbc11f737c5ec387a5091bcbcdba8f63d4d367d4373c536965af2eef3f25cd\" returns successfully" Jul 15 23:32:32.826898 kubelet[2278]: E0715 23:32:32.826676 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.828868 kubelet[2278]: E0715 23:32:32.828839 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.832984 kubelet[2278]: E0715 23:32:32.832688 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:32.847295 kubelet[2278]: W0715 23:32:32.847222 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 15 23:32:32.847418 kubelet[2278]: 
E0715 23:32:32.847304 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:32:33.459009 kubelet[2278]: I0715 23:32:33.458975 2278 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:32:33.835098 kubelet[2278]: E0715 23:32:33.835063 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:33.835622 kubelet[2278]: E0715 23:32:33.835591 2278 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:32:33.976335 kubelet[2278]: E0715 23:32:33.976286 2278 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 23:32:34.059372 kubelet[2278]: I0715 23:32:34.059327 2278 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:32:34.059372 kubelet[2278]: E0715 23:32:34.059370 2278 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 23:32:34.097701 kubelet[2278]: I0715 23:32:34.097425 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:34.109456 kubelet[2278]: E0715 23:32:34.109335 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:34.109456 kubelet[2278]: I0715 23:32:34.109362 2278 kubelet.go:3194] "Creating a mirror pod 
for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:34.111913 kubelet[2278]: E0715 23:32:34.111874 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:34.111913 kubelet[2278]: I0715 23:32:34.111898 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:34.113619 kubelet[2278]: E0715 23:32:34.113576 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:34.790889 kubelet[2278]: I0715 23:32:34.790844 2278 apiserver.go:52] "Watching apiserver" Jul 15 23:32:34.797226 kubelet[2278]: I0715 23:32:34.797200 2278 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:32:34.835757 kubelet[2278]: I0715 23:32:34.835731 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:34.837563 kubelet[2278]: E0715 23:32:34.837528 2278 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:35.745696 systemd[1]: Reload requested from client PID 2551 ('systemctl') (unit session-7.scope)... Jul 15 23:32:35.745713 systemd[1]: Reloading... Jul 15 23:32:35.805004 zram_generator::config[2594]: No configuration found. 
Jul 15 23:32:35.837306 kubelet[2278]: I0715 23:32:35.837274 2278 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:35.921638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:32:36.023567 systemd[1]: Reloading finished in 277 ms. Jul 15 23:32:36.056520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:32:36.072843 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:32:36.073211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:32:36.073343 systemd[1]: kubelet.service: Consumed 1.434s CPU time, 128.8M memory peak. Jul 15 23:32:36.075120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:32:36.215719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:32:36.218965 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:32:36.257963 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:32:36.257963 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:32:36.257963 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:32:36.258602 kubelet[2636]: I0715 23:32:36.258207 2636 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:32:36.264399 kubelet[2636]: I0715 23:32:36.264367 2636 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:32:36.264535 kubelet[2636]: I0715 23:32:36.264523 2636 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:32:36.264850 kubelet[2636]: I0715 23:32:36.264831 2636 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:32:36.267955 kubelet[2636]: I0715 23:32:36.267897 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:32:36.270388 kubelet[2636]: I0715 23:32:36.270245 2636 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:32:36.274114 kubelet[2636]: I0715 23:32:36.274032 2636 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:32:36.277196 kubelet[2636]: I0715 23:32:36.277175 2636 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:32:36.277565 kubelet[2636]: I0715 23:32:36.277523 2636 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:32:36.277879 kubelet[2636]: I0715 23:32:36.277641 2636 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:32:36.278071 kubelet[2636]: I0715 23:32:36.278054 2636 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 15 23:32:36.278130 kubelet[2636]: I0715 23:32:36.278122 2636 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:32:36.278241 kubelet[2636]: I0715 23:32:36.278229 2636 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:32:36.278454 kubelet[2636]: I0715 23:32:36.278442 2636 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:32:36.278559 kubelet[2636]: I0715 23:32:36.278547 2636 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:32:36.279276 kubelet[2636]: I0715 23:32:36.278998 2636 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:32:36.279276 kubelet[2636]: I0715 23:32:36.279018 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:32:36.279950 kubelet[2636]: I0715 23:32:36.279920 2636 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:32:36.281154 kubelet[2636]: I0715 23:32:36.281137 2636 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:32:36.281902 kubelet[2636]: I0715 23:32:36.281884 2636 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:32:36.282055 kubelet[2636]: I0715 23:32:36.282043 2636 server.go:1287] "Started kubelet" Jul 15 23:32:36.282451 kubelet[2636]: I0715 23:32:36.282237 2636 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:32:36.282796 kubelet[2636]: I0715 23:32:36.282744 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:32:36.283506 kubelet[2636]: I0715 23:32:36.283476 2636 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:32:36.284243 kubelet[2636]: I0715 23:32:36.283967 2636 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:32:36.284419 kubelet[2636]: I0715 23:32:36.284394 2636 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:32:36.285496 kubelet[2636]: I0715 23:32:36.284984 2636 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:32:36.288934 kubelet[2636]: I0715 23:32:36.287385 2636 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:32:36.288934 kubelet[2636]: E0715 23:32:36.287544 2636 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:32:36.288934 kubelet[2636]: I0715 23:32:36.288454 2636 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:32:36.288934 kubelet[2636]: I0715 23:32:36.288628 2636 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:32:36.289995 kubelet[2636]: I0715 23:32:36.289968 2636 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:32:36.291227 kubelet[2636]: I0715 23:32:36.290126 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:32:36.295442 kubelet[2636]: I0715 23:32:36.295418 2636 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:32:36.312595 kubelet[2636]: I0715 23:32:36.312399 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:32:36.313365 kubelet[2636]: I0715 23:32:36.313298 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:32:36.313365 kubelet[2636]: I0715 23:32:36.313353 2636 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:32:36.313469 kubelet[2636]: I0715 23:32:36.313378 2636 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:32:36.313469 kubelet[2636]: I0715 23:32:36.313405 2636 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:32:36.313469 kubelet[2636]: E0715 23:32:36.313450 2636 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:32:36.338845 kubelet[2636]: I0715 23:32:36.338821 2636 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:32:36.338845 kubelet[2636]: I0715 23:32:36.338838 2636 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:32:36.338991 kubelet[2636]: I0715 23:32:36.338856 2636 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:32:36.339013 kubelet[2636]: I0715 23:32:36.339002 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:32:36.339035 kubelet[2636]: I0715 23:32:36.339013 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:32:36.339035 kubelet[2636]: I0715 23:32:36.339030 2636 policy_none.go:49] "None policy: Start" Jul 15 23:32:36.339076 kubelet[2636]: I0715 23:32:36.339038 2636 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:32:36.339076 kubelet[2636]: I0715 23:32:36.339046 2636 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:32:36.339144 kubelet[2636]: I0715 23:32:36.339131 2636 state_mem.go:75] "Updated machine memory state" Jul 15 23:32:36.343117 kubelet[2636]: I0715 23:32:36.342969 2636 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:32:36.343117 kubelet[2636]: I0715 23:32:36.343102 2636 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:32:36.343211 kubelet[2636]: I0715 23:32:36.343111 2636 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:32:36.343351 kubelet[2636]: I0715 23:32:36.343292 2636 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:32:36.345301 kubelet[2636]: E0715 23:32:36.344907 2636 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:32:36.414308 kubelet[2636]: I0715 23:32:36.414269 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:36.414438 kubelet[2636]: I0715 23:32:36.414376 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:36.414463 kubelet[2636]: I0715 23:32:36.414443 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.420239 kubelet[2636]: E0715 23:32:36.420211 2636 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:36.445643 kubelet[2636]: I0715 23:32:36.445497 2636 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:32:36.454190 kubelet[2636]: I0715 23:32:36.454069 2636 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 23:32:36.454295 kubelet[2636]: I0715 23:32:36.454163 2636 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:32:36.590164 kubelet[2636]: I0715 23:32:36.590061 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.590494 kubelet[2636]: I0715 23:32:36.590309 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:36.590494 kubelet[2636]: I0715 23:32:36.590336 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:36.590494 kubelet[2636]: I0715 23:32:36.590355 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.590494 kubelet[2636]: I0715 23:32:36.590372 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.590494 kubelet[2636]: I0715 23:32:36.590388 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.590651 kubelet[2636]: I0715 23:32:36.590409 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8458ef5f3e0816c9cfc51210c1216f5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8458ef5f3e0816c9cfc51210c1216f5f\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:32:36.590651 kubelet[2636]: I0715 23:32:36.590423 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:32:36.590651 kubelet[2636]: I0715 23:32:36.590438 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:37.281217 kubelet[2636]: I0715 23:32:37.280091 2636 apiserver.go:52] "Watching apiserver" Jul 15 23:32:37.288999 kubelet[2636]: I0715 23:32:37.288971 2636 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:32:37.325032 kubelet[2636]: I0715 23:32:37.324837 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:37.331397 kubelet[2636]: E0715 23:32:37.331349 2636 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:32:37.345475 kubelet[2636]: I0715 23:32:37.344990 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.344976217 podStartE2EDuration="1.344976217s" podCreationTimestamp="2025-07-15 23:32:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:32:37.344686853 +0000 UTC m=+1.122905587" watchObservedRunningTime="2025-07-15 23:32:37.344976217 +0000 UTC m=+1.123194911" Jul 15 23:32:37.413732 kubelet[2636]: I0715 23:32:37.413618 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.413579953 podStartE2EDuration="1.413579953s" podCreationTimestamp="2025-07-15 23:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:32:37.365567868 +0000 UTC m=+1.143786602" watchObservedRunningTime="2025-07-15 23:32:37.413579953 +0000 UTC m=+1.191798687" Jul 15 23:32:37.414089 kubelet[2636]: I0715 23:32:37.413975 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.413966691 podStartE2EDuration="2.413966691s" podCreationTimestamp="2025-07-15 23:32:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:32:37.413820409 +0000 UTC m=+1.192039143" watchObservedRunningTime="2025-07-15 23:32:37.413966691 +0000 UTC m=+1.192185425" Jul 15 23:32:41.479226 kubelet[2636]: I0715 23:32:41.479179 2636 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:32:41.479644 containerd[1498]: time="2025-07-15T23:32:41.479525469Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 23:32:41.479811 kubelet[2636]: I0715 23:32:41.479727 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:32:42.188843 systemd[1]: Created slice kubepods-besteffort-podd7a2a1e2_3ab9_4063_b1a7_0ee87930ad40.slice - libcontainer container kubepods-besteffort-podd7a2a1e2_3ab9_4063_b1a7_0ee87930ad40.slice. Jul 15 23:32:42.229172 kubelet[2636]: I0715 23:32:42.229115 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40-kube-proxy\") pod \"kube-proxy-m4xzl\" (UID: \"d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40\") " pod="kube-system/kube-proxy-m4xzl" Jul 15 23:32:42.229172 kubelet[2636]: I0715 23:32:42.229170 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40-lib-modules\") pod \"kube-proxy-m4xzl\" (UID: \"d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40\") " pod="kube-system/kube-proxy-m4xzl" Jul 15 23:32:42.229354 kubelet[2636]: I0715 23:32:42.229199 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40-xtables-lock\") pod \"kube-proxy-m4xzl\" (UID: \"d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40\") " pod="kube-system/kube-proxy-m4xzl" Jul 15 23:32:42.229354 kubelet[2636]: I0715 23:32:42.229224 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5gz\" (UniqueName: \"kubernetes.io/projected/d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40-kube-api-access-4m5gz\") pod \"kube-proxy-m4xzl\" (UID: \"d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40\") " pod="kube-system/kube-proxy-m4xzl" Jul 15 23:32:42.500585 containerd[1498]: time="2025-07-15T23:32:42.500462922Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-m4xzl,Uid:d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40,Namespace:kube-system,Attempt:0,}" Jul 15 23:32:42.531212 containerd[1498]: time="2025-07-15T23:32:42.531159529Z" level=info msg="connecting to shim a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208" address="unix:///run/containerd/s/33c587f5871e2bd5ca882a6c01125abf10496ba1b4b6f3b2f7da2034f0f239ec" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:42.558144 systemd[1]: Started cri-containerd-a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208.scope - libcontainer container a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208. Jul 15 23:32:42.585300 containerd[1498]: time="2025-07-15T23:32:42.585257504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4xzl,Uid:d7a2a1e2-3ab9-4063-b1a7-0ee87930ad40,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208\"" Jul 15 23:32:42.588749 containerd[1498]: time="2025-07-15T23:32:42.588698309Z" level=info msg="CreateContainer within sandbox \"a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:32:42.601968 containerd[1498]: time="2025-07-15T23:32:42.599606993Z" level=info msg="Container e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:42.609960 containerd[1498]: time="2025-07-15T23:32:42.609892713Z" level=info msg="CreateContainer within sandbox \"a3a01a21e68120a29bdb5e7c2fc7799d80eb6d8859e0a2c5b506d9c7c5051208\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10\"" Jul 15 23:32:42.611636 containerd[1498]: time="2025-07-15T23:32:42.611578880Z" level=info msg="StartContainer for \"e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10\"" Jul 15 23:32:42.615362 
containerd[1498]: time="2025-07-15T23:32:42.615315740Z" level=info msg="connecting to shim e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10" address="unix:///run/containerd/s/33c587f5871e2bd5ca882a6c01125abf10496ba1b4b6f3b2f7da2034f0f239ec" protocol=ttrpc version=3 Jul 15 23:32:42.639140 systemd[1]: Started cri-containerd-e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10.scope - libcontainer container e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10. Jul 15 23:32:42.671659 systemd[1]: Created slice kubepods-besteffort-pod8ed267bd_2621_42df_9f09_b97114ec288e.slice - libcontainer container kubepods-besteffort-pod8ed267bd_2621_42df_9f09_b97114ec288e.slice. Jul 15 23:32:42.692713 containerd[1498]: time="2025-07-15T23:32:42.692678100Z" level=info msg="StartContainer for \"e170a87059eb6d4d9a614b4f10065000a0cf2da09f834006296a5aff0caaaf10\" returns successfully" Jul 15 23:32:42.733349 kubelet[2636]: I0715 23:32:42.733252 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwv4\" (UniqueName: \"kubernetes.io/projected/8ed267bd-2621-42df-9f09-b97114ec288e-kube-api-access-knwv4\") pod \"tigera-operator-747864d56d-798mp\" (UID: \"8ed267bd-2621-42df-9f09-b97114ec288e\") " pod="tigera-operator/tigera-operator-747864d56d-798mp" Jul 15 23:32:42.733775 kubelet[2636]: I0715 23:32:42.733711 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ed267bd-2621-42df-9f09-b97114ec288e-var-lib-calico\") pod \"tigera-operator-747864d56d-798mp\" (UID: \"8ed267bd-2621-42df-9f09-b97114ec288e\") " pod="tigera-operator/tigera-operator-747864d56d-798mp" Jul 15 23:32:42.976652 containerd[1498]: time="2025-07-15T23:32:42.976595803Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-747864d56d-798mp,Uid:8ed267bd-2621-42df-9f09-b97114ec288e,Namespace:tigera-operator,Attempt:0,}" Jul 15 23:32:42.996460 containerd[1498]: time="2025-07-15T23:32:42.996408377Z" level=info msg="connecting to shim e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6" address="unix:///run/containerd/s/426db8e9cbbd4cdd91b71e3977530dfb01f1e0b434de8c6b5ec7e816de83b41a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:43.023132 systemd[1]: Started cri-containerd-e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6.scope - libcontainer container e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6. Jul 15 23:32:43.064777 containerd[1498]: time="2025-07-15T23:32:43.064713042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-798mp,Uid:8ed267bd-2621-42df-9f09-b97114ec288e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6\"" Jul 15 23:32:43.068088 containerd[1498]: time="2025-07-15T23:32:43.068012182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 23:32:43.349969 kubelet[2636]: I0715 23:32:43.348978 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4xzl" podStartSLOduration=1.348905975 podStartE2EDuration="1.348905975s" podCreationTimestamp="2025-07-15 23:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:32:43.348890329 +0000 UTC m=+7.127109063" watchObservedRunningTime="2025-07-15 23:32:43.348905975 +0000 UTC m=+7.127124709" Jul 15 23:32:44.378493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530655907.mount: Deactivated successfully. 
Jul 15 23:32:44.816519 containerd[1498]: time="2025-07-15T23:32:44.816472122Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:44.817083 containerd[1498]: time="2025-07-15T23:32:44.817057520Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 23:32:44.817768 containerd[1498]: time="2025-07-15T23:32:44.817739118Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:44.819680 containerd[1498]: time="2025-07-15T23:32:44.819650696Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:44.820572 containerd[1498]: time="2025-07-15T23:32:44.820248659Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.752200663s" Jul 15 23:32:44.820572 containerd[1498]: time="2025-07-15T23:32:44.820283153Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 23:32:44.824669 containerd[1498]: time="2025-07-15T23:32:44.824128559Z" level=info msg="CreateContainer within sandbox \"e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 23:32:44.831051 containerd[1498]: time="2025-07-15T23:32:44.831004319Z" level=info msg="Container 
1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:44.836997 containerd[1498]: time="2025-07-15T23:32:44.836854341Z" level=info msg="CreateContainer within sandbox \"e66c3a50baae52f0859115face9579e396880a8e3536dfd780f7d7102efe88e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c\"" Jul 15 23:32:44.837556 containerd[1498]: time="2025-07-15T23:32:44.837369191Z" level=info msg="StartContainer for \"1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c\"" Jul 15 23:32:44.838214 containerd[1498]: time="2025-07-15T23:32:44.838178840Z" level=info msg="connecting to shim 1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c" address="unix:///run/containerd/s/426db8e9cbbd4cdd91b71e3977530dfb01f1e0b434de8c6b5ec7e816de83b41a" protocol=ttrpc version=3 Jul 15 23:32:44.861103 systemd[1]: Started cri-containerd-1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c.scope - libcontainer container 1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c. 
Jul 15 23:32:44.899876 containerd[1498]: time="2025-07-15T23:32:44.899812736Z" level=info msg="StartContainer for \"1fa846662a0fa4a966cb48758493552a091e81d2867d7a6d550149c24cea7a4c\" returns successfully" Jul 15 23:32:45.402257 kubelet[2636]: I0715 23:32:45.402054 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-798mp" podStartSLOduration=1.64282814 podStartE2EDuration="3.39845663s" podCreationTimestamp="2025-07-15 23:32:42 +0000 UTC" firstStartedPulling="2025-07-15 23:32:43.066672925 +0000 UTC m=+6.844891659" lastFinishedPulling="2025-07-15 23:32:44.822301415 +0000 UTC m=+8.600520149" observedRunningTime="2025-07-15 23:32:45.398232463 +0000 UTC m=+9.176451157" watchObservedRunningTime="2025-07-15 23:32:45.39845663 +0000 UTC m=+9.176675364" Jul 15 23:32:50.334977 sudo[1716]: pam_unix(sudo:session): session closed for user root Jul 15 23:32:50.340498 sshd[1715]: Connection closed by 10.0.0.1 port 55150 Jul 15 23:32:50.342253 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jul 15 23:32:50.346216 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:55150.service: Deactivated successfully. Jul 15 23:32:50.348451 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:32:50.348706 systemd[1]: session-7.scope: Consumed 7.431s CPU time, 233.8M memory peak. Jul 15 23:32:50.351573 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:32:50.354666 systemd-logind[1476]: Removed session 7. Jul 15 23:32:51.502434 update_engine[1484]: I20250715 23:32:51.501969 1484 update_attempter.cc:509] Updating boot flags... Jul 15 23:32:55.745327 systemd[1]: Created slice kubepods-besteffort-pod9e482116_e365_4f56_b1aa_6f55ba73aa2f.slice - libcontainer container kubepods-besteffort-pod9e482116_e365_4f56_b1aa_6f55ba73aa2f.slice. 
Jul 15 23:32:55.826365 kubelet[2636]: I0715 23:32:55.826316 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl4q6\" (UniqueName: \"kubernetes.io/projected/9e482116-e365-4f56-b1aa-6f55ba73aa2f-kube-api-access-vl4q6\") pod \"calico-typha-6f8f894d96-xt52f\" (UID: \"9e482116-e365-4f56-b1aa-6f55ba73aa2f\") " pod="calico-system/calico-typha-6f8f894d96-xt52f" Jul 15 23:32:55.826971 kubelet[2636]: I0715 23:32:55.826476 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e482116-e365-4f56-b1aa-6f55ba73aa2f-tigera-ca-bundle\") pod \"calico-typha-6f8f894d96-xt52f\" (UID: \"9e482116-e365-4f56-b1aa-6f55ba73aa2f\") " pod="calico-system/calico-typha-6f8f894d96-xt52f" Jul 15 23:32:55.826971 kubelet[2636]: I0715 23:32:55.826500 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e482116-e365-4f56-b1aa-6f55ba73aa2f-typha-certs\") pod \"calico-typha-6f8f894d96-xt52f\" (UID: \"9e482116-e365-4f56-b1aa-6f55ba73aa2f\") " pod="calico-system/calico-typha-6f8f894d96-xt52f" Jul 15 23:32:56.051203 containerd[1498]: time="2025-07-15T23:32:56.051115877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8f894d96-xt52f,Uid:9e482116-e365-4f56-b1aa-6f55ba73aa2f,Namespace:calico-system,Attempt:0,}" Jul 15 23:32:56.087074 containerd[1498]: time="2025-07-15T23:32:56.086202277Z" level=info msg="connecting to shim e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a" address="unix:///run/containerd/s/1316616eac501a56f94ef18e4a4dac1b86f4844b0f2f23ba6f646434d64b6ca1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:56.117699 systemd[1]: Created slice kubepods-besteffort-pod156f303b_ad78_44ea_a5a2_54a3b2b6c38b.slice - libcontainer container 
kubepods-besteffort-pod156f303b_ad78_44ea_a5a2_54a3b2b6c38b.slice. Jul 15 23:32:56.128633 kubelet[2636]: I0715 23:32:56.128583 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-node-certs\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.128633 kubelet[2636]: I0715 23:32:56.128628 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-cni-bin-dir\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.128889 kubelet[2636]: I0715 23:32:56.128646 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-policysync\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.128889 kubelet[2636]: I0715 23:32:56.128663 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-var-lib-calico\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.128889 kubelet[2636]: I0715 23:32:56.128682 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chpvx\" (UniqueName: \"kubernetes.io/projected/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-kube-api-access-chpvx\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 
23:32:56.128889 kubelet[2636]: I0715 23:32:56.128751 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-flexvol-driver-host\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.128889 kubelet[2636]: I0715 23:32:56.128790 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-var-run-calico\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.129069 kubelet[2636]: I0715 23:32:56.128811 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-lib-modules\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.129069 kubelet[2636]: I0715 23:32:56.128830 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-cni-log-dir\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.129069 kubelet[2636]: I0715 23:32:56.128849 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-cni-net-dir\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.129069 kubelet[2636]: I0715 23:32:56.128868 2636 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-tigera-ca-bundle\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.129069 kubelet[2636]: I0715 23:32:56.128892 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/156f303b-ad78-44ea-a5a2-54a3b2b6c38b-xtables-lock\") pod \"calico-node-k8psf\" (UID: \"156f303b-ad78-44ea-a5a2-54a3b2b6c38b\") " pod="calico-system/calico-node-k8psf" Jul 15 23:32:56.146108 systemd[1]: Started cri-containerd-e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a.scope - libcontainer container e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a. Jul 15 23:32:56.203262 containerd[1498]: time="2025-07-15T23:32:56.203147449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f8f894d96-xt52f,Uid:9e482116-e365-4f56-b1aa-6f55ba73aa2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a\"" Jul 15 23:32:56.213542 containerd[1498]: time="2025-07-15T23:32:56.213515171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 23:32:56.237899 kubelet[2636]: E0715 23:32:56.237790 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.237899 kubelet[2636]: W0715 23:32:56.237814 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.239008 kubelet[2636]: E0715 23:32:56.238833 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.252054 kubelet[2636]: E0715 23:32:56.252028 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.252054 kubelet[2636]: W0715 23:32:56.252050 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.252171 kubelet[2636]: E0715 23:32:56.252071 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.404967 kubelet[2636]: E0715 23:32:56.403961 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9dn4" podUID="8911d17a-4907-4974-b467-7de008d2f4c7" Jul 15 23:32:56.421863 containerd[1498]: time="2025-07-15T23:32:56.421825047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8psf,Uid:156f303b-ad78-44ea-a5a2-54a3b2b6c38b,Namespace:calico-system,Attempt:0,}" Jul 15 23:32:56.426236 kubelet[2636]: E0715 23:32:56.426197 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.426236 kubelet[2636]: W0715 23:32:56.426230 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.426387 kubelet[2636]: E0715 23:32:56.426251 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.427729 kubelet[2636]: E0715 23:32:56.427700 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.427957 kubelet[2636]: W0715 23:32:56.427717 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.427957 kubelet[2636]: E0715 23:32:56.427765 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.428067 kubelet[2636]: E0715 23:32:56.428040 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.428067 kubelet[2636]: W0715 23:32:56.428049 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.428067 kubelet[2636]: E0715 23:32:56.428059 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.428253 kubelet[2636]: E0715 23:32:56.428240 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.428253 kubelet[2636]: W0715 23:32:56.428251 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.428316 kubelet[2636]: E0715 23:32:56.428259 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.428462 kubelet[2636]: E0715 23:32:56.428449 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.428462 kubelet[2636]: W0715 23:32:56.428460 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.428520 kubelet[2636]: E0715 23:32:56.428471 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.428682 kubelet[2636]: E0715 23:32:56.428671 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.428682 kubelet[2636]: W0715 23:32:56.428682 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.428743 kubelet[2636]: E0715 23:32:56.428689 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.428821 kubelet[2636]: E0715 23:32:56.428810 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.428821 kubelet[2636]: W0715 23:32:56.428819 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.428883 kubelet[2636]: E0715 23:32:56.428826 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.429042 kubelet[2636]: E0715 23:32:56.429028 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429083 kubelet[2636]: W0715 23:32:56.429051 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429083 kubelet[2636]: E0715 23:32:56.429062 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.429239 kubelet[2636]: E0715 23:32:56.429226 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429239 kubelet[2636]: W0715 23:32:56.429238 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429301 kubelet[2636]: E0715 23:32:56.429247 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.429413 kubelet[2636]: E0715 23:32:56.429396 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429413 kubelet[2636]: W0715 23:32:56.429407 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429413 kubelet[2636]: E0715 23:32:56.429414 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.429550 kubelet[2636]: E0715 23:32:56.429538 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429550 kubelet[2636]: W0715 23:32:56.429548 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429613 kubelet[2636]: E0715 23:32:56.429556 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.429704 kubelet[2636]: E0715 23:32:56.429694 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429704 kubelet[2636]: W0715 23:32:56.429703 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429759 kubelet[2636]: E0715 23:32:56.429710 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.429846 kubelet[2636]: E0715 23:32:56.429835 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.429846 kubelet[2636]: W0715 23:32:56.429844 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.429893 kubelet[2636]: E0715 23:32:56.429852 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.430626 kubelet[2636]: E0715 23:32:56.430607 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.430626 kubelet[2636]: W0715 23:32:56.430625 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.430733 kubelet[2636]: E0715 23:32:56.430637 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.430819 kubelet[2636]: E0715 23:32:56.430805 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.430857 kubelet[2636]: W0715 23:32:56.430819 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.430857 kubelet[2636]: E0715 23:32:56.430829 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.431585 kubelet[2636]: E0715 23:32:56.431520 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.431638 kubelet[2636]: W0715 23:32:56.431585 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.431638 kubelet[2636]: E0715 23:32:56.431611 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.432017 kubelet[2636]: E0715 23:32:56.432001 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.432017 kubelet[2636]: W0715 23:32:56.432017 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.432088 kubelet[2636]: E0715 23:32:56.432027 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.432558 kubelet[2636]: E0715 23:32:56.432502 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.432654 kubelet[2636]: W0715 23:32:56.432558 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.432654 kubelet[2636]: E0715 23:32:56.432571 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.433123 kubelet[2636]: E0715 23:32:56.433106 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.433158 kubelet[2636]: W0715 23:32:56.433133 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.433158 kubelet[2636]: E0715 23:32:56.433145 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.433357 kubelet[2636]: E0715 23:32:56.433345 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.433357 kubelet[2636]: W0715 23:32:56.433357 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.433425 kubelet[2636]: E0715 23:32:56.433374 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.433687 kubelet[2636]: E0715 23:32:56.433671 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.433719 kubelet[2636]: W0715 23:32:56.433687 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.433719 kubelet[2636]: E0715 23:32:56.433700 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.433758 kubelet[2636]: I0715 23:32:56.433727 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8911d17a-4907-4974-b467-7de008d2f4c7-registration-dir\") pod \"csi-node-driver-h9dn4\" (UID: \"8911d17a-4907-4974-b467-7de008d2f4c7\") " pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:32:56.433905 kubelet[2636]: E0715 23:32:56.433890 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.433997 kubelet[2636]: W0715 23:32:56.433981 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.434027 kubelet[2636]: E0715 23:32:56.434004 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.434186 kubelet[2636]: I0715 23:32:56.434082 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxw78\" (UniqueName: \"kubernetes.io/projected/8911d17a-4907-4974-b467-7de008d2f4c7-kube-api-access-fxw78\") pod \"csi-node-driver-h9dn4\" (UID: \"8911d17a-4907-4974-b467-7de008d2f4c7\") " pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:32:56.434350 kubelet[2636]: E0715 23:32:56.434330 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.434444 kubelet[2636]: W0715 23:32:56.434427 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.434670 kubelet[2636]: E0715 23:32:56.434654 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.434959 kubelet[2636]: E0715 23:32:56.434923 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.435010 kubelet[2636]: W0715 23:32:56.434957 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.435010 kubelet[2636]: E0715 23:32:56.434984 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.435221 kubelet[2636]: E0715 23:32:56.435194 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.435221 kubelet[2636]: W0715 23:32:56.435203 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.435307 kubelet[2636]: E0715 23:32:56.435231 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.435307 kubelet[2636]: I0715 23:32:56.435254 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8911d17a-4907-4974-b467-7de008d2f4c7-kubelet-dir\") pod \"csi-node-driver-h9dn4\" (UID: \"8911d17a-4907-4974-b467-7de008d2f4c7\") " pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:32:56.435517 kubelet[2636]: E0715 23:32:56.435472 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.435517 kubelet[2636]: W0715 23:32:56.435485 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.435517 kubelet[2636]: E0715 23:32:56.435497 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.435733 kubelet[2636]: E0715 23:32:56.435713 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.435733 kubelet[2636]: W0715 23:32:56.435727 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.435904 kubelet[2636]: E0715 23:32:56.435738 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.436003 kubelet[2636]: E0715 23:32:56.435991 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.436003 kubelet[2636]: W0715 23:32:56.436002 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.436053 kubelet[2636]: E0715 23:32:56.436010 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.436196 kubelet[2636]: E0715 23:32:56.436184 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.436196 kubelet[2636]: W0715 23:32:56.436195 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.436311 kubelet[2636]: E0715 23:32:56.436202 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.437022 kubelet[2636]: E0715 23:32:56.437005 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.437022 kubelet[2636]: W0715 23:32:56.437019 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.437174 kubelet[2636]: E0715 23:32:56.437028 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.437174 kubelet[2636]: I0715 23:32:56.437048 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8911d17a-4907-4974-b467-7de008d2f4c7-socket-dir\") pod \"csi-node-driver-h9dn4\" (UID: \"8911d17a-4907-4974-b467-7de008d2f4c7\") " pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:32:56.437348 kubelet[2636]: E0715 23:32:56.437332 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.440825 kubelet[2636]: W0715 23:32:56.437345 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.440920 kubelet[2636]: E0715 23:32:56.440837 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.440920 kubelet[2636]: I0715 23:32:56.440858 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8911d17a-4907-4974-b467-7de008d2f4c7-varrun\") pod \"csi-node-driver-h9dn4\" (UID: \"8911d17a-4907-4974-b467-7de008d2f4c7\") " pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:32:56.441200 kubelet[2636]: E0715 23:32:56.441185 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.441258 kubelet[2636]: W0715 23:32:56.441199 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.441334 kubelet[2636]: E0715 23:32:56.441314 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.441554 kubelet[2636]: E0715 23:32:56.441541 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.441554 kubelet[2636]: W0715 23:32:56.441554 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.441644 kubelet[2636]: E0715 23:32:56.441568 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.441726 kubelet[2636]: E0715 23:32:56.441713 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.441771 kubelet[2636]: W0715 23:32:56.441726 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.441771 kubelet[2636]: E0715 23:32:56.441735 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.442026 kubelet[2636]: E0715 23:32:56.442010 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.442103 kubelet[2636]: W0715 23:32:56.442089 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.442177 kubelet[2636]: E0715 23:32:56.442147 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.442231 containerd[1498]: time="2025-07-15T23:32:56.442194289Z" level=info msg="connecting to shim dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c" address="unix:///run/containerd/s/5f9fb454fd32618919a3cfa994a5d7dbb2436fbe9b0bd95e5c4f33f7072bdd85" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:32:56.472094 systemd[1]: Started cri-containerd-dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c.scope - libcontainer container dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c. 
Jul 15 23:32:56.497702 containerd[1498]: time="2025-07-15T23:32:56.497509181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8psf,Uid:156f303b-ad78-44ea-a5a2-54a3b2b6c38b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\"" Jul 15 23:32:56.542379 kubelet[2636]: E0715 23:32:56.542348 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.542379 kubelet[2636]: W0715 23:32:56.542373 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.542625 kubelet[2636]: E0715 23:32:56.542399 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.542625 kubelet[2636]: E0715 23:32:56.542620 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.542826 kubelet[2636]: W0715 23:32:56.542631 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.542826 kubelet[2636]: E0715 23:32:56.542648 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:32:56.548707 kubelet[2636]: E0715 23:32:56.548693 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.548956 kubelet[2636]: W0715 23:32:56.548771 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.548956 kubelet[2636]: E0715 23:32:56.548789 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:56.568032 kubelet[2636]: E0715 23:32:56.567994 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:32:56.568032 kubelet[2636]: W0715 23:32:56.568016 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:32:56.568032 kubelet[2636]: E0715 23:32:56.568041 2636 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:32:57.155260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940175372.mount: Deactivated successfully. 
Jul 15 23:32:58.314503 kubelet[2636]: E0715 23:32:58.314461 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9dn4" podUID="8911d17a-4907-4974-b467-7de008d2f4c7" Jul 15 23:32:58.325942 containerd[1498]: time="2025-07-15T23:32:58.325210898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:58.327902 containerd[1498]: time="2025-07-15T23:32:58.327728244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 23:32:58.328701 containerd[1498]: time="2025-07-15T23:32:58.328663471Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:58.333195 containerd[1498]: time="2025-07-15T23:32:58.332410543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:58.354249 containerd[1498]: time="2025-07-15T23:32:58.353599115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.139908145s" Jul 15 23:32:58.354942 containerd[1498]: time="2025-07-15T23:32:58.354389193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 23:32:58.358307 containerd[1498]: time="2025-07-15T23:32:58.358285375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:32:58.378880 containerd[1498]: time="2025-07-15T23:32:58.378850542Z" level=info msg="CreateContainer within sandbox \"e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:32:58.391292 containerd[1498]: time="2025-07-15T23:32:58.390729605Z" level=info msg="Container ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:58.406865 containerd[1498]: time="2025-07-15T23:32:58.406758702Z" level=info msg="CreateContainer within sandbox \"e1056200641d694ec3c21946f8bb86ddaf6b88e2b00bc5a05a00227114296e5a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456\"" Jul 15 23:32:58.407455 containerd[1498]: time="2025-07-15T23:32:58.407414713Z" level=info msg="StartContainer for \"ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456\"" Jul 15 23:32:58.408645 containerd[1498]: time="2025-07-15T23:32:58.408623596Z" level=info msg="connecting to shim ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456" address="unix:///run/containerd/s/1316616eac501a56f94ef18e4a4dac1b86f4844b0f2f23ba6f646434d64b6ca1" protocol=ttrpc version=3 Jul 15 23:32:58.438134 systemd[1]: Started cri-containerd-ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456.scope - libcontainer container ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456. 
Jul 15 23:32:58.475315 containerd[1498]: time="2025-07-15T23:32:58.475268649Z" level=info msg="StartContainer for \"ac5097e0c64353c26b80681487dc10617d4bf770cdbd5724a75cc02fbea62456\" returns successfully" Jul 15 23:32:59.304695 containerd[1498]: time="2025-07-15T23:32:59.304647197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:59.305262 containerd[1498]: time="2025-07-15T23:32:59.305237630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 23:32:59.306010 containerd[1498]: time="2025-07-15T23:32:59.305980013Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:59.308063 containerd[1498]: time="2025-07-15T23:32:59.308038208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:32:59.308911 containerd[1498]: time="2025-07-15T23:32:59.308711337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 950.309058ms" Jul 15 23:32:59.308911 containerd[1498]: time="2025-07-15T23:32:59.308746624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 23:32:59.310631 containerd[1498]: 
time="2025-07-15T23:32:59.310605420Z" level=info msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:32:59.320121 containerd[1498]: time="2025-07-15T23:32:59.319916407Z" level=info msg="Container 85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:32:59.331511 containerd[1498]: time="2025-07-15T23:32:59.331468343Z" level=info msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\"" Jul 15 23:32:59.332169 containerd[1498]: time="2025-07-15T23:32:59.332138112Z" level=info msg="StartContainer for \"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\"" Jul 15 23:32:59.333718 containerd[1498]: time="2025-07-15T23:32:59.333687529Z" level=info msg="connecting to shim 85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9" address="unix:///run/containerd/s/5f9fb454fd32618919a3cfa994a5d7dbb2436fbe9b0bd95e5c4f33f7072bdd85" protocol=ttrpc version=3 Jul 15 23:32:59.356114 systemd[1]: Started cri-containerd-85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9.scope - libcontainer container 85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9. 
Jul 15 23:32:59.393152 containerd[1498]: time="2025-07-15T23:32:59.393117371Z" level=info msg="StartContainer for \"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\" returns successfully" Jul 15 23:32:59.407622 kubelet[2636]: I0715 23:32:59.407434 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f8f894d96-xt52f" podStartSLOduration=2.260183285 podStartE2EDuration="4.407416795s" podCreationTimestamp="2025-07-15 23:32:55 +0000 UTC" firstStartedPulling="2025-07-15 23:32:56.210890913 +0000 UTC m=+19.989109607" lastFinishedPulling="2025-07-15 23:32:58.358124383 +0000 UTC m=+22.136343117" observedRunningTime="2025-07-15 23:32:59.407071249 +0000 UTC m=+23.185289983" watchObservedRunningTime="2025-07-15 23:32:59.407416795 +0000 UTC m=+23.185635529" Jul 15 23:32:59.433140 systemd[1]: cri-containerd-85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9.scope: Deactivated successfully. Jul 15 23:32:59.462955 containerd[1498]: time="2025-07-15T23:32:59.462829987Z" level=info msg="received exit event container_id:\"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\" id:\"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\" pid:3295 exited_at:{seconds:1752622379 nanos:450090183}" Jul 15 23:32:59.465007 containerd[1498]: time="2025-07-15T23:32:59.464963036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\" id:\"85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9\" pid:3295 exited_at:{seconds:1752622379 nanos:450090183}" Jul 15 23:32:59.547780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b26ee57dc19ef6bd1eefd80076e7dd73f87a904cee19f6df11f7d2198b3fa9-rootfs.mount: Deactivated successfully. 
Jul 15 23:33:00.314191 kubelet[2636]: E0715 23:33:00.314127 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9dn4" podUID="8911d17a-4907-4974-b467-7de008d2f4c7" Jul 15 23:33:00.404632 kubelet[2636]: I0715 23:33:00.402185 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:00.406172 containerd[1498]: time="2025-07-15T23:33:00.406123671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:33:02.324757 kubelet[2636]: E0715 23:33:02.324697 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9dn4" podUID="8911d17a-4907-4974-b467-7de008d2f4c7" Jul 15 23:33:02.518656 containerd[1498]: time="2025-07-15T23:33:02.518615571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:02.519596 containerd[1498]: time="2025-07-15T23:33:02.519438030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 23:33:02.520258 containerd[1498]: time="2025-07-15T23:33:02.520229883Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:02.522797 containerd[1498]: time="2025-07-15T23:33:02.522758470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 
23:33:02.523407 containerd[1498]: time="2025-07-15T23:33:02.523374053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.117198972s" Jul 15 23:33:02.523453 containerd[1498]: time="2025-07-15T23:33:02.523407339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 23:33:02.525513 containerd[1498]: time="2025-07-15T23:33:02.525482369Z" level=info msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:33:02.532109 containerd[1498]: time="2025-07-15T23:33:02.532076081Z" level=info msg="Container 04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:02.540548 containerd[1498]: time="2025-07-15T23:33:02.540505062Z" level=info msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\"" Jul 15 23:33:02.541062 containerd[1498]: time="2025-07-15T23:33:02.540997345Z" level=info msg="StartContainer for \"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\"" Jul 15 23:33:02.542587 containerd[1498]: time="2025-07-15T23:33:02.542542486Z" level=info msg="connecting to shim 04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c" address="unix:///run/containerd/s/5f9fb454fd32618919a3cfa994a5d7dbb2436fbe9b0bd95e5c4f33f7072bdd85" protocol=ttrpc version=3 Jul 15 23:33:02.572074 
systemd[1]: Started cri-containerd-04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c.scope - libcontainer container 04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c.
Jul 15 23:33:02.602061 containerd[1498]: time="2025-07-15T23:33:02.601625810Z" level=info msg="StartContainer for \"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\" returns successfully"
Jul 15 23:33:03.216487 systemd[1]: cri-containerd-04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c.scope: Deactivated successfully.
Jul 15 23:33:03.217864 systemd[1]: cri-containerd-04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c.scope: Consumed 453ms CPU time, 175.6M memory peak, 3.4M read from disk, 165.8M written to disk.
Jul 15 23:33:03.220107 containerd[1498]: time="2025-07-15T23:33:03.218131430Z" level=info msg="received exit event container_id:\"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\" id:\"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\" pid:3371 exited_at:{seconds:1752622383 nanos:217714323}"
Jul 15 23:33:03.226549 containerd[1498]: time="2025-07-15T23:33:03.218223685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\" id:\"04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c\" pid:3371 exited_at:{seconds:1752622383 nanos:217714323}"
Jul 15 23:33:03.246631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04addc6c7e77bec02d0d02bf9570027cc53e5b4b6f52618fbe4cd110830e621c-rootfs.mount: Deactivated successfully.
Jul 15 23:33:03.269523 kubelet[2636]: I0715 23:33:03.269488 2636 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 15 23:33:03.390490 systemd[1]: Created slice kubepods-burstable-pod7610213d_974f_4d75_b1f9_ebf5a38cf36b.slice - libcontainer container kubepods-burstable-pod7610213d_974f_4d75_b1f9_ebf5a38cf36b.slice.
Jul 15 23:33:03.400417 systemd[1]: Created slice kubepods-besteffort-pod27fd4573_e26a_489d_805e_62cf57fe804e.slice - libcontainer container kubepods-besteffort-pod27fd4573_e26a_489d_805e_62cf57fe804e.slice.
Jul 15 23:33:03.402369 kubelet[2636]: I0715 23:33:03.402340 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7610213d-974f-4d75-b1f9-ebf5a38cf36b-config-volume\") pod \"coredns-668d6bf9bc-wjtwg\" (UID: \"7610213d-974f-4d75-b1f9-ebf5a38cf36b\") " pod="kube-system/coredns-668d6bf9bc-wjtwg"
Jul 15 23:33:03.402369 kubelet[2636]: I0715 23:33:03.402378 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98g7w\" (UniqueName: \"kubernetes.io/projected/2faee96c-ca0f-4547-884b-1082f57853b3-kube-api-access-98g7w\") pod \"calico-apiserver-87f5777cd-jfddx\" (UID: \"2faee96c-ca0f-4547-884b-1082f57853b3\") " pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx"
Jul 15 23:33:03.402660 kubelet[2636]: I0715 23:33:03.402399 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca63a45-0475-4e53-bb61-3e648a3e1119-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-m7wgp\" (UID: \"aca63a45-0475-4e53-bb61-3e648a3e1119\") " pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:03.402660 kubelet[2636]: I0715 23:33:03.402417 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aca63a45-0475-4e53-bb61-3e648a3e1119-config\") pod \"goldmane-768f4c5c69-m7wgp\" (UID: \"aca63a45-0475-4e53-bb61-3e648a3e1119\") " pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:03.402660 kubelet[2636]: I0715 23:33:03.402432 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd427\" (UniqueName: \"kubernetes.io/projected/aca63a45-0475-4e53-bb61-3e648a3e1119-kube-api-access-cd427\") pod \"goldmane-768f4c5c69-m7wgp\" (UID: \"aca63a45-0475-4e53-bb61-3e648a3e1119\") " pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:03.402660 kubelet[2636]: I0715 23:33:03.402450 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/181456f3-ba09-4518-bb01-853bb51ff674-calico-apiserver-certs\") pod \"calico-apiserver-87f5777cd-sw4nq\" (UID: \"181456f3-ba09-4518-bb01-853bb51ff674\") " pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq"
Jul 15 23:33:03.402660 kubelet[2636]: I0715 23:33:03.402468 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49a97146-9785-414f-b77a-cdbbdcd61b0a-calico-apiserver-certs\") pod \"calico-apiserver-7bb4f99774-qlvcz\" (UID: \"49a97146-9785-414f-b77a-cdbbdcd61b0a\") " pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz"
Jul 15 23:33:03.402782 kubelet[2636]: I0715 23:33:03.402489 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpzmn\" (UniqueName: \"kubernetes.io/projected/181456f3-ba09-4518-bb01-853bb51ff674-kube-api-access-wpzmn\") pod \"calico-apiserver-87f5777cd-sw4nq\" (UID: \"181456f3-ba09-4518-bb01-853bb51ff674\") " pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq"
Jul 15 23:33:03.402782 kubelet[2636]: I0715 23:33:03.402507 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzhb\" (UniqueName: \"kubernetes.io/projected/b54fcd2f-6015-4209-af4e-e71aceea4f10-kube-api-access-pvzhb\") pod \"whisker-6d44546496-ddr2p\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " pod="calico-system/whisker-6d44546496-ddr2p"
Jul 15 23:33:03.402782 kubelet[2636]: I0715 23:33:03.402523 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt985\" (UniqueName: \"kubernetes.io/projected/7610213d-974f-4d75-b1f9-ebf5a38cf36b-kube-api-access-xt985\") pod \"coredns-668d6bf9bc-wjtwg\" (UID: \"7610213d-974f-4d75-b1f9-ebf5a38cf36b\") " pod="kube-system/coredns-668d6bf9bc-wjtwg"
Jul 15 23:33:03.402782 kubelet[2636]: I0715 23:33:03.402538 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e26b50-6338-4565-b6ac-b31e704a7c57-config-volume\") pod \"coredns-668d6bf9bc-t4msq\" (UID: \"50e26b50-6338-4565-b6ac-b31e704a7c57\") " pod="kube-system/coredns-668d6bf9bc-t4msq"
Jul 15 23:33:03.402782 kubelet[2636]: I0715 23:33:03.402555 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-ca-bundle\") pod \"whisker-6d44546496-ddr2p\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " pod="calico-system/whisker-6d44546496-ddr2p"
Jul 15 23:33:03.403032 kubelet[2636]: I0715 23:33:03.402571 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aca63a45-0475-4e53-bb61-3e648a3e1119-goldmane-key-pair\") pod \"goldmane-768f4c5c69-m7wgp\" (UID: \"aca63a45-0475-4e53-bb61-3e648a3e1119\") " pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:03.403032 kubelet[2636]: I0715 23:33:03.402589 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fd4573-e26a-489d-805e-62cf57fe804e-tigera-ca-bundle\") pod \"calico-kube-controllers-64c48566cf-hvx9f\" (UID: \"27fd4573-e26a-489d-805e-62cf57fe804e\") " pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f"
Jul 15 23:33:03.403032 kubelet[2636]: I0715 23:33:03.402604 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-backend-key-pair\") pod \"whisker-6d44546496-ddr2p\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " pod="calico-system/whisker-6d44546496-ddr2p"
Jul 15 23:33:03.403032 kubelet[2636]: I0715 23:33:03.402621 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbg9v\" (UniqueName: \"kubernetes.io/projected/50e26b50-6338-4565-b6ac-b31e704a7c57-kube-api-access-vbg9v\") pod \"coredns-668d6bf9bc-t4msq\" (UID: \"50e26b50-6338-4565-b6ac-b31e704a7c57\") " pod="kube-system/coredns-668d6bf9bc-t4msq"
Jul 15 23:33:03.403032 kubelet[2636]: I0715 23:33:03.402639 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkdz\" (UniqueName: \"kubernetes.io/projected/49a97146-9785-414f-b77a-cdbbdcd61b0a-kube-api-access-6kkdz\") pod \"calico-apiserver-7bb4f99774-qlvcz\" (UID: \"49a97146-9785-414f-b77a-cdbbdcd61b0a\") " pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz"
Jul 15 23:33:03.404786 kubelet[2636]: I0715 23:33:03.402658 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hxkf\" (UniqueName: \"kubernetes.io/projected/27fd4573-e26a-489d-805e-62cf57fe804e-kube-api-access-6hxkf\") pod \"calico-kube-controllers-64c48566cf-hvx9f\" (UID: \"27fd4573-e26a-489d-805e-62cf57fe804e\") " pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f"
Jul 15 23:33:03.404786 kubelet[2636]: I0715 23:33:03.402688 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2faee96c-ca0f-4547-884b-1082f57853b3-calico-apiserver-certs\") pod \"calico-apiserver-87f5777cd-jfddx\" (UID: \"2faee96c-ca0f-4547-884b-1082f57853b3\") " pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx"
Jul 15 23:33:03.407152 systemd[1]: Created slice kubepods-burstable-pod50e26b50_6338_4565_b6ac_b31e704a7c57.slice - libcontainer container kubepods-burstable-pod50e26b50_6338_4565_b6ac_b31e704a7c57.slice.
Jul 15 23:33:03.414791 containerd[1498]: time="2025-07-15T23:33:03.414711088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 15 23:33:03.417896 systemd[1]: Created slice kubepods-besteffort-pod49a97146_9785_414f_b77a_cdbbdcd61b0a.slice - libcontainer container kubepods-besteffort-pod49a97146_9785_414f_b77a_cdbbdcd61b0a.slice.
Jul 15 23:33:03.423352 systemd[1]: Created slice kubepods-besteffort-pod2faee96c_ca0f_4547_884b_1082f57853b3.slice - libcontainer container kubepods-besteffort-pod2faee96c_ca0f_4547_884b_1082f57853b3.slice.
Jul 15 23:33:03.430770 systemd[1]: Created slice kubepods-besteffort-podb54fcd2f_6015_4209_af4e_e71aceea4f10.slice - libcontainer container kubepods-besteffort-podb54fcd2f_6015_4209_af4e_e71aceea4f10.slice.
Jul 15 23:33:03.438832 systemd[1]: Created slice kubepods-besteffort-pod181456f3_ba09_4518_bb01_853bb51ff674.slice - libcontainer container kubepods-besteffort-pod181456f3_ba09_4518_bb01_853bb51ff674.slice.
Jul 15 23:33:03.442609 systemd[1]: Created slice kubepods-besteffort-podaca63a45_0475_4e53_bb61_3e648a3e1119.slice - libcontainer container kubepods-besteffort-podaca63a45_0475_4e53_bb61_3e648a3e1119.slice.
Jul 15 23:33:03.698035 containerd[1498]: time="2025-07-15T23:33:03.697992979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjtwg,Uid:7610213d-974f-4d75-b1f9-ebf5a38cf36b,Namespace:kube-system,Attempt:0,}"
Jul 15 23:33:03.706359 containerd[1498]: time="2025-07-15T23:33:03.706317646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c48566cf-hvx9f,Uid:27fd4573-e26a-489d-805e-62cf57fe804e,Namespace:calico-system,Attempt:0,}"
Jul 15 23:33:03.713072 containerd[1498]: time="2025-07-15T23:33:03.713034253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4msq,Uid:50e26b50-6338-4565-b6ac-b31e704a7c57,Namespace:kube-system,Attempt:0,}"
Jul 15 23:33:03.739098 containerd[1498]: time="2025-07-15T23:33:03.736788138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d44546496-ddr2p,Uid:b54fcd2f-6015-4209-af4e-e71aceea4f10,Namespace:calico-system,Attempt:0,}"
Jul 15 23:33:03.739098 containerd[1498]: time="2025-07-15T23:33:03.737441684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb4f99774-qlvcz,Uid:49a97146-9785-414f-b77a-cdbbdcd61b0a,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:33:03.739098 containerd[1498]: time="2025-07-15T23:33:03.737597869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-jfddx,Uid:2faee96c-ca0f-4547-884b-1082f57853b3,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:33:03.759996 containerd[1498]: time="2025-07-15T23:33:03.759965809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-sw4nq,Uid:181456f3-ba09-4518-bb01-853bb51ff674,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:33:03.760251 containerd[1498]: time="2025-07-15T23:33:03.760228932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7wgp,Uid:aca63a45-0475-4e53-bb61-3e648a3e1119,Namespace:calico-system,Attempt:0,}"
Jul 15 23:33:04.121495 containerd[1498]: time="2025-07-15T23:33:04.121448870Z" level=error msg="Failed to destroy network for sandbox \"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.123948 containerd[1498]: time="2025-07-15T23:33:04.123875247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-sw4nq,Uid:181456f3-ba09-4518-bb01-853bb51ff674,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.126069 kubelet[2636]: E0715 23:33:04.126012 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.127198 containerd[1498]: time="2025-07-15T23:33:04.127081186Z" level=error msg="Failed to destroy network for sandbox \"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.127951 containerd[1498]: time="2025-07-15T23:33:04.127902793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-jfddx,Uid:2faee96c-ca0f-4547-884b-1082f57853b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.128068 kubelet[2636]: E0715 23:33:04.128033 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq"
Jul 15 23:33:04.128393 kubelet[2636]: E0715 23:33:04.128364 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq"
Jul 15 23:33:04.128459 kubelet[2636]: E0715 23:33:04.128256 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.128459 kubelet[2636]: E0715 23:33:04.128440 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx"
Jul 15 23:33:04.128504 kubelet[2636]: E0715 23:33:04.128458 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx"
Jul 15 23:33:04.128504 kubelet[2636]: E0715 23:33:04.128489 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87f5777cd-jfddx_calico-apiserver(2faee96c-ca0f-4547-884b-1082f57853b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87f5777cd-jfddx_calico-apiserver(2faee96c-ca0f-4547-884b-1082f57853b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51461272b4771dd68b55144e1ec07e13fb86883917dffe8ff0069faf3fa7ef19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx" podUID="2faee96c-ca0f-4547-884b-1082f57853b3"
Jul 15 23:33:04.128567 kubelet[2636]: E0715 23:33:04.128539 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87f5777cd-sw4nq_calico-apiserver(181456f3-ba09-4518-bb01-853bb51ff674)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87f5777cd-sw4nq_calico-apiserver(181456f3-ba09-4518-bb01-853bb51ff674)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8919b2fbb35eb6779af3b3f50a8df320b25e5903275668fdd8667ddc5c107da1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq" podUID="181456f3-ba09-4518-bb01-853bb51ff674"
Jul 15 23:33:04.132437 containerd[1498]: time="2025-07-15T23:33:04.132402573Z" level=error msg="Failed to destroy network for sandbox \"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.135402 containerd[1498]: time="2025-07-15T23:33:04.135351072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb4f99774-qlvcz,Uid:49a97146-9785-414f-b77a-cdbbdcd61b0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.135547 kubelet[2636]: E0715 23:33:04.135513 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.135611 kubelet[2636]: E0715 23:33:04.135560 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz"
Jul 15 23:33:04.135611 kubelet[2636]: E0715 23:33:04.135578 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz"
Jul 15 23:33:04.135695 kubelet[2636]: E0715 23:33:04.135613 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb4f99774-qlvcz_calico-apiserver(49a97146-9785-414f-b77a-cdbbdcd61b0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb4f99774-qlvcz_calico-apiserver(49a97146-9785-414f-b77a-cdbbdcd61b0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24fbd97e7a96bfa95fb0d8875f34a35f57a3e1f958804c8c9f72c6b345bf6b94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz" podUID="49a97146-9785-414f-b77a-cdbbdcd61b0a"
Jul 15 23:33:04.138021 containerd[1498]: time="2025-07-15T23:33:04.137918911Z" level=error msg="Failed to destroy network for sandbox \"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.138838 containerd[1498]: time="2025-07-15T23:33:04.138805409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d44546496-ddr2p,Uid:b54fcd2f-6015-4209-af4e-e71aceea4f10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.139595 kubelet[2636]: E0715 23:33:04.139564 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.139666 kubelet[2636]: E0715 23:33:04.139612 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d44546496-ddr2p"
Jul 15 23:33:04.139666 kubelet[2636]: E0715 23:33:04.139630 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d44546496-ddr2p"
Jul 15 23:33:04.139710 kubelet[2636]: E0715 23:33:04.139663 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d44546496-ddr2p_calico-system(b54fcd2f-6015-4209-af4e-e71aceea4f10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d44546496-ddr2p_calico-system(b54fcd2f-6015-4209-af4e-e71aceea4f10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3af83bed3c52bedb45558874b87b26fe7872276c3de47d0024d236fe04dae92b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d44546496-ddr2p" podUID="b54fcd2f-6015-4209-af4e-e71aceea4f10"
Jul 15 23:33:04.145080 containerd[1498]: time="2025-07-15T23:33:04.145045139Z" level=error msg="Failed to destroy network for sandbox \"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.146301 containerd[1498]: time="2025-07-15T23:33:04.146226042Z" level=error msg="Failed to destroy network for sandbox \"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.146442 containerd[1498]: time="2025-07-15T23:33:04.146235364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjtwg,Uid:7610213d-974f-4d75-b1f9-ebf5a38cf36b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.147095 kubelet[2636]: E0715 23:33:04.147030 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.147164 kubelet[2636]: E0715 23:33:04.147103 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wjtwg"
Jul 15 23:33:04.147164 kubelet[2636]: E0715 23:33:04.147124 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wjtwg"
Jul 15 23:33:04.147225 kubelet[2636]: E0715 23:33:04.147158 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wjtwg_kube-system(7610213d-974f-4d75-b1f9-ebf5a38cf36b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wjtwg_kube-system(7610213d-974f-4d75-b1f9-ebf5a38cf36b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c7667c49d8db6568feb8a67da77e6f28f3f6be1035eaeb564a815e1ee5bdb52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wjtwg" podUID="7610213d-974f-4d75-b1f9-ebf5a38cf36b"
Jul 15 23:33:04.147813 containerd[1498]: time="2025-07-15T23:33:04.147745279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c48566cf-hvx9f,Uid:27fd4573-e26a-489d-805e-62cf57fe804e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.148171 kubelet[2636]: E0715 23:33:04.148140 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.148244 kubelet[2636]: E0715 23:33:04.148189 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f"
Jul 15 23:33:04.148244 kubelet[2636]: E0715 23:33:04.148206 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f"
Jul 15 23:33:04.148314 kubelet[2636]: E0715 23:33:04.148240 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64c48566cf-hvx9f_calico-system(27fd4573-e26a-489d-805e-62cf57fe804e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64c48566cf-hvx9f_calico-system(27fd4573-e26a-489d-805e-62cf57fe804e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8822a29c699276f54f891266ed37a9c6bb2ca1614796716f45d75348c81ba069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f" podUID="27fd4573-e26a-489d-805e-62cf57fe804e"
Jul 15 23:33:04.151917 containerd[1498]: time="2025-07-15T23:33:04.151865119Z" level=error msg="Failed to destroy network for sandbox \"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.152849 containerd[1498]: time="2025-07-15T23:33:04.152775741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4msq,Uid:50e26b50-6338-4565-b6ac-b31e704a7c57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.153116 kubelet[2636]: E0715 23:33:04.153068 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.153218 kubelet[2636]: E0715 23:33:04.153126 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4msq"
Jul 15 23:33:04.153218 kubelet[2636]: E0715 23:33:04.153142 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4msq"
Jul 15 23:33:04.153218 kubelet[2636]: E0715 23:33:04.153183 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t4msq_kube-system(50e26b50-6338-4565-b6ac-b31e704a7c57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t4msq_kube-system(50e26b50-6338-4565-b6ac-b31e704a7c57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6742f9cad2db660f66794c3b228b8ec4837c0c5a616011e773a6a68e37aa4f32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t4msq" podUID="50e26b50-6338-4565-b6ac-b31e704a7c57"
Jul 15 23:33:04.153871 containerd[1498]: time="2025-07-15T23:33:04.153842267Z" level=error msg="Failed to destroy network for sandbox \"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.155147 containerd[1498]: time="2025-07-15T23:33:04.155116545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7wgp,Uid:aca63a45-0475-4e53-bb61-3e648a3e1119,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.155276 kubelet[2636]: E0715 23:33:04.155252 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.155314 kubelet[2636]: E0715 23:33:04.155290 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:04.155314 kubelet[2636]: E0715 23:33:04.155305 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-m7wgp"
Jul 15 23:33:04.155371 kubelet[2636]: E0715 23:33:04.155335 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-m7wgp_calico-system(aca63a45-0475-4e53-bb61-3e648a3e1119)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-m7wgp_calico-system(aca63a45-0475-4e53-bb61-3e648a3e1119)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f35f1edeb62c85343bdc89531b92199793c9ee84865e741812d61d0a41f998b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-m7wgp" podUID="aca63a45-0475-4e53-bb61-3e648a3e1119"
Jul 15 23:33:04.320473 systemd[1]: Created slice kubepods-besteffort-pod8911d17a_4907_4974_b467_7de008d2f4c7.slice - libcontainer container kubepods-besteffort-pod8911d17a_4907_4974_b467_7de008d2f4c7.slice.
Jul 15 23:33:04.325339 containerd[1498]: time="2025-07-15T23:33:04.325283604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9dn4,Uid:8911d17a-4907-4974-b467-7de008d2f4c7,Namespace:calico-system,Attempt:0,}"
Jul 15 23:33:04.375888 containerd[1498]: time="2025-07-15T23:33:04.375757572Z" level=error msg="Failed to destroy network for sandbox \"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.378205 containerd[1498]: time="2025-07-15T23:33:04.378137062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9dn4,Uid:8911d17a-4907-4974-b467-7de008d2f4c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.378442 kubelet[2636]: E0715 23:33:04.378392 2636 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:33:04.378489 kubelet[2636]: E0715
23:33:04.378453 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:33:04.378489 kubelet[2636]: E0715 23:33:04.378475 2636 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9dn4" Jul 15 23:33:04.378537 kubelet[2636]: E0715 23:33:04.378514 2636 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9dn4_calico-system(8911d17a-4907-4974-b467-7de008d2f4c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9dn4_calico-system(8911d17a-4907-4974-b467-7de008d2f4c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eca6417020af013076a598aa653c923f8a0b0f044c0f2d5f1793a3c3870e403d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9dn4" podUID="8911d17a-4907-4974-b467-7de008d2f4c7" Jul 15 23:33:04.533246 systemd[1]: run-netns-cni\x2dfb0ea2d0\x2d0d05\x2d8f56\x2d7998\x2d9057739f7c76.mount: Deactivated successfully. Jul 15 23:33:04.533344 systemd[1]: run-netns-cni\x2d7d23acf6\x2d37e8\x2d115a\x2d8e9e\x2d288db024a7a7.mount: Deactivated successfully. 
Jul 15 23:33:04.533388 systemd[1]: run-netns-cni\x2dff2dc456\x2dfb92\x2db9bb\x2d3c91\x2d903211176bf5.mount: Deactivated successfully. Jul 15 23:33:07.600505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351072561.mount: Deactivated successfully. Jul 15 23:33:07.845552 containerd[1498]: time="2025-07-15T23:33:07.845505616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:07.846461 containerd[1498]: time="2025-07-15T23:33:07.846308767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 23:33:07.847160 containerd[1498]: time="2025-07-15T23:33:07.847129441Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:07.848971 containerd[1498]: time="2025-07-15T23:33:07.848808874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:07.849378 containerd[1498]: time="2025-07-15T23:33:07.849334667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.434545966s" Jul 15 23:33:07.849378 containerd[1498]: time="2025-07-15T23:33:07.849370792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 23:33:07.861623 containerd[1498]: time="2025-07-15T23:33:07.861542640Z" level=info 
msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:33:07.868406 containerd[1498]: time="2025-07-15T23:33:07.868370146Z" level=info msg="Container 7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:07.886728 containerd[1498]: time="2025-07-15T23:33:07.886687567Z" level=info msg="CreateContainer within sandbox \"dd19f96970b080f34d74b935dc163e8f8ba344c93858113ac7b3752d8f77855c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\"" Jul 15 23:33:07.887414 containerd[1498]: time="2025-07-15T23:33:07.887328695Z" level=info msg="StartContainer for \"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\"" Jul 15 23:33:07.889013 containerd[1498]: time="2025-07-15T23:33:07.888970123Z" level=info msg="connecting to shim 7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6" address="unix:///run/containerd/s/5f9fb454fd32618919a3cfa994a5d7dbb2436fbe9b0bd95e5c4f33f7072bdd85" protocol=ttrpc version=3 Jul 15 23:33:07.926128 systemd[1]: Started cri-containerd-7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6.scope - libcontainer container 7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6. Jul 15 23:33:07.959633 containerd[1498]: time="2025-07-15T23:33:07.959580315Z" level=info msg="StartContainer for \"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\" returns successfully" Jul 15 23:33:08.158689 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:33:08.158787 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 23:33:08.437280 kubelet[2636]: I0715 23:33:08.437114 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-ca-bundle\") pod \"b54fcd2f-6015-4209-af4e-e71aceea4f10\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " Jul 15 23:33:08.437280 kubelet[2636]: I0715 23:33:08.437168 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvzhb\" (UniqueName: \"kubernetes.io/projected/b54fcd2f-6015-4209-af4e-e71aceea4f10-kube-api-access-pvzhb\") pod \"b54fcd2f-6015-4209-af4e-e71aceea4f10\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " Jul 15 23:33:08.437280 kubelet[2636]: I0715 23:33:08.437194 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-backend-key-pair\") pod \"b54fcd2f-6015-4209-af4e-e71aceea4f10\" (UID: \"b54fcd2f-6015-4209-af4e-e71aceea4f10\") " Jul 15 23:33:08.438475 kubelet[2636]: I0715 23:33:08.437673 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b54fcd2f-6015-4209-af4e-e71aceea4f10" (UID: "b54fcd2f-6015-4209-af4e-e71aceea4f10"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:33:08.443956 kubelet[2636]: I0715 23:33:08.443352 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54fcd2f-6015-4209-af4e-e71aceea4f10-kube-api-access-pvzhb" (OuterVolumeSpecName: "kube-api-access-pvzhb") pod "b54fcd2f-6015-4209-af4e-e71aceea4f10" (UID: "b54fcd2f-6015-4209-af4e-e71aceea4f10"). InnerVolumeSpecName "kube-api-access-pvzhb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:33:08.451057 kubelet[2636]: I0715 23:33:08.451026 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b54fcd2f-6015-4209-af4e-e71aceea4f10" (UID: "b54fcd2f-6015-4209-af4e-e71aceea4f10"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:33:08.454321 kubelet[2636]: I0715 23:33:08.453751 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k8psf" podStartSLOduration=1.102294508 podStartE2EDuration="12.453734693s" podCreationTimestamp="2025-07-15 23:32:56 +0000 UTC" firstStartedPulling="2025-07-15 23:32:56.498584337 +0000 UTC m=+20.276803071" lastFinishedPulling="2025-07-15 23:33:07.850024562 +0000 UTC m=+31.628243256" observedRunningTime="2025-07-15 23:33:08.45341501 +0000 UTC m=+32.231633744" watchObservedRunningTime="2025-07-15 23:33:08.453734693 +0000 UTC m=+32.231953427" Jul 15 23:33:08.538773 kubelet[2636]: I0715 23:33:08.538734 2636 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pvzhb\" (UniqueName: \"kubernetes.io/projected/b54fcd2f-6015-4209-af4e-e71aceea4f10-kube-api-access-pvzhb\") on node \"localhost\" DevicePath \"\"" Jul 15 23:33:08.538773 kubelet[2636]: I0715 23:33:08.538767 2636 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 23:33:08.538935 kubelet[2636]: I0715 23:33:08.538789 2636 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b54fcd2f-6015-4209-af4e-e71aceea4f10-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 23:33:08.566684 
containerd[1498]: time="2025-07-15T23:33:08.566642555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\" id:\"1f39f960d14e9322c73b16eec4d68611ff375fa402fabf415148aebce47c0d5b\" pid:3789 exit_status:1 exited_at:{seconds:1752622388 nanos:566349596}" Jul 15 23:33:08.601288 systemd[1]: var-lib-kubelet-pods-b54fcd2f\x2d6015\x2d4209\x2daf4e\x2de71aceea4f10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpvzhb.mount: Deactivated successfully. Jul 15 23:33:08.601381 systemd[1]: var-lib-kubelet-pods-b54fcd2f\x2d6015\x2d4209\x2daf4e\x2de71aceea4f10-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 23:33:08.735770 systemd[1]: Removed slice kubepods-besteffort-podb54fcd2f_6015_4209_af4e_e71aceea4f10.slice - libcontainer container kubepods-besteffort-podb54fcd2f_6015_4209_af4e_e71aceea4f10.slice. Jul 15 23:33:08.786195 systemd[1]: Created slice kubepods-besteffort-pod3059a1c9_ca19_4af9_9c76_ad1f1daf6b6f.slice - libcontainer container kubepods-besteffort-pod3059a1c9_ca19_4af9_9c76_ad1f1daf6b6f.slice. 
Jul 15 23:33:08.941323 kubelet[2636]: I0715 23:33:08.941228 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f-whisker-backend-key-pair\") pod \"whisker-74fdd7968-jvx46\" (UID: \"3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f\") " pod="calico-system/whisker-74fdd7968-jvx46" Jul 15 23:33:08.941323 kubelet[2636]: I0715 23:33:08.941283 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w549c\" (UniqueName: \"kubernetes.io/projected/3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f-kube-api-access-w549c\") pod \"whisker-74fdd7968-jvx46\" (UID: \"3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f\") " pod="calico-system/whisker-74fdd7968-jvx46" Jul 15 23:33:08.941534 kubelet[2636]: I0715 23:33:08.941305 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f-whisker-ca-bundle\") pod \"whisker-74fdd7968-jvx46\" (UID: \"3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f\") " pod="calico-system/whisker-74fdd7968-jvx46" Jul 15 23:33:09.090846 containerd[1498]: time="2025-07-15T23:33:09.090539856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74fdd7968-jvx46,Uid:3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f,Namespace:calico-system,Attempt:0,}" Jul 15 23:33:09.477828 systemd-networkd[1442]: cali41575e782aa: Link UP Jul 15 23:33:09.478222 systemd-networkd[1442]: cali41575e782aa: Gained carrier Jul 15 23:33:09.495256 containerd[1498]: 2025-07-15 23:33:09.186 [INFO][3808] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:09.495256 containerd[1498]: 2025-07-15 23:33:09.250 [INFO][3808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74fdd7968--jvx46-eth0 
whisker-74fdd7968- calico-system 3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f 904 0 2025-07-15 23:33:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74fdd7968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74fdd7968-jvx46 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali41575e782aa [] [] }} ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-" Jul 15 23:33:09.495256 containerd[1498]: 2025-07-15 23:33:09.251 [INFO][3808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.495256 containerd[1498]: 2025-07-15 23:33:09.410 [INFO][3823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" HandleID="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Workload="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.410 [INFO][3823] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" HandleID="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Workload="localhost-k8s-whisker--74fdd7968--jvx46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000183760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74fdd7968-jvx46", "timestamp":"2025-07-15 23:33:09.410447129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.410 [INFO][3823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.410 [INFO][3823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.410 [INFO][3823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.423 [INFO][3823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" host="localhost" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.430 [INFO][3823] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.436 [INFO][3823] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.439 [INFO][3823] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.441 [INFO][3823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:09.495844 containerd[1498]: 2025-07-15 23:33:09.441 [INFO][3823] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" host="localhost" Jul 15 23:33:09.496820 containerd[1498]: 2025-07-15 23:33:09.449 [INFO][3823] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70 Jul 15 23:33:09.496820 
containerd[1498]: 2025-07-15 23:33:09.454 [INFO][3823] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" host="localhost" Jul 15 23:33:09.496820 containerd[1498]: 2025-07-15 23:33:09.459 [INFO][3823] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" host="localhost" Jul 15 23:33:09.496820 containerd[1498]: 2025-07-15 23:33:09.459 [INFO][3823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" host="localhost" Jul 15 23:33:09.496820 containerd[1498]: 2025-07-15 23:33:09.459 [INFO][3823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:09.496820 containerd[1498]: 2025-07-15 23:33:09.459 [INFO][3823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" HandleID="k8s-pod-network.0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Workload="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.497127 containerd[1498]: 2025-07-15 23:33:09.463 [INFO][3808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74fdd7968--jvx46-eth0", GenerateName:"whisker-74fdd7968-", Namespace:"calico-system", SelfLink:"", UID:"3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 15, 23, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74fdd7968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74fdd7968-jvx46", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali41575e782aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:09.497127 containerd[1498]: 2025-07-15 23:33:09.463 [INFO][3808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.497321 containerd[1498]: 2025-07-15 23:33:09.463 [INFO][3808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41575e782aa ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.497321 containerd[1498]: 2025-07-15 23:33:09.478 [INFO][3808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" 
WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.497470 containerd[1498]: 2025-07-15 23:33:09.482 [INFO][3808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74fdd7968--jvx46-eth0", GenerateName:"whisker-74fdd7968-", Namespace:"calico-system", SelfLink:"", UID:"3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74fdd7968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70", Pod:"whisker-74fdd7968-jvx46", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali41575e782aa", MAC:"a6:5e:d5:c9:57:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:09.497572 containerd[1498]: 2025-07-15 23:33:09.492 [INFO][3808] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" Namespace="calico-system" Pod="whisker-74fdd7968-jvx46" WorkloadEndpoint="localhost-k8s-whisker--74fdd7968--jvx46-eth0" Jul 15 23:33:09.593197 containerd[1498]: time="2025-07-15T23:33:09.593146723Z" level=info msg="connecting to shim 0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70" address="unix:///run/containerd/s/66ec37cfedd62766491094251a5eb79e6c82540ef2caa93cb3ec54de579eaab8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:09.601193 containerd[1498]: time="2025-07-15T23:33:09.601148516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\" id:\"d7cc1d8ff9f00b53ee7af9083e38ef6853456e0083e8101c5777f34d93c22dbb\" pid:3842 exit_status:1 exited_at:{seconds:1752622389 nanos:599774979}" Jul 15 23:33:09.635138 systemd[1]: Started cri-containerd-0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70.scope - libcontainer container 0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70. 
Jul 15 23:33:09.656298 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:09.727043 containerd[1498]: time="2025-07-15T23:33:09.727003009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74fdd7968-jvx46,Uid:3059a1c9-ca19-4af9-9c76-ad1f1daf6b6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70\"" Jul 15 23:33:09.732129 containerd[1498]: time="2025-07-15T23:33:09.731403018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:33:10.316755 kubelet[2636]: I0715 23:33:10.316709 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54fcd2f-6015-4209-af4e-e71aceea4f10" path="/var/lib/kubelet/pods/b54fcd2f-6015-4209-af4e-e71aceea4f10/volumes" Jul 15 23:33:10.657637 containerd[1498]: time="2025-07-15T23:33:10.657518457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:10.671956 containerd[1498]: time="2025-07-15T23:33:10.671883770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 23:33:10.674476 containerd[1498]: time="2025-07-15T23:33:10.674426807Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:10.677086 containerd[1498]: time="2025-07-15T23:33:10.677043454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:10.678141 containerd[1498]: time="2025-07-15T23:33:10.678096585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 946.656683ms" Jul 15 23:33:10.678141 containerd[1498]: time="2025-07-15T23:33:10.678133110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 23:33:10.681105 containerd[1498]: time="2025-07-15T23:33:10.681074797Z" level=info msg="CreateContainer within sandbox \"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:33:10.699346 containerd[1498]: time="2025-07-15T23:33:10.699304793Z" level=info msg="Container 5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:10.708625 containerd[1498]: time="2025-07-15T23:33:10.708444613Z" level=info msg="CreateContainer within sandbox \"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4\"" Jul 15 23:33:10.708965 containerd[1498]: time="2025-07-15T23:33:10.708937115Z" level=info msg="StartContainer for \"5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4\"" Jul 15 23:33:10.710920 containerd[1498]: time="2025-07-15T23:33:10.710882518Z" level=info msg="connecting to shim 5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4" address="unix:///run/containerd/s/66ec37cfedd62766491094251a5eb79e6c82540ef2caa93cb3ec54de579eaab8" protocol=ttrpc version=3 Jul 15 23:33:10.742181 systemd[1]: Started cri-containerd-5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4.scope - libcontainer container 
5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4. Jul 15 23:33:10.818377 containerd[1498]: time="2025-07-15T23:33:10.818333730Z" level=info msg="StartContainer for \"5d73275c71d059c07d310f5caaf71eaedfe1de688d365bc8f6643f5b6fd0e1c4\" returns successfully" Jul 15 23:33:10.820590 containerd[1498]: time="2025-07-15T23:33:10.820532484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:33:10.876818 systemd-networkd[1442]: cali41575e782aa: Gained IPv6LL Jul 15 23:33:12.457608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303233299.mount: Deactivated successfully. Jul 15 23:33:12.505697 containerd[1498]: time="2025-07-15T23:33:12.505650397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:12.506280 containerd[1498]: time="2025-07-15T23:33:12.506251468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 23:33:12.507246 containerd[1498]: time="2025-07-15T23:33:12.507210260Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:12.509270 containerd[1498]: time="2025-07-15T23:33:12.509237017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:12.510080 containerd[1498]: time="2025-07-15T23:33:12.510049152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.689483904s" Jul 15 23:33:12.510125 containerd[1498]: time="2025-07-15T23:33:12.510082236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 23:33:12.513841 containerd[1498]: time="2025-07-15T23:33:12.513787669Z" level=info msg="CreateContainer within sandbox \"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:33:12.520755 containerd[1498]: time="2025-07-15T23:33:12.520136692Z" level=info msg="Container 7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:12.528476 containerd[1498]: time="2025-07-15T23:33:12.528430182Z" level=info msg="CreateContainer within sandbox \"0d9644325d1b71b702a669dc20113c4ffc602ca94713f253dac5cddaed5d1b70\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87\"" Jul 15 23:33:12.529051 containerd[1498]: time="2025-07-15T23:33:12.529020571Z" level=info msg="StartContainer for \"7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87\"" Jul 15 23:33:12.530183 containerd[1498]: time="2025-07-15T23:33:12.530154583Z" level=info msg="connecting to shim 7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87" address="unix:///run/containerd/s/66ec37cfedd62766491094251a5eb79e6c82540ef2caa93cb3ec54de579eaab8" protocol=ttrpc version=3 Jul 15 23:33:12.551078 systemd[1]: Started cri-containerd-7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87.scope - libcontainer container 7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87. 
Jul 15 23:33:12.588040 containerd[1498]: time="2025-07-15T23:33:12.587670991Z" level=info msg="StartContainer for \"7bf3718e6544c84aa1f34648b7ae7a196a9d7a89f249e1e55e2c990efc2d1e87\" returns successfully" Jul 15 23:33:13.459619 kubelet[2636]: I0715 23:33:13.459528 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74fdd7968-jvx46" podStartSLOduration=2.679639049 podStartE2EDuration="5.459511851s" podCreationTimestamp="2025-07-15 23:33:08 +0000 UTC" firstStartedPulling="2025-07-15 23:33:09.731026329 +0000 UTC m=+33.509245063" lastFinishedPulling="2025-07-15 23:33:12.510899131 +0000 UTC m=+36.289117865" observedRunningTime="2025-07-15 23:33:13.458723242 +0000 UTC m=+37.236942056" watchObservedRunningTime="2025-07-15 23:33:13.459511851 +0000 UTC m=+37.237730585" Jul 15 23:33:15.314401 containerd[1498]: time="2025-07-15T23:33:15.314349704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7wgp,Uid:aca63a45-0475-4e53-bb61-3e648a3e1119,Namespace:calico-system,Attempt:0,}" Jul 15 23:33:15.314736 containerd[1498]: time="2025-07-15T23:33:15.314664817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb4f99774-qlvcz,Uid:49a97146-9785-414f-b77a-cdbbdcd61b0a,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:33:15.444354 systemd-networkd[1442]: cali76c7b88125e: Link UP Jul 15 23:33:15.444999 systemd-networkd[1442]: cali76c7b88125e: Gained carrier Jul 15 23:33:15.459137 containerd[1498]: 2025-07-15 23:33:15.344 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:15.459137 containerd[1498]: 2025-07-15 23:33:15.372 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0 goldmane-768f4c5c69- calico-system aca63a45-0475-4e53-bb61-3e648a3e1119 843 0 2025-07-15 23:32:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-m7wgp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76c7b88125e [] [] }} ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-" Jul 15 23:33:15.459137 containerd[1498]: 2025-07-15 23:33:15.372 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.459137 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" HandleID="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Workload="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" HandleID="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Workload="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-m7wgp", "timestamp":"2025-07-15 23:33:15.403001423 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.413 [INFO][4246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" host="localhost" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.418 [INFO][4246] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.422 [INFO][4246] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.424 [INFO][4246] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.426 [INFO][4246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:15.459341 containerd[1498]: 2025-07-15 23:33:15.426 [INFO][4246] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" host="localhost" Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.427 [INFO][4246] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.432 [INFO][4246] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" host="localhost" Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4246] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" host="localhost" Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" host="localhost" Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:15.459755 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" HandleID="k8s-pod-network.82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Workload="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.459883 containerd[1498]: 2025-07-15 23:33:15.441 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aca63a45-0475-4e53-bb61-3e648a3e1119", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 55, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-m7wgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76c7b88125e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:15.459883 containerd[1498]: 2025-07-15 23:33:15.441 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.459971 containerd[1498]: 2025-07-15 23:33:15.441 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76c7b88125e ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.459971 containerd[1498]: 2025-07-15 23:33:15.445 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.460020 containerd[1498]: 2025-07-15 23:33:15.445 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aca63a45-0475-4e53-bb61-3e648a3e1119", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e", Pod:"goldmane-768f4c5c69-m7wgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76c7b88125e", MAC:"62:38:33:99:bd:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:15.460068 containerd[1498]: 2025-07-15 23:33:15.456 [INFO][4215] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" Namespace="calico-system" Pod="goldmane-768f4c5c69-m7wgp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--m7wgp-eth0" Jul 15 23:33:15.477189 containerd[1498]: time="2025-07-15T23:33:15.477151152Z" level=info msg="connecting to shim 82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e" address="unix:///run/containerd/s/1f70740a6c170f753e903ff4e2558fe0a256eca468df5eabc0fe06f6e3da7ff3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:15.501068 systemd[1]: Started cri-containerd-82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e.scope - libcontainer container 82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e. Jul 15 23:33:15.511235 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:15.546198 systemd-networkd[1442]: cali50370ab5937: Link UP Jul 15 23:33:15.547037 systemd-networkd[1442]: cali50370ab5937: Gained carrier Jul 15 23:33:15.561910 containerd[1498]: 2025-07-15 23:33:15.355 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:15.561910 containerd[1498]: 2025-07-15 23:33:15.373 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0 calico-apiserver-7bb4f99774- calico-apiserver 49a97146-9785-414f-b77a-cdbbdcd61b0a 835 0 2025-07-15 23:32:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb4f99774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bb4f99774-qlvcz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali50370ab5937 [] [] }} ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-" Jul 15 23:33:15.561910 containerd[1498]: 2025-07-15 23:33:15.374 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.561910 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" HandleID="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Workload="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" HandleID="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Workload="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bb4f99774-qlvcz", "timestamp":"2025-07-15 23:33:15.402997103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.403 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.438 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.513 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" host="localhost" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.518 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.524 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.527 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.529 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:15.562458 containerd[1498]: 2025-07-15 23:33:15.529 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" host="localhost" Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.530 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8 Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.535 [INFO][4248] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" host="localhost" Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.541 [INFO][4248] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" host="localhost" Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.541 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" host="localhost" Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.541 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:15.562695 containerd[1498]: 2025-07-15 23:33:15.541 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" HandleID="k8s-pod-network.aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Workload="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.562815 containerd[1498]: 2025-07-15 23:33:15.544 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0", GenerateName:"calico-apiserver-7bb4f99774-", Namespace:"calico-apiserver", SelfLink:"", UID:"49a97146-9785-414f-b77a-cdbbdcd61b0a", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb4f99774", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bb4f99774-qlvcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50370ab5937", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:15.562867 containerd[1498]: 2025-07-15 23:33:15.544 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.562867 containerd[1498]: 2025-07-15 23:33:15.544 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50370ab5937 ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.562867 containerd[1498]: 2025-07-15 23:33:15.546 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.563067 containerd[1498]: 2025-07-15 
23:33:15.546 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0", GenerateName:"calico-apiserver-7bb4f99774-", Namespace:"calico-apiserver", SelfLink:"", UID:"49a97146-9785-414f-b77a-cdbbdcd61b0a", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb4f99774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8", Pod:"calico-apiserver-7bb4f99774-qlvcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50370ab5937", MAC:"f2:b3:34:e2:f6:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:15.563142 containerd[1498]: 2025-07-15 23:33:15.558 [INFO][4233] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" Namespace="calico-apiserver" Pod="calico-apiserver-7bb4f99774-qlvcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb4f99774--qlvcz-eth0" Jul 15 23:33:15.570760 containerd[1498]: time="2025-07-15T23:33:15.570600784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-m7wgp,Uid:aca63a45-0475-4e53-bb61-3e648a3e1119,Namespace:calico-system,Attempt:0,} returns sandbox id \"82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e\"" Jul 15 23:33:15.573530 containerd[1498]: time="2025-07-15T23:33:15.573494534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:33:15.585234 containerd[1498]: time="2025-07-15T23:33:15.585178143Z" level=info msg="connecting to shim aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8" address="unix:///run/containerd/s/6f7cf27b990107ecb71ba5c0754eeea47af982036fff346c50107abf54a0dac7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:15.611126 systemd[1]: Started cri-containerd-aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8.scope - libcontainer container aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8. 
Jul 15 23:33:15.622311 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:15.641387 containerd[1498]: time="2025-07-15T23:33:15.641305145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb4f99774-qlvcz,Uid:49a97146-9785-414f-b77a-cdbbdcd61b0a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8\"" Jul 15 23:33:16.314706 containerd[1498]: time="2025-07-15T23:33:16.314669587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-sw4nq,Uid:181456f3-ba09-4518-bb01-853bb51ff674,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:33:16.414633 systemd-networkd[1442]: cali108bcfb224f: Link UP Jul 15 23:33:16.415021 systemd-networkd[1442]: cali108bcfb224f: Gained carrier Jul 15 23:33:16.424810 containerd[1498]: 2025-07-15 23:33:16.336 [INFO][4389] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:16.424810 containerd[1498]: 2025-07-15 23:33:16.355 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0 calico-apiserver-87f5777cd- calico-apiserver 181456f3-ba09-4518-bb01-853bb51ff674 841 0 2025-07-15 23:32:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87f5777cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87f5777cd-sw4nq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali108bcfb224f [] [] }} ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-" Jul 15 23:33:16.424810 containerd[1498]: 2025-07-15 23:33:16.355 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.424810 containerd[1498]: 2025-07-15 23:33:16.378 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" HandleID="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Workload="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.378 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" HandleID="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Workload="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87f5777cd-sw4nq", "timestamp":"2025-07-15 23:33:16.378791855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.378 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.379 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.379 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.388 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" host="localhost" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.392 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.395 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.397 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.399 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:16.425045 containerd[1498]: 2025-07-15 23:33:16.399 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" host="localhost" Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.401 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924 Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.404 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" host="localhost" Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.410 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" host="localhost" Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.410 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" host="localhost" Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.410 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:16.425251 containerd[1498]: 2025-07-15 23:33:16.410 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" HandleID="k8s-pod-network.fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Workload="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.425421 containerd[1498]: 2025-07-15 23:33:16.412 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0", GenerateName:"calico-apiserver-87f5777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"181456f3-ba09-4518-bb01-853bb51ff674", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f5777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87f5777cd-sw4nq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali108bcfb224f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:16.425473 containerd[1498]: 2025-07-15 23:33:16.412 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.425473 containerd[1498]: 2025-07-15 23:33:16.412 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali108bcfb224f ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.425473 containerd[1498]: 2025-07-15 23:33:16.415 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.425548 containerd[1498]: 2025-07-15 23:33:16.415 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0", GenerateName:"calico-apiserver-87f5777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"181456f3-ba09-4518-bb01-853bb51ff674", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f5777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924", Pod:"calico-apiserver-87f5777cd-sw4nq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali108bcfb224f", MAC:"fa:92:de:d0:44:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:16.425591 containerd[1498]: 2025-07-15 23:33:16.422 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-sw4nq" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--sw4nq-eth0" Jul 15 23:33:16.440844 containerd[1498]: time="2025-07-15T23:33:16.440749098Z" level=info msg="connecting to shim fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924" address="unix:///run/containerd/s/ef6a891f97c9ef8bad64b3faa5582afbefccce9baf7e23f0c29397e842c56b17" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:16.470073 systemd[1]: Started cri-containerd-fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924.scope - libcontainer container fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924. Jul 15 23:33:16.482700 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:16.517594 containerd[1498]: time="2025-07-15T23:33:16.517533803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-sw4nq,Uid:181456f3-ba09-4518-bb01-853bb51ff674,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924\"" Jul 15 23:33:17.140687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450269719.mount: Deactivated successfully. 
Jul 15 23:33:17.150016 systemd-networkd[1442]: cali76c7b88125e: Gained IPv6LL Jul 15 23:33:17.276350 systemd-networkd[1442]: cali50370ab5937: Gained IPv6LL Jul 15 23:33:17.547753 containerd[1498]: time="2025-07-15T23:33:17.547702670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:17.548108 containerd[1498]: time="2025-07-15T23:33:17.548062706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 23:33:17.549137 containerd[1498]: time="2025-07-15T23:33:17.549101611Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:17.550734 containerd[1498]: time="2025-07-15T23:33:17.550702934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:17.551882 containerd[1498]: time="2025-07-15T23:33:17.551848410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.978316232s" Jul 15 23:33:17.551923 containerd[1498]: time="2025-07-15T23:33:17.551879653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 23:33:17.553387 containerd[1498]: time="2025-07-15T23:33:17.553347721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:33:17.554948 
containerd[1498]: time="2025-07-15T23:33:17.554608889Z" level=info msg="CreateContainer within sandbox \"82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:33:17.561543 containerd[1498]: time="2025-07-15T23:33:17.561505987Z" level=info msg="Container c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:17.568641 containerd[1498]: time="2025-07-15T23:33:17.568602386Z" level=info msg="CreateContainer within sandbox \"82dac6d1d4d5375ead8258ee10b061e1f99e6e1e986a0922c64344470d0ba72e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427\"" Jul 15 23:33:17.569146 containerd[1498]: time="2025-07-15T23:33:17.569117718Z" level=info msg="StartContainer for \"c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427\"" Jul 15 23:33:17.570620 containerd[1498]: time="2025-07-15T23:33:17.570587507Z" level=info msg="connecting to shim c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427" address="unix:///run/containerd/s/1f70740a6c170f753e903ff4e2558fe0a256eca468df5eabc0fe06f6e3da7ff3" protocol=ttrpc version=3 Jul 15 23:33:17.596104 systemd[1]: Started cri-containerd-c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427.scope - libcontainer container c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427. Jul 15 23:33:17.642018 containerd[1498]: time="2025-07-15T23:33:17.641971254Z" level=info msg="StartContainer for \"c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427\" returns successfully" Jul 15 23:33:17.702010 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). 
Jul 15 23:33:17.757185 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:33:17.758810 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:33:17.762753 systemd-logind[1476]: New session 8 of user core. Jul 15 23:33:17.773078 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:33:17.999069 sshd[4542]: Connection closed by 10.0.0.1 port 40666 Jul 15 23:33:17.999322 sshd-session[4537]: pam_unix(sshd:session): session closed for user core Jul 15 23:33:18.003105 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:40666.service: Deactivated successfully. Jul 15 23:33:18.005178 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:33:18.006077 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:33:18.007842 systemd-logind[1476]: Removed session 8. Jul 15 23:33:18.314883 containerd[1498]: time="2025-07-15T23:33:18.314842127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-jfddx,Uid:2faee96c-ca0f-4547-884b-1082f57853b3,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:33:18.315084 containerd[1498]: time="2025-07-15T23:33:18.315013383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c48566cf-hvx9f,Uid:27fd4573-e26a-489d-805e-62cf57fe804e,Namespace:calico-system,Attempt:0,}" Jul 15 23:33:18.315379 containerd[1498]: time="2025-07-15T23:33:18.315334615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4msq,Uid:50e26b50-6338-4565-b6ac-b31e704a7c57,Namespace:kube-system,Attempt:0,}" Jul 15 23:33:18.402469 kubelet[2636]: I0715 23:33:18.402420 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:18.428640 systemd-networkd[1442]: cali108bcfb224f: Gained IPv6LL Jul 15 23:33:18.503701 systemd-networkd[1442]: cali5210801e512: Link UP Jul 15 
23:33:18.503835 systemd-networkd[1442]: cali5210801e512: Gained carrier Jul 15 23:33:18.520768 containerd[1498]: 2025-07-15 23:33:18.382 [INFO][4580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:18.520768 containerd[1498]: 2025-07-15 23:33:18.396 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0 calico-kube-controllers-64c48566cf- calico-system 27fd4573-e26a-489d-805e-62cf57fe804e 839 0 2025-07-15 23:32:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64c48566cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64c48566cf-hvx9f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5210801e512 [] [] }} ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-" Jul 15 23:33:18.520768 containerd[1498]: 2025-07-15 23:33:18.396 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.520768 containerd[1498]: 2025-07-15 23:33:18.440 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" HandleID="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" 
Workload="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.440 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" HandleID="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Workload="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64c48566cf-hvx9f", "timestamp":"2025-07-15 23:33:18.440008996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.440 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.440 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.440 [INFO][4634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.452 [INFO][4634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" host="localhost" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.459 [INFO][4634] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.467 [INFO][4634] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.471 [INFO][4634] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.474 [INFO][4634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.521013 containerd[1498]: 2025-07-15 23:33:18.474 [INFO][4634] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" host="localhost" Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.477 [INFO][4634] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1 Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.482 [INFO][4634] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" host="localhost" Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4634] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" host="localhost" Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" host="localhost" Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:18.521213 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" HandleID="k8s-pod-network.869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Workload="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.521368 containerd[1498]: 2025-07-15 23:33:18.498 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0", GenerateName:"calico-kube-controllers-64c48566cf-", Namespace:"calico-system", SelfLink:"", UID:"27fd4573-e26a-489d-805e-62cf57fe804e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c48566cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64c48566cf-hvx9f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5210801e512", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.521416 containerd[1498]: 2025-07-15 23:33:18.498 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.521416 containerd[1498]: 2025-07-15 23:33:18.499 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5210801e512 ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.521416 containerd[1498]: 2025-07-15 23:33:18.503 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.521476 containerd[1498]: 
2025-07-15 23:33:18.504 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0", GenerateName:"calico-kube-controllers-64c48566cf-", Namespace:"calico-system", SelfLink:"", UID:"27fd4573-e26a-489d-805e-62cf57fe804e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c48566cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1", Pod:"calico-kube-controllers-64c48566cf-hvx9f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5210801e512", MAC:"62:b0:7a:48:67:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.521534 containerd[1498]: 
2025-07-15 23:33:18.516 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" Namespace="calico-system" Pod="calico-kube-controllers-64c48566cf-hvx9f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c48566cf--hvx9f-eth0" Jul 15 23:33:18.525945 kubelet[2636]: I0715 23:33:18.524997 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-m7wgp" podStartSLOduration=21.545558913 podStartE2EDuration="23.524980379s" podCreationTimestamp="2025-07-15 23:32:55 +0000 UTC" firstStartedPulling="2025-07-15 23:33:15.573123214 +0000 UTC m=+39.351341948" lastFinishedPulling="2025-07-15 23:33:17.55254468 +0000 UTC m=+41.330763414" observedRunningTime="2025-07-15 23:33:18.497601758 +0000 UTC m=+42.275820492" watchObservedRunningTime="2025-07-15 23:33:18.524980379 +0000 UTC m=+42.303199113" Jul 15 23:33:18.547222 containerd[1498]: time="2025-07-15T23:33:18.547179329Z" level=info msg="connecting to shim 869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1" address="unix:///run/containerd/s/988190911634ddf829e06fd20a826c31636bbbdb713e73d80e8c95bb73e61abc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:18.581167 systemd[1]: Started cri-containerd-869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1.scope - libcontainer container 869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1. 
Jul 15 23:33:18.612666 systemd-networkd[1442]: cali30c99953948: Link UP Jul 15 23:33:18.613112 systemd-networkd[1442]: cali30c99953948: Gained carrier Jul 15 23:33:18.626359 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:18.634079 containerd[1498]: 2025-07-15 23:33:18.373 [INFO][4581] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:18.634079 containerd[1498]: 2025-07-15 23:33:18.395 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0 calico-apiserver-87f5777cd- calico-apiserver 2faee96c-ca0f-4547-884b-1082f57853b3 837 0 2025-07-15 23:32:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87f5777cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-87f5777cd-jfddx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30c99953948 [] [] }} ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-" Jul 15 23:33:18.634079 containerd[1498]: 2025-07-15 23:33:18.395 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.634079 containerd[1498]: 2025-07-15 23:33:18.445 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.446 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000116ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-87f5777cd-jfddx", "timestamp":"2025-07-15 23:33:18.444635732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.446 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.491 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.553 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" host="localhost" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.559 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.567 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.569 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.574 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.634762 containerd[1498]: 2025-07-15 23:33:18.574 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" host="localhost" Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.575 [INFO][4630] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.580 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" host="localhost" Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.603 [INFO][4630] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" host="localhost" Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.603 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" host="localhost" Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.603 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:18.635961 containerd[1498]: 2025-07-15 23:33:18.603 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.636110 containerd[1498]: 2025-07-15 23:33:18.609 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0", GenerateName:"calico-apiserver-87f5777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2faee96c-ca0f-4547-884b-1082f57853b3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f5777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-87f5777cd-jfddx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c99953948", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.636166 containerd[1498]: 2025-07-15 23:33:18.609 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.636166 containerd[1498]: 2025-07-15 23:33:18.610 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30c99953948 ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.636166 containerd[1498]: 2025-07-15 23:33:18.613 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.636230 containerd[1498]: 2025-07-15 23:33:18.614 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0", GenerateName:"calico-apiserver-87f5777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2faee96c-ca0f-4547-884b-1082f57853b3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87f5777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be", Pod:"calico-apiserver-87f5777cd-jfddx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c99953948", MAC:"ea:43:9b:f9:36:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.636276 containerd[1498]: 2025-07-15 23:33:18.631 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Namespace="calico-apiserver" Pod="calico-apiserver-87f5777cd-jfddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:18.695176 containerd[1498]: time="2025-07-15T23:33:18.695138007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c48566cf-hvx9f,Uid:27fd4573-e26a-489d-805e-62cf57fe804e,Namespace:calico-system,Attempt:0,} returns sandbox id \"869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1\"" Jul 15 23:33:18.723402 containerd[1498]: time="2025-07-15T23:33:18.722912387Z" level=info msg="connecting to shim dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" address="unix:///run/containerd/s/ff5f9a399a7b06c277784f270f6de65d15c6df5c589372927c2063395730d856" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:18.737141 systemd-networkd[1442]: cali9b10ad1ec51: Link UP Jul 15 23:33:18.738146 systemd-networkd[1442]: cali9b10ad1ec51: Gained carrier Jul 15 23:33:18.749118 systemd[1]: Started cri-containerd-dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be.scope - libcontainer container dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be. 
Jul 15 23:33:18.756961 containerd[1498]: 2025-07-15 23:33:18.365 [INFO][4603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:33:18.756961 containerd[1498]: 2025-07-15 23:33:18.382 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--t4msq-eth0 coredns-668d6bf9bc- kube-system 50e26b50-6338-4565-b6ac-b31e704a7c57 840 0 2025-07-15 23:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-t4msq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b10ad1ec51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-" Jul 15 23:33:18.756961 containerd[1498]: 2025-07-15 23:33:18.382 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.756961 containerd[1498]: 2025-07-15 23:33:18.447 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" HandleID="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Workload="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.447 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" 
HandleID="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Workload="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-t4msq", "timestamp":"2025-07-15 23:33:18.447605665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.447 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.603 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.608 [INFO][4623] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.653 [INFO][4623] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" host="localhost" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.693 [INFO][4623] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.700 [INFO][4623] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.703 [INFO][4623] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.706 [INFO][4623] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:18.757182 containerd[1498]: 2025-07-15 23:33:18.706 
[INFO][4623] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" host="localhost" Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.708 [INFO][4623] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4 Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.714 [INFO][4623] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" host="localhost" Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.723 [INFO][4623] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" host="localhost" Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.723 [INFO][4623] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" host="localhost" Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.723 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:33:18.757432 containerd[1498]: 2025-07-15 23:33:18.723 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" HandleID="k8s-pod-network.3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Workload="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.757558 containerd[1498]: 2025-07-15 23:33:18.727 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t4msq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50e26b50-6338-4565-b6ac-b31e704a7c57", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-t4msq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b10ad1ec51", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.757624 containerd[1498]: 2025-07-15 23:33:18.727 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.757624 containerd[1498]: 2025-07-15 23:33:18.727 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b10ad1ec51 ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.757624 containerd[1498]: 2025-07-15 23:33:18.740 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.757689 containerd[1498]: 2025-07-15 23:33:18.740 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t4msq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50e26b50-6338-4565-b6ac-b31e704a7c57", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4", Pod:"coredns-668d6bf9bc-t4msq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b10ad1ec51", MAC:"d6:a5:06:55:90:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:18.757689 containerd[1498]: 2025-07-15 23:33:18.754 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4msq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t4msq-eth0" Jul 15 23:33:18.762586 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:18.821412 containerd[1498]: time="2025-07-15T23:33:18.821371782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87f5777cd-jfddx,Uid:2faee96c-ca0f-4547-884b-1082f57853b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\"" Jul 15 23:33:18.832966 containerd[1498]: time="2025-07-15T23:33:18.832432953Z" level=info msg="connecting to shim 3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4" address="unix:///run/containerd/s/edfef3b4977bb4bc44310ccd25cccc7aca1a70f5c10a7883582457236e8ce674" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:18.864061 systemd[1]: Started cri-containerd-3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4.scope - libcontainer container 3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4. 
Jul 15 23:33:18.877203 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:18.905154 containerd[1498]: time="2025-07-15T23:33:18.905076360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4msq,Uid:50e26b50-6338-4565-b6ac-b31e704a7c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4\"" Jul 15 23:33:18.909160 containerd[1498]: time="2025-07-15T23:33:18.909126320Z" level=info msg="CreateContainer within sandbox \"3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:33:18.924614 containerd[1498]: time="2025-07-15T23:33:18.924527039Z" level=info msg="Container 36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:18.932655 containerd[1498]: time="2025-07-15T23:33:18.932617237Z" level=info msg="CreateContainer within sandbox \"3ec551593b0ec640dbdcf7894ce13a37274011605246f78952e82a0229d0c8b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d\"" Jul 15 23:33:18.933147 containerd[1498]: time="2025-07-15T23:33:18.933123647Z" level=info msg="StartContainer for \"36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d\"" Jul 15 23:33:18.934313 containerd[1498]: time="2025-07-15T23:33:18.934278841Z" level=info msg="connecting to shim 36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d" address="unix:///run/containerd/s/edfef3b4977bb4bc44310ccd25cccc7aca1a70f5c10a7883582457236e8ce674" protocol=ttrpc version=3 Jul 15 23:33:18.959101 systemd[1]: Started cri-containerd-36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d.scope - libcontainer container 36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d. 
Jul 15 23:33:18.999602 containerd[1498]: time="2025-07-15T23:33:18.997681777Z" level=info msg="StartContainer for \"36a26d97a1caf1d01ec3c38788cefa1e3d1c23b757adee859ead9be00cb9f11d\" returns successfully" Jul 15 23:33:19.315379 containerd[1498]: time="2025-07-15T23:33:19.315328076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjtwg,Uid:7610213d-974f-4d75-b1f9-ebf5a38cf36b,Namespace:kube-system,Attempt:0,}" Jul 15 23:33:19.315558 containerd[1498]: time="2025-07-15T23:33:19.315387761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9dn4,Uid:8911d17a-4907-4974-b467-7de008d2f4c7,Namespace:calico-system,Attempt:0,}" Jul 15 23:33:19.432756 systemd-networkd[1442]: vxlan.calico: Link UP Jul 15 23:33:19.432762 systemd-networkd[1442]: vxlan.calico: Gained carrier Jul 15 23:33:19.492744 kubelet[2636]: I0715 23:33:19.492713 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:19.517456 kubelet[2636]: I0715 23:33:19.517394 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t4msq" podStartSLOduration=37.517375081 podStartE2EDuration="37.517375081s" podCreationTimestamp="2025-07-15 23:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:33:19.500876653 +0000 UTC m=+43.279095387" watchObservedRunningTime="2025-07-15 23:33:19.517375081 +0000 UTC m=+43.295593815" Jul 15 23:33:19.563502 systemd-networkd[1442]: caliaa2eb59a26e: Link UP Jul 15 23:33:19.563719 systemd-networkd[1442]: caliaa2eb59a26e: Gained carrier Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.407 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h9dn4-eth0 csi-node-driver- calico-system 8911d17a-4907-4974-b467-7de008d2f4c7 737 0 2025-07-15 
23:32:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h9dn4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa2eb59a26e [] [] }} ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.407 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.471 [INFO][4936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" HandleID="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Workload="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.471 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" HandleID="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Workload="localhost-k8s-csi--node--driver--h9dn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h9dn4", "timestamp":"2025-07-15 23:33:19.470964374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.472 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.472 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.472 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.485 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.492 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.509 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.514 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.523 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.523 [INFO][4936] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.526 [INFO][4936] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec Jul 15 23:33:19.578187 
containerd[1498]: 2025-07-15 23:33:19.532 [INFO][4936] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.544 [INFO][4936] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.544 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" host="localhost" Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.544 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:19.578187 containerd[1498]: 2025-07-15 23:33:19.544 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" HandleID="k8s-pod-network.2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Workload="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.549 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9dn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8911d17a-4907-4974-b467-7de008d2f4c7", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 15, 23, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h9dn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa2eb59a26e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.549 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.549 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa2eb59a26e ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.561 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.562 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9dn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8911d17a-4907-4974-b467-7de008d2f4c7", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec", Pod:"csi-node-driver-h9dn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa2eb59a26e", 
MAC:"26:4b:c1:d1:d8:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:19.579263 containerd[1498]: 2025-07-15 23:33:19.575 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" Namespace="calico-system" Pod="csi-node-driver-h9dn4" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9dn4-eth0" Jul 15 23:33:19.617857 containerd[1498]: time="2025-07-15T23:33:19.617651172Z" level=info msg="connecting to shim 2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec" address="unix:///run/containerd/s/49a4a71709a9a190d7539a9745cb9f2ef8549ada7b261f97ce5ee7ed1cc671c6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:19.638512 systemd-networkd[1442]: cali3c3ab7879bb: Link UP Jul 15 23:33:19.638657 systemd-networkd[1442]: cali3c3ab7879bb: Gained carrier Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.425 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0 coredns-668d6bf9bc- kube-system 7610213d-974f-4d75-b1f9-ebf5a38cf36b 834 0 2025-07-15 23:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wjtwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c3ab7879bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.426 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.477 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" HandleID="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Workload="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.478 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" HandleID="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Workload="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wjtwg", "timestamp":"2025-07-15 23:33:19.47766926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.478 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.544 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.545 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.585 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.592 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.604 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.607 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.610 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.610 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.615 [INFO][4949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.620 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.629 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.629 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" host="localhost" Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.629 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:19.655509 containerd[1498]: 2025-07-15 23:33:19.629 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" HandleID="k8s-pod-network.d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Workload="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.635 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7610213d-974f-4d75-b1f9-ebf5a38cf36b", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wjtwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c3ab7879bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.635 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.635 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c3ab7879bb ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.638 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.639 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7610213d-974f-4d75-b1f9-ebf5a38cf36b", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d", Pod:"coredns-668d6bf9bc-wjtwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c3ab7879bb", MAC:"8a:1d:17:d9:2a:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:33:19.656757 containerd[1498]: 2025-07-15 23:33:19.650 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjtwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjtwg-eth0" Jul 15 23:33:19.666099 systemd[1]: Started cri-containerd-2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec.scope - libcontainer container 2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec. Jul 15 23:33:19.681702 containerd[1498]: time="2025-07-15T23:33:19.681619569Z" level=info msg="connecting to shim d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d" address="unix:///run/containerd/s/d9c8471f53e696b74a2d2c9ef8d738ecd9b160398deb61d11cc7b3cc5e6b73fa" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:33:19.691532 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:19.714125 systemd[1]: Started cri-containerd-d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d.scope - libcontainer container d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d. 
Jul 15 23:33:19.717807 containerd[1498]: time="2025-07-15T23:33:19.717758407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9dn4,Uid:8911d17a-4907-4974-b467-7de008d2f4c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec\"" Jul 15 23:33:19.749715 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:33:19.787975 containerd[1498]: time="2025-07-15T23:33:19.787901677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjtwg,Uid:7610213d-974f-4d75-b1f9-ebf5a38cf36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d\"" Jul 15 23:33:19.790879 containerd[1498]: time="2025-07-15T23:33:19.790692546Z" level=info msg="CreateContainer within sandbox \"d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:33:19.799836 containerd[1498]: time="2025-07-15T23:33:19.799760699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:19.801760 containerd[1498]: time="2025-07-15T23:33:19.801534429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 23:33:19.802666 containerd[1498]: time="2025-07-15T23:33:19.802580890Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:19.804761 containerd[1498]: time="2025-07-15T23:33:19.804690893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 23:33:19.805707 containerd[1498]: time="2025-07-15T23:33:19.805544535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.252137408s" Jul 15 23:33:19.805707 containerd[1498]: time="2025-07-15T23:33:19.805575898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:33:19.806602 containerd[1498]: time="2025-07-15T23:33:19.806576595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:33:19.808767 containerd[1498]: time="2025-07-15T23:33:19.808414692Z" level=info msg="CreateContainer within sandbox \"aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:33:19.809371 containerd[1498]: time="2025-07-15T23:33:19.809338861Z" level=info msg="Container c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:19.816755 containerd[1498]: time="2025-07-15T23:33:19.816701649Z" level=info msg="Container d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:19.816968 containerd[1498]: time="2025-07-15T23:33:19.816717531Z" level=info msg="CreateContainer within sandbox \"d23aa447c18f8a92ab98db7dbfa32e8535a1c17e01db5b93974644babb21804d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05\"" Jul 15 23:33:19.817812 containerd[1498]: time="2025-07-15T23:33:19.817777913Z" level=info 
msg="StartContainer for \"c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05\"" Jul 15 23:33:19.818548 containerd[1498]: time="2025-07-15T23:33:19.818500222Z" level=info msg="connecting to shim c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05" address="unix:///run/containerd/s/d9c8471f53e696b74a2d2c9ef8d738ecd9b160398deb61d11cc7b3cc5e6b73fa" protocol=ttrpc version=3 Jul 15 23:33:19.829069 containerd[1498]: time="2025-07-15T23:33:19.826911032Z" level=info msg="CreateContainer within sandbox \"aa3946cc29b0a2a092e4e654c15bd659a0ae6e85668103f2a254fd82b0f2b3a8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066\"" Jul 15 23:33:19.831420 containerd[1498]: time="2025-07-15T23:33:19.831389823Z" level=info msg="StartContainer for \"d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066\"" Jul 15 23:33:19.832513 containerd[1498]: time="2025-07-15T23:33:19.832477848Z" level=info msg="connecting to shim d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066" address="unix:///run/containerd/s/6f7cf27b990107ecb71ba5c0754eeea47af982036fff346c50107abf54a0dac7" protocol=ttrpc version=3 Jul 15 23:33:19.836589 systemd-networkd[1442]: cali5210801e512: Gained IPv6LL Jul 15 23:33:19.850104 systemd[1]: Started cri-containerd-c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05.scope - libcontainer container c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05. Jul 15 23:33:19.853650 systemd[1]: Started cri-containerd-d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066.scope - libcontainer container d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066. 
Jul 15 23:33:19.889106 containerd[1498]: time="2025-07-15T23:33:19.889073014Z" level=info msg="StartContainer for \"c9d8bde4873e5956aa81cd45034c173349ea9ac689768984618c0ff6baecee05\" returns successfully" Jul 15 23:33:19.914108 containerd[1498]: time="2025-07-15T23:33:19.914031536Z" level=info msg="StartContainer for \"d283ba6ef164da32225ed4db7721e726e02d597f4078bb3b3ee1cc7a6bcf3066\" returns successfully" Jul 15 23:33:20.052947 containerd[1498]: time="2025-07-15T23:33:20.052789053Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:20.054970 containerd[1498]: time="2025-07-15T23:33:20.054606184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:33:20.056115 containerd[1498]: time="2025-07-15T23:33:20.056091364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 249.487286ms" Jul 15 23:33:20.056236 containerd[1498]: time="2025-07-15T23:33:20.056219016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:33:20.058283 containerd[1498]: time="2025-07-15T23:33:20.058263408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:33:20.059959 containerd[1498]: time="2025-07-15T23:33:20.059918163Z" level=info msg="CreateContainer within sandbox \"fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:33:20.066849 containerd[1498]: 
time="2025-07-15T23:33:20.066148869Z" level=info msg="Container ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:20.071970 containerd[1498]: time="2025-07-15T23:33:20.071940533Z" level=info msg="CreateContainer within sandbox \"fa821e6de9f7b4018e6cb8f8b529d599f8ab9110e4d64bf9a89de987f7437924\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c\"" Jul 15 23:33:20.072513 containerd[1498]: time="2025-07-15T23:33:20.072474303Z" level=info msg="StartContainer for \"ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c\"" Jul 15 23:33:20.074111 containerd[1498]: time="2025-07-15T23:33:20.074048531Z" level=info msg="connecting to shim ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c" address="unix:///run/containerd/s/ef6a891f97c9ef8bad64b3faa5582afbefccce9baf7e23f0c29397e842c56b17" protocol=ttrpc version=3 Jul 15 23:33:20.098065 systemd[1]: Started cri-containerd-ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c.scope - libcontainer container ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c. 
Jul 15 23:33:20.148169 containerd[1498]: time="2025-07-15T23:33:20.148070607Z" level=info msg="StartContainer for \"ccc0ee3abbd0b3711528020433a9f3e9032b260848a0366d5c60820f30433e9c\" returns successfully" Jul 15 23:33:20.157657 systemd-networkd[1442]: cali9b10ad1ec51: Gained IPv6LL Jul 15 23:33:20.157915 systemd-networkd[1442]: cali30c99953948: Gained IPv6LL Jul 15 23:33:20.517871 kubelet[2636]: I0715 23:33:20.517652 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bb4f99774-qlvcz" podStartSLOduration=24.354169639 podStartE2EDuration="28.517638537s" podCreationTimestamp="2025-07-15 23:32:52 +0000 UTC" firstStartedPulling="2025-07-15 23:33:15.642990005 +0000 UTC m=+39.421208739" lastFinishedPulling="2025-07-15 23:33:19.806458903 +0000 UTC m=+43.584677637" observedRunningTime="2025-07-15 23:33:20.517283104 +0000 UTC m=+44.295501838" watchObservedRunningTime="2025-07-15 23:33:20.517638537 +0000 UTC m=+44.295857271" Jul 15 23:33:20.532047 kubelet[2636]: I0715 23:33:20.530752 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87f5777cd-sw4nq" podStartSLOduration=25.992651386 podStartE2EDuration="29.530734888s" podCreationTimestamp="2025-07-15 23:32:51 +0000 UTC" firstStartedPulling="2025-07-15 23:33:16.518855181 +0000 UTC m=+40.297073915" lastFinishedPulling="2025-07-15 23:33:20.056938683 +0000 UTC m=+43.835157417" observedRunningTime="2025-07-15 23:33:20.529911451 +0000 UTC m=+44.308130225" watchObservedRunningTime="2025-07-15 23:33:20.530734888 +0000 UTC m=+44.308953622" Jul 15 23:33:20.542537 kubelet[2636]: I0715 23:33:20.542483 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wjtwg" podStartSLOduration=38.542465551 podStartE2EDuration="38.542465551s" podCreationTimestamp="2025-07-15 23:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:33:20.541972024 +0000 UTC m=+44.320190798" watchObservedRunningTime="2025-07-15 23:33:20.542465551 +0000 UTC m=+44.320684285" Jul 15 23:33:21.372429 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Jul 15 23:33:21.437346 systemd-networkd[1442]: caliaa2eb59a26e: Gained IPv6LL Jul 15 23:33:21.516650 kubelet[2636]: I0715 23:33:21.516278 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:21.516650 kubelet[2636]: I0715 23:33:21.516291 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:21.564090 systemd-networkd[1442]: cali3c3ab7879bb: Gained IPv6LL Jul 15 23:33:21.913896 containerd[1498]: time="2025-07-15T23:33:21.913846925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:21.914643 containerd[1498]: time="2025-07-15T23:33:21.914370053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 23:33:21.915299 containerd[1498]: time="2025-07-15T23:33:21.915254934Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:21.917035 containerd[1498]: time="2025-07-15T23:33:21.917001895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:21.917844 containerd[1498]: time="2025-07-15T23:33:21.917792727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.85942123s" Jul 15 23:33:21.917993 containerd[1498]: time="2025-07-15T23:33:21.917822410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 23:33:21.919061 containerd[1498]: time="2025-07-15T23:33:21.918879747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:33:21.925064 containerd[1498]: time="2025-07-15T23:33:21.925033313Z" level=info msg="CreateContainer within sandbox \"869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:33:21.932452 containerd[1498]: time="2025-07-15T23:33:21.932415951Z" level=info msg="Container 27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:21.939975 containerd[1498]: time="2025-07-15T23:33:21.939904238Z" level=info msg="CreateContainer within sandbox \"869fbb0813adea1fc4b6fc6fb42e9e772955fee458e53994ec680b4f96801fa1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271\"" Jul 15 23:33:21.940607 containerd[1498]: time="2025-07-15T23:33:21.940572180Z" level=info msg="StartContainer for \"27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271\"" Jul 15 23:33:21.941909 containerd[1498]: time="2025-07-15T23:33:21.941881220Z" level=info msg="connecting to shim 27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271" address="unix:///run/containerd/s/988190911634ddf829e06fd20a826c31636bbbdb713e73d80e8c95bb73e61abc" protocol=ttrpc version=3 Jul 15 23:33:21.970105 systemd[1]: Started 
cri-containerd-27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271.scope - libcontainer container 27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271. Jul 15 23:33:22.005192 containerd[1498]: time="2025-07-15T23:33:22.005130381Z" level=info msg="StartContainer for \"27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271\" returns successfully" Jul 15 23:33:22.176001 containerd[1498]: time="2025-07-15T23:33:22.175269269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:33:22.177672 containerd[1498]: time="2025-07-15T23:33:22.177560795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:33:22.188038 containerd[1498]: time="2025-07-15T23:33:22.187988732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 269.078902ms" Jul 15 23:33:22.188038 containerd[1498]: time="2025-07-15T23:33:22.188034416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:33:22.189212 containerd[1498]: time="2025-07-15T23:33:22.189163557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:33:22.191680 containerd[1498]: time="2025-07-15T23:33:22.191630019Z" level=info msg="CreateContainer within sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:33:22.199070 containerd[1498]: time="2025-07-15T23:33:22.198841667Z" level=info 
msg="Container 83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:33:22.206653 containerd[1498]: time="2025-07-15T23:33:22.206611205Z" level=info msg="CreateContainer within sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\"" Jul 15 23:33:22.207172 containerd[1498]: time="2025-07-15T23:33:22.207125171Z" level=info msg="StartContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\"" Jul 15 23:33:22.208724 containerd[1498]: time="2025-07-15T23:33:22.208697113Z" level=info msg="connecting to shim 83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21" address="unix:///run/containerd/s/ff5f9a399a7b06c277784f270f6de65d15c6df5c589372927c2063395730d856" protocol=ttrpc version=3 Jul 15 23:33:22.229075 systemd[1]: Started cri-containerd-83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21.scope - libcontainer container 83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21. 
Jul 15 23:33:22.267641 containerd[1498]: time="2025-07-15T23:33:22.267583164Z" level=info msg="StartContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" returns successfully" Jul 15 23:33:22.549026 kubelet[2636]: I0715 23:33:22.548356 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64c48566cf-hvx9f" podStartSLOduration=23.327357923 podStartE2EDuration="26.548337551s" podCreationTimestamp="2025-07-15 23:32:56 +0000 UTC" firstStartedPulling="2025-07-15 23:33:18.697829633 +0000 UTC m=+42.476048327" lastFinishedPulling="2025-07-15 23:33:21.918809221 +0000 UTC m=+45.697027955" observedRunningTime="2025-07-15 23:33:22.532140336 +0000 UTC m=+46.310359070" watchObservedRunningTime="2025-07-15 23:33:22.548337551 +0000 UTC m=+46.326556285" Jul 15 23:33:22.550523 kubelet[2636]: I0715 23:33:22.550408 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87f5777cd-jfddx" podStartSLOduration=28.183802878 podStartE2EDuration="31.550396856s" podCreationTimestamp="2025-07-15 23:32:51 +0000 UTC" firstStartedPulling="2025-07-15 23:33:18.822490652 +0000 UTC m=+42.600709386" lastFinishedPulling="2025-07-15 23:33:22.18908463 +0000 UTC m=+45.967303364" observedRunningTime="2025-07-15 23:33:22.549842846 +0000 UTC m=+46.328061580" watchObservedRunningTime="2025-07-15 23:33:22.550396856 +0000 UTC m=+46.328615590" Jul 15 23:33:23.013521 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366). Jul 15 23:33:23.095503 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:33:23.098188 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:33:23.102964 systemd-logind[1476]: New session 9 of user core. 
Jul 15 23:33:23.115157 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:33:23.340830 sshd[5369]: Connection closed by 10.0.0.1 port 47366
Jul 15 23:33:23.341407 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:23.348306 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:47366.service: Deactivated successfully.
Jul 15 23:33:23.350181 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:33:23.353798 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:33:23.355432 systemd-logind[1476]: Removed session 9.
Jul 15 23:33:23.515258 containerd[1498]: time="2025-07-15T23:33:23.515207589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:23.516692 containerd[1498]: time="2025-07-15T23:33:23.516654676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 15 23:33:23.517753 containerd[1498]: time="2025-07-15T23:33:23.517713809Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:23.524335 containerd[1498]: time="2025-07-15T23:33:23.524217382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:23.525448 containerd[1498]: time="2025-07-15T23:33:23.525415927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.336217406s"
Jul 15 23:33:23.525448 containerd[1498]: time="2025-07-15T23:33:23.525466572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 15 23:33:23.528347 containerd[1498]: time="2025-07-15T23:33:23.528307582Z" level=info msg="CreateContainer within sandbox \"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 15 23:33:23.529531 kubelet[2636]: I0715 23:33:23.529507 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:23.529785 kubelet[2636]: I0715 23:33:23.529752 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:23.541294 containerd[1498]: time="2025-07-15T23:33:23.541257481Z" level=info msg="Container 0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:33:23.548783 containerd[1498]: time="2025-07-15T23:33:23.548742780Z" level=info msg="CreateContainer within sandbox \"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e\""
Jul 15 23:33:23.549445 containerd[1498]: time="2025-07-15T23:33:23.549343713Z" level=info msg="StartContainer for \"0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e\""
Jul 15 23:33:23.551073 containerd[1498]: time="2025-07-15T23:33:23.551044542Z" level=info msg="connecting to shim 0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e" address="unix:///run/containerd/s/49a4a71709a9a190d7539a9745cb9f2ef8549ada7b261f97ce5ee7ed1cc671c6" protocol=ttrpc version=3
Jul 15 23:33:23.580156 systemd[1]: Started cri-containerd-0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e.scope - libcontainer container 0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e.
Jul 15 23:33:23.654782 containerd[1498]: time="2025-07-15T23:33:23.654154935Z" level=info msg="StartContainer for \"0cb7f986f7dfbedc731b00164afa6df6c27f77a0c25c0a7df63b01a0492a324e\" returns successfully"
Jul 15 23:33:23.655904 containerd[1498]: time="2025-07-15T23:33:23.655875926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 15 23:33:25.322527 containerd[1498]: time="2025-07-15T23:33:25.322445079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:25.323186 containerd[1498]: time="2025-07-15T23:33:25.323142418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 15 23:33:25.323786 containerd[1498]: time="2025-07-15T23:33:25.323747069Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:25.325853 containerd[1498]: time="2025-07-15T23:33:25.325803283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:33:25.326388 containerd[1498]: time="2025-07-15T23:33:25.326348089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.670438761s"
Jul 15 23:33:25.326388 containerd[1498]: time="2025-07-15T23:33:25.326384333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 15 23:33:25.328372 containerd[1498]: time="2025-07-15T23:33:25.328345098Z" level=info msg="CreateContainer within sandbox \"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 15 23:33:25.339958 containerd[1498]: time="2025-07-15T23:33:25.337195847Z" level=info msg="Container d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:33:25.348100 containerd[1498]: time="2025-07-15T23:33:25.347904233Z" level=info msg="CreateContainer within sandbox \"2fb95d007bb9261c095d3ed0b55e9e84b6dbfe6e2bf9e82c3ffce224ca70e6ec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b\""
Jul 15 23:33:25.359047 containerd[1498]: time="2025-07-15T23:33:25.359011173Z" level=info msg="StartContainer for \"d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b\""
Jul 15 23:33:25.360515 containerd[1498]: time="2025-07-15T23:33:25.360390489Z" level=info msg="connecting to shim d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b" address="unix:///run/containerd/s/49a4a71709a9a190d7539a9745cb9f2ef8549ada7b261f97ce5ee7ed1cc671c6" protocol=ttrpc version=3
Jul 15 23:33:25.379153 systemd[1]: Started cri-containerd-d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b.scope - libcontainer container d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b.
Jul 15 23:33:25.409263 containerd[1498]: time="2025-07-15T23:33:25.409196098Z" level=info msg="StartContainer for \"d8a7ee878647c3cd4365d685d943fc82399fa87aa29ee326e40a10ca54815a7b\" returns successfully"
Jul 15 23:33:25.554953 kubelet[2636]: I0715 23:33:25.554663 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h9dn4" podStartSLOduration=23.953722185 podStartE2EDuration="29.554648282s" podCreationTimestamp="2025-07-15 23:32:56 +0000 UTC" firstStartedPulling="2025-07-15 23:33:19.726123892 +0000 UTC m=+43.504342626" lastFinishedPulling="2025-07-15 23:33:25.327049989 +0000 UTC m=+49.105268723" observedRunningTime="2025-07-15 23:33:25.554355457 +0000 UTC m=+49.332574191" watchObservedRunningTime="2025-07-15 23:33:25.554648282 +0000 UTC m=+49.332867016"
Jul 15 23:33:26.381836 kubelet[2636]: I0715 23:33:26.381747 2636 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 15 23:33:26.382564 kubelet[2636]: I0715 23:33:26.382363 2636 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 15 23:33:27.917075 kubelet[2636]: I0715 23:33:27.917041 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:28.091239 containerd[1498]: time="2025-07-15T23:33:28.091195582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427\" id:\"d0ba66a04fc62093d5c07523a9604b82b1d1ec0ebd3aaa298ecd851fe7736435\" pid:5478 exit_status:1 exited_at:{seconds:1752622408 nanos:84656097}"
Jul 15 23:33:28.156322 containerd[1498]: time="2025-07-15T23:33:28.156281966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3361f585fa443d8d5d2cc57d6e579007b11dc32cf733dfb671cc6294ec27427\" id:\"58e0915adba64d546f0efb6f62bae01ce3a539ae99de8fcf88c9be702ab28674\" pid:5502 exit_status:1 exited_at:{seconds:1752622408 nanos:155953819}"
Jul 15 23:33:28.356154 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378).
Jul 15 23:33:28.411448 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:28.413326 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:28.418016 systemd-logind[1476]: New session 10 of user core.
Jul 15 23:33:28.428097 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:33:28.633765 sshd[5516]: Connection closed by 10.0.0.1 port 47378
Jul 15 23:33:28.633909 sshd-session[5514]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:28.651868 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:47378.service: Deactivated successfully.
Jul 15 23:33:28.654082 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:33:28.655402 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:33:28.658641 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388).
Jul 15 23:33:28.661819 systemd-logind[1476]: Removed session 10.
Jul 15 23:33:28.712457 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:28.713912 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:28.718955 systemd-logind[1476]: New session 11 of user core.
Jul 15 23:33:28.734137 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:33:28.975290 sshd[5533]: Connection closed by 10.0.0.1 port 47388
Jul 15 23:33:28.976164 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:28.986024 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:47388.service: Deactivated successfully.
Jul 15 23:33:28.988045 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:33:28.992588 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:33:28.997227 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:47404.service - OpenSSH per-connection server daemon (10.0.0.1:47404).
Jul 15 23:33:29.001958 systemd-logind[1476]: Removed session 11.
Jul 15 23:33:29.049850 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 47404 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:29.051204 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:29.055441 systemd-logind[1476]: New session 12 of user core.
Jul 15 23:33:29.072108 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:33:29.223181 sshd[5551]: Connection closed by 10.0.0.1 port 47404
Jul 15 23:33:29.223870 sshd-session[5549]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:29.226673 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:47404.service: Deactivated successfully.
Jul 15 23:33:29.228499 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:33:29.230289 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:33:29.231993 systemd-logind[1476]: Removed session 12.
Jul 15 23:33:29.345401 kubelet[2636]: I0715 23:33:29.345283 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:29.419032 kubelet[2636]: I0715 23:33:29.418995 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:29.423262 containerd[1498]: time="2025-07-15T23:33:29.423195028Z" level=info msg="StopContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" with timeout 30 (s)"
Jul 15 23:33:29.424567 containerd[1498]: time="2025-07-15T23:33:29.424520692Z" level=info msg="Stop container \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" with signal terminated"
Jul 15 23:33:29.454264 systemd[1]: cri-containerd-83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21.scope: Deactivated successfully.
Jul 15 23:33:29.454572 systemd[1]: cri-containerd-83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21.scope: Consumed 1.373s CPU time, 39.7M memory peak.
Jul 15 23:33:29.463338 containerd[1498]: time="2025-07-15T23:33:29.462820118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" id:\"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" pid:5340 exit_status:1 exited_at:{seconds:1752622409 nanos:462151385}"
Jul 15 23:33:29.466683 containerd[1498]: time="2025-07-15T23:33:29.466634339Z" level=info msg="received exit event container_id:\"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" id:\"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" pid:5340 exit_status:1 exited_at:{seconds:1752622409 nanos:462151385}"
Jul 15 23:33:29.491571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21-rootfs.mount: Deactivated successfully.
Jul 15 23:33:29.563324 containerd[1498]: time="2025-07-15T23:33:29.563220208Z" level=info msg="StopContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" returns successfully"
Jul 15 23:33:29.571317 containerd[1498]: time="2025-07-15T23:33:29.571186957Z" level=info msg="StopPodSandbox for \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\""
Jul 15 23:33:29.580923 containerd[1498]: time="2025-07-15T23:33:29.580830599Z" level=info msg="Container to stop \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:33:29.589946 systemd[1]: cri-containerd-dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be.scope: Deactivated successfully.
Jul 15 23:33:29.593381 containerd[1498]: time="2025-07-15T23:33:29.593350468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" pid:4751 exit_status:137 exited_at:{seconds:1752622409 nanos:593120250}"
Jul 15 23:33:29.617226 containerd[1498]: time="2025-07-15T23:33:29.617183710Z" level=info msg="shim disconnected" id=dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be namespace=k8s.io
Jul 15 23:33:29.618655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be-rootfs.mount: Deactivated successfully.
Jul 15 23:33:29.620161 containerd[1498]: time="2025-07-15T23:33:29.617218353Z" level=warning msg="cleaning up after shim disconnected" id=dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be namespace=k8s.io
Jul 15 23:33:29.620210 containerd[1498]: time="2025-07-15T23:33:29.620160906Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:33:29.641393 containerd[1498]: time="2025-07-15T23:33:29.641203648Z" level=error msg="Failed to handle event container_id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" pid:4751 exit_status:137 exited_at:{seconds:1752622409 nanos:593120250} for dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
Jul 15 23:33:29.641849 containerd[1498]: time="2025-07-15T23:33:29.641813856Z" level=info msg="received exit event sandbox_id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" exit_status:137 exited_at:{seconds:1752622409 nanos:593120250}"
Jul 15 23:33:29.644970 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be-shm.mount: Deactivated successfully.
Jul 15 23:33:29.698535 systemd-networkd[1442]: cali30c99953948: Link DOWN
Jul 15 23:33:29.698739 systemd-networkd[1442]: cali30c99953948: Lost carrier
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.697 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.697 [INFO][5636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" iface="eth0" netns="/var/run/netns/cni-45d1647b-1b39-57b8-2851-916f12beae14"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.697 [INFO][5636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" iface="eth0" netns="/var/run/netns/cni-45d1647b-1b39-57b8-2851-916f12beae14"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.704 [INFO][5636] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" after=6.503874ms iface="eth0" netns="/var/run/netns/cni-45d1647b-1b39-57b8-2851-916f12beae14"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.704 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.704 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.725 [INFO][5650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.725 [INFO][5650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.725 [INFO][5650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.767 [INFO][5650] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.767 [INFO][5650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.774 [INFO][5650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:33:29.781641 containerd[1498]: 2025-07-15 23:33:29.778 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:29.782905 containerd[1498]: time="2025-07-15T23:33:29.782860757Z" level=info msg="TearDown network for sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" successfully"
Jul 15 23:33:29.783249 containerd[1498]: time="2025-07-15T23:33:29.783068133Z" level=info msg="StopPodSandbox for \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" returns successfully"
Jul 15 23:33:29.785269 systemd[1]: run-netns-cni\x2d45d1647b\x2d1b39\x2d57b8\x2d2851\x2d916f12beae14.mount: Deactivated successfully.
Jul 15 23:33:29.810525 kubelet[2636]: I0715 23:33:29.810488 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2faee96c-ca0f-4547-884b-1082f57853b3-calico-apiserver-certs\") pod \"2faee96c-ca0f-4547-884b-1082f57853b3\" (UID: \"2faee96c-ca0f-4547-884b-1082f57853b3\") "
Jul 15 23:33:29.810680 kubelet[2636]: I0715 23:33:29.810553 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98g7w\" (UniqueName: \"kubernetes.io/projected/2faee96c-ca0f-4547-884b-1082f57853b3-kube-api-access-98g7w\") pod \"2faee96c-ca0f-4547-884b-1082f57853b3\" (UID: \"2faee96c-ca0f-4547-884b-1082f57853b3\") "
Jul 15 23:33:29.813664 kubelet[2636]: I0715 23:33:29.813627 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2faee96c-ca0f-4547-884b-1082f57853b3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2faee96c-ca0f-4547-884b-1082f57853b3" (UID: "2faee96c-ca0f-4547-884b-1082f57853b3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 15 23:33:29.815780 systemd[1]: var-lib-kubelet-pods-2faee96c\x2dca0f\x2d4547\x2d884b\x2d1082f57853b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98g7w.mount: Deactivated successfully.
Jul 15 23:33:29.815890 systemd[1]: var-lib-kubelet-pods-2faee96c\x2dca0f\x2d4547\x2d884b\x2d1082f57853b3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 15 23:33:29.816844 kubelet[2636]: I0715 23:33:29.816051 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2faee96c-ca0f-4547-884b-1082f57853b3-kube-api-access-98g7w" (OuterVolumeSpecName: "kube-api-access-98g7w") pod "2faee96c-ca0f-4547-884b-1082f57853b3" (UID: "2faee96c-ca0f-4547-884b-1082f57853b3"). InnerVolumeSpecName "kube-api-access-98g7w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 23:33:29.911252 kubelet[2636]: I0715 23:33:29.911197 2636 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98g7w\" (UniqueName: \"kubernetes.io/projected/2faee96c-ca0f-4547-884b-1082f57853b3-kube-api-access-98g7w\") on node \"localhost\" DevicePath \"\""
Jul 15 23:33:29.911252 kubelet[2636]: I0715 23:33:29.911239 2636 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2faee96c-ca0f-4547-884b-1082f57853b3-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 15 23:33:30.324008 systemd[1]: Removed slice kubepods-besteffort-pod2faee96c_ca0f_4547_884b_1082f57853b3.slice - libcontainer container kubepods-besteffort-pod2faee96c_ca0f_4547_884b_1082f57853b3.slice.
Jul 15 23:33:30.324104 systemd[1]: kubepods-besteffort-pod2faee96c_ca0f_4547_884b_1082f57853b3.slice: Consumed 1.390s CPU time, 39.9M memory peak.
Jul 15 23:33:30.590015 kubelet[2636]: I0715 23:33:30.589326 2636 scope.go:117] "RemoveContainer" containerID="83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21"
Jul 15 23:33:30.591356 containerd[1498]: time="2025-07-15T23:33:30.591314594Z" level=info msg="RemoveContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\""
Jul 15 23:33:30.596246 containerd[1498]: time="2025-07-15T23:33:30.596215775Z" level=info msg="RemoveContainer for \"83ac79026774bf2178a4cb67738a1ba7a5dd2234bc17a6399d627665fb612c21\" returns successfully"
Jul 15 23:33:31.563439 containerd[1498]: time="2025-07-15T23:33:31.563378471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" id:\"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" pid:4751 exit_status:137 exited_at:{seconds:1752622409 nanos:593120250}"
Jul 15 23:33:31.582876 kubelet[2636]: I0715 23:33:31.582782 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:33:31.617029 containerd[1498]: time="2025-07-15T23:33:31.616989022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271\" id:\"079d547521de862808680b2f3ebac0b55b55a064e2949fe234355484779ebc30\" pid:5683 exited_at:{seconds:1752622411 nanos:616734843}"
Jul 15 23:33:31.653898 containerd[1498]: time="2025-07-15T23:33:31.653849609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27bcfb9c7558e77fb04c90a52110acb89c8abf24a99992bf903e2f1ad0c3e271\" id:\"7320a568b5ff175f0bb69bbed2ea60782f51fae99a8958120f43b708d569f081\" pid:5705 exited_at:{seconds:1752622411 nanos:653233121}"
Jul 15 23:33:32.317198 kubelet[2636]: I0715 23:33:32.316866 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2faee96c-ca0f-4547-884b-1082f57853b3" path="/var/lib/kubelet/pods/2faee96c-ca0f-4547-884b-1082f57853b3/volumes"
Jul 15 23:33:34.240086 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:54444.service - OpenSSH per-connection server daemon (10.0.0.1:54444).
Jul 15 23:33:34.295054 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 54444 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:34.296339 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:34.301387 systemd-logind[1476]: New session 13 of user core.
Jul 15 23:33:34.312153 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:33:34.460541 sshd[5718]: Connection closed by 10.0.0.1 port 54444
Jul 15 23:33:34.461105 sshd-session[5716]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:34.470003 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:54444.service: Deactivated successfully.
Jul 15 23:33:34.475307 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:33:34.476312 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:33:34.484181 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:54448.service - OpenSSH per-connection server daemon (10.0.0.1:54448).
Jul 15 23:33:34.486012 systemd-logind[1476]: Removed session 13.
Jul 15 23:33:34.539338 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:34.540915 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:34.546177 systemd-logind[1476]: New session 14 of user core.
Jul 15 23:33:34.553127 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:33:34.762135 sshd[5734]: Connection closed by 10.0.0.1 port 54448
Jul 15 23:33:34.762463 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:34.776808 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:54448.service: Deactivated successfully.
Jul 15 23:33:34.780728 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:33:34.781622 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:33:34.784741 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460).
Jul 15 23:33:34.785565 systemd-logind[1476]: Removed session 14.
Jul 15 23:33:34.845425 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:34.846721 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:34.850804 systemd-logind[1476]: New session 15 of user core.
Jul 15 23:33:34.862123 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:33:35.702831 sshd[5750]: Connection closed by 10.0.0.1 port 54460
Jul 15 23:33:35.703669 sshd-session[5748]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:35.717604 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:54460.service: Deactivated successfully.
Jul 15 23:33:35.719570 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:33:35.722835 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:33:35.728014 systemd-logind[1476]: Removed session 15.
Jul 15 23:33:35.730146 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:54470.service - OpenSSH per-connection server daemon (10.0.0.1:54470).
Jul 15 23:33:35.786986 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 54470 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:35.787824 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:35.792078 systemd-logind[1476]: New session 16 of user core.
Jul 15 23:33:35.803120 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:33:36.081738 sshd[5773]: Connection closed by 10.0.0.1 port 54470
Jul 15 23:33:36.082160 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:36.090644 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:54470.service: Deactivated successfully.
Jul 15 23:33:36.096051 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:33:36.096986 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:33:36.100804 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:54482.service - OpenSSH per-connection server daemon (10.0.0.1:54482).
Jul 15 23:33:36.103330 systemd-logind[1476]: Removed session 16.
Jul 15 23:33:36.162953 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 54482 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE
Jul 15 23:33:36.164556 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:33:36.175001 systemd-logind[1476]: New session 17 of user core.
Jul 15 23:33:36.180154 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:33:36.316879 containerd[1498]: time="2025-07-15T23:33:36.316835183Z" level=info msg="StopPodSandbox for \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\""
Jul 15 23:33:36.349571 sshd[5787]: Connection closed by 10.0.0.1 port 54482
Jul 15 23:33:36.349818 sshd-session[5785]: pam_unix(sshd:session): session closed for user core
Jul 15 23:33:36.355759 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:54482.service: Deactivated successfully.
Jul 15 23:33:36.359531 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:33:36.362490 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:33:36.363856 systemd-logind[1476]: Removed session 17.
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.370 [WARNING][5809] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.371 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.371 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" iface="eth0" netns=""
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.371 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.371 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.393 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.393 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.393 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.402 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.402 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.403 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:33:36.407853 containerd[1498]: 2025-07-15 23:33:36.405 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.408307 containerd[1498]: time="2025-07-15T23:33:36.407896226Z" level=info msg="TearDown network for sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" successfully"
Jul 15 23:33:36.408307 containerd[1498]: time="2025-07-15T23:33:36.407920508Z" level=info msg="StopPodSandbox for \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" returns successfully"
Jul 15 23:33:36.408501 containerd[1498]: time="2025-07-15T23:33:36.408458427Z" level=info msg="RemovePodSandbox for \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\""
Jul 15 23:33:36.411681 containerd[1498]: time="2025-07-15T23:33:36.411638416Z" level=info msg="Forcibly stopping sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\""
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.443 [WARNING][5837] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" WorkloadEndpoint="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.443 [INFO][5837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.443 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" iface="eth0" netns=""
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.443 [INFO][5837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.443 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be"
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.463 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0"
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.464 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.464 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.474 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.474 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" HandleID="k8s-pod-network.dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Workload="localhost-k8s-calico--apiserver--87f5777cd--jfddx-eth0" Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.476 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:33:36.479949 containerd[1498]: 2025-07-15 23:33:36.478 [INFO][5837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be" Jul 15 23:33:36.480280 containerd[1498]: time="2025-07-15T23:33:36.480001903Z" level=info msg="TearDown network for sandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" successfully" Jul 15 23:33:36.493237 containerd[1498]: time="2025-07-15T23:33:36.493191213Z" level=info msg="Ensure that sandbox dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be in task-service has been cleanup successfully" Jul 15 23:33:36.499336 containerd[1498]: time="2025-07-15T23:33:36.499286973Z" level=info msg="RemovePodSandbox \"dd4f8ef5eeec4ecbd82d0af0bacd15d31756939f0ee3cfa796784ef299c243be\" returns successfully" Jul 15 23:33:38.728243 kubelet[2636]: I0715 23:33:38.728108 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:33:39.506975 containerd[1498]: time="2025-07-15T23:33:39.506903189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b6c87d3554b968ed77a0b191c262b5661752a4a018214d6db21d6c7494a66e6\" 
id:\"fec29ac8614ef3b1b642937316a0878f8e32d51603d875c0e963a4e0f8773d5d\" pid:5871 exited_at:{seconds:1752622419 nanos:506586007}" Jul 15 23:33:41.367323 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:54486.service - OpenSSH per-connection server daemon (10.0.0.1:54486). Jul 15 23:33:41.418765 sshd[5894]: Accepted publickey for core from 10.0.0.1 port 54486 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:33:41.420410 sshd-session[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:33:41.424415 systemd-logind[1476]: New session 18 of user core. Jul 15 23:33:41.437140 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:33:41.567075 sshd[5896]: Connection closed by 10.0.0.1 port 54486 Jul 15 23:33:41.567427 sshd-session[5894]: pam_unix(sshd:session): session closed for user core Jul 15 23:33:41.571333 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:54486.service: Deactivated successfully. Jul 15 23:33:41.574649 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:33:41.575793 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:33:41.577472 systemd-logind[1476]: Removed session 18. Jul 15 23:33:46.584133 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:35258.service - OpenSSH per-connection server daemon (10.0.0.1:35258). Jul 15 23:33:46.659098 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 35258 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:33:46.660962 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:33:46.665626 systemd-logind[1476]: New session 19 of user core. Jul 15 23:33:46.680189 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 15 23:33:46.808796 sshd[5917]: Connection closed by 10.0.0.1 port 35258 Jul 15 23:33:46.809221 sshd-session[5915]: pam_unix(sshd:session): session closed for user core Jul 15 23:33:46.813049 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit. Jul 15 23:33:46.813307 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:35258.service: Deactivated successfully. Jul 15 23:33:46.817351 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 23:33:46.819444 systemd-logind[1476]: Removed session 19. Jul 15 23:33:51.821824 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272). Jul 15 23:33:51.869279 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:33:51.870589 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:33:51.875086 systemd-logind[1476]: New session 20 of user core. Jul 15 23:33:51.886082 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 23:33:52.020249 sshd[5936]: Connection closed by 10.0.0.1 port 35272 Jul 15 23:33:52.020649 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jul 15 23:33:52.026345 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:35272.service: Deactivated successfully. Jul 15 23:33:52.028680 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 23:33:52.029882 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit. Jul 15 23:33:52.034371 systemd-logind[1476]: Removed session 20.