Jul 9 23:48:23.848338 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 23:48:23.848360 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:48:23.848371 kernel: KASLR enabled
Jul 9 23:48:23.848376 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:48:23.848382 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 9 23:48:23.848387 kernel: random: crng init done
Jul 9 23:48:23.848394 kernel: secureboot: Secure boot disabled
Jul 9 23:48:23.848399 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:48:23.848405 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 9 23:48:23.848412 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 23:48:23.848418 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848424 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848430 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848435 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848442 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848449 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848456 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848462 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848468 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:48:23.848473 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 23:48:23.848479 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:48:23.848485 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.848491 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
Jul 9 23:48:23.848497 kernel: Zone ranges:
Jul 9 23:48:23.848503 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.848510 kernel: DMA32 empty
Jul 9 23:48:23.848516 kernel: Normal empty
Jul 9 23:48:23.848522 kernel: Device empty
Jul 9 23:48:23.848528 kernel: Movable zone start for each node
Jul 9 23:48:23.848534 kernel: Early memory node ranges
Jul 9 23:48:23.848540 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 9 23:48:23.848546 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 9 23:48:23.848552 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 9 23:48:23.848558 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 9 23:48:23.848564 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 9 23:48:23.848570 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 9 23:48:23.848576 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 9 23:48:23.848583 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 9 23:48:23.848589 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 9 23:48:23.848595 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 9 23:48:23.848604 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 9 23:48:23.848610 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 9 23:48:23.848617 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 23:48:23.848625 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:48:23.848631 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 23:48:23.848638 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:48:23.848644 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:48:23.848650 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:48:23.848657 kernel: psci: Trusted OS migration not required
Jul 9 23:48:23.848663 kernel: psci: SMC Calling Convention v1.1
Jul 9 23:48:23.848669 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 23:48:23.848676 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:48:23.848682 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:48:23.848690 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 23:48:23.848696 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:48:23.848702 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:48:23.848709 kernel: CPU features: detected: Spectre-v4
Jul 9 23:48:23.848720 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:48:23.848727 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:48:23.848733 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:48:23.848739 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 23:48:23.848745 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:48:23.848752 kernel: alternatives: applying boot alternatives
Jul 9 23:48:23.848759 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:48:23.848767 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:48:23.848774 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:48:23.848780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:48:23.848787 kernel: Fallback order for Node 0: 0
Jul 9 23:48:23.848793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 9 23:48:23.848799 kernel: Policy zone: DMA
Jul 9 23:48:23.848806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:48:23.848812 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 9 23:48:23.848818 kernel: software IO TLB: area num 4.
Jul 9 23:48:23.848824 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 9 23:48:23.848831 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
Jul 9 23:48:23.848837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 23:48:23.848845 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:48:23.848852 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:48:23.848859 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 23:48:23.848865 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:48:23.848871 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:48:23.848878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:48:23.848884 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 23:48:23.848891 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:48:23.848897 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:48:23.848904 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:48:23.848910 kernel: GICv3: 256 SPIs implemented
Jul 9 23:48:23.848921 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:48:23.848927 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:48:23.848934 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 23:48:23.848940 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 9 23:48:23.848946 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 23:48:23.848954 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 23:48:23.848960 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 23:48:23.848967 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 9 23:48:23.848973 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 9 23:48:23.848979 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 9 23:48:23.848986 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:48:23.848992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.849000 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 23:48:23.849007 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 23:48:23.849014 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 23:48:23.849020 kernel: arm-pv: using stolen time PV
Jul 9 23:48:23.849027 kernel: Console: colour dummy device 80x25
Jul 9 23:48:23.849033 kernel: ACPI: Core revision 20240827
Jul 9 23:48:23.849040 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 23:48:23.849047 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:48:23.849053 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:48:23.849061 kernel: landlock: Up and running.
Jul 9 23:48:23.849068 kernel: SELinux: Initializing.
Jul 9 23:48:23.849074 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.849081 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.849088 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:48:23.849094 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:48:23.849101 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:48:23.849108 kernel: Remapping and enabling EFI services.
Jul 9 23:48:23.849176 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:48:23.849185 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:48:23.849199 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 23:48:23.849206 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 9 23:48:23.849215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.849222 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 23:48:23.849229 kernel: Detected PIPT I-cache on CPU2
Jul 9 23:48:23.849236 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 23:48:23.849243 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 9 23:48:23.849251 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.849258 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 23:48:23.849264 kernel: Detected PIPT I-cache on CPU3
Jul 9 23:48:23.849271 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 23:48:23.849278 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 9 23:48:23.849285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:48:23.849292 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 23:48:23.849298 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 23:48:23.849312 kernel: SMP: Total of 4 processors activated.
Jul 9 23:48:23.849320 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:48:23.849328 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:48:23.849336 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:48:23.849343 kernel: CPU features: detected: Common not Private translations
Jul 9 23:48:23.849350 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:48:23.849357 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 23:48:23.849363 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:48:23.849371 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:48:23.849381 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:48:23.849388 kernel: CPU features: detected: RAS Extension Support
Jul 9 23:48:23.849398 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:48:23.849405 kernel: alternatives: applying system-wide alternatives
Jul 9 23:48:23.849412 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 9 23:48:23.849419 kernel: Memory: 2440420K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 125920K reserved, 0K cma-reserved)
Jul 9 23:48:23.849426 kernel: devtmpfs: initialized
Jul 9 23:48:23.849433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:48:23.849440 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.849447 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:48:23.849455 kernel: 0 pages in range for non-PLT usage
Jul 9 23:48:23.849465 kernel: 508448 pages in range for PLT usage
Jul 9 23:48:23.849472 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:48:23.849479 kernel: SMBIOS 3.0.0 present.
Jul 9 23:48:23.849486 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 23:48:23.849493 kernel: DMI: Memory slots populated: 1/1
Jul 9 23:48:23.849499 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:48:23.849506 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:48:23.849513 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:48:23.849520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:48:23.849529 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:48:23.849536 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Jul 9 23:48:23.849542 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:48:23.849549 kernel: cpuidle: using governor menu
Jul 9 23:48:23.849556 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:48:23.849563 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:48:23.849570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:48:23.849576 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:48:23.849583 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:48:23.849592 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:48:23.849599 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:48:23.849606 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:48:23.849613 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:48:23.849620 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:48:23.849627 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:48:23.849634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:48:23.849640 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:48:23.849647 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:48:23.849656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:48:23.849663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:48:23.849669 kernel: ACPI: Interpreter enabled
Jul 9 23:48:23.849676 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:48:23.849683 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 23:48:23.849690 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:48:23.849697 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:48:23.849704 kernel: ACPI: CPU2 has been hot-added
Jul 9 23:48:23.849711 kernel: ACPI: CPU3 has been hot-added
Jul 9 23:48:23.849718 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:48:23.849726 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:48:23.849733 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 23:48:23.849873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 23:48:23.849941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 23:48:23.850002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 23:48:23.850060 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 23:48:23.850131 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 23:48:23.850146 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 23:48:23.850153 kernel: PCI host bridge to bus 0000:00
Jul 9 23:48:23.850225 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 23:48:23.850283 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 23:48:23.850348 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 23:48:23.850403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 23:48:23.850477 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 9 23:48:23.850551 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 23:48:23.850611 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 9 23:48:23.850690 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 9 23:48:23.850752 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 23:48:23.850812 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 9 23:48:23.850871 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 9 23:48:23.850933 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 9 23:48:23.850985 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 23:48:23.851038 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 23:48:23.851088 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 23:48:23.851098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 23:48:23.851105 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 23:48:23.851126 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 23:48:23.851134 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 23:48:23.851144 kernel: iommu: Default domain type: Translated
Jul 9 23:48:23.851151 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:48:23.851158 kernel: efivars: Registered efivars operations
Jul 9 23:48:23.851165 kernel: vgaarb: loaded
Jul 9 23:48:23.851172 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:48:23.851179 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:48:23.851186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:48:23.851193 kernel: pnp: PnP ACPI init
Jul 9 23:48:23.851275 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 23:48:23.851288 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 23:48:23.851295 kernel: NET: Registered PF_INET protocol family
Jul 9 23:48:23.851308 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:48:23.851316 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:48:23.851323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:48:23.851330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:48:23.851337 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:48:23.851344 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:48:23.851354 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.851361 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:48:23.851368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:48:23.851375 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:48:23.851382 kernel: kvm [1]: HYP mode not available
Jul 9 23:48:23.851389 kernel: Initialise system trusted keyrings
Jul 9 23:48:23.851395 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:48:23.851402 kernel: Key type asymmetric registered
Jul 9 23:48:23.851409 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:48:23.851417 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:48:23.851424 kernel: io scheduler mq-deadline registered
Jul 9 23:48:23.851431 kernel: io scheduler kyber registered
Jul 9 23:48:23.851438 kernel: io scheduler bfq registered
Jul 9 23:48:23.851445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 23:48:23.851452 kernel: ACPI: button: Power Button [PWRB]
Jul 9 23:48:23.851459 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 23:48:23.851527 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 23:48:23.851537 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:48:23.851546 kernel: thunder_xcv, ver 1.0
Jul 9 23:48:23.851553 kernel: thunder_bgx, ver 1.0
Jul 9 23:48:23.851560 kernel: nicpf, ver 1.0
Jul 9 23:48:23.851567 kernel: nicvf, ver 1.0
Jul 9 23:48:23.851655 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:48:23.851712 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:48:23 UTC (1752104903)
Jul 9 23:48:23.851722 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:48:23.851729 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:48:23.851738 kernel: watchdog: NMI not fully supported
Jul 9 23:48:23.851745 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:48:23.851752 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:48:23.851759 kernel: Segment Routing with IPv6
Jul 9 23:48:23.851766 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:48:23.851773 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:48:23.851780 kernel: Key type dns_resolver registered
Jul 9 23:48:23.851788 kernel: registered taskstats version 1
Jul 9 23:48:23.851795 kernel: Loading compiled-in X.509 certificates
Jul 9 23:48:23.851802 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:48:23.851811 kernel: Demotion targets for Node 0: null
Jul 9 23:48:23.851818 kernel: Key type .fscrypt registered
Jul 9 23:48:23.851825 kernel: Key type fscrypt-provisioning registered
Jul 9 23:48:23.851832 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:48:23.851839 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:48:23.851847 kernel: ima: No architecture policies found
Jul 9 23:48:23.851854 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:48:23.851861 kernel: clk: Disabling unused clocks
Jul 9 23:48:23.851870 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:48:23.851877 kernel: Warning: unable to open an initial console.
Jul 9 23:48:23.851884 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:48:23.851891 kernel: Run /init as init process
Jul 9 23:48:23.851898 kernel: with arguments:
Jul 9 23:48:23.851905 kernel: /init
Jul 9 23:48:23.851912 kernel: with environment:
Jul 9 23:48:23.851919 kernel: HOME=/
Jul 9 23:48:23.851926 kernel: TERM=linux
Jul 9 23:48:23.851934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:48:23.851942 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:48:23.851952 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:48:23.851960 systemd[1]: Detected virtualization kvm.
Jul 9 23:48:23.851967 systemd[1]: Detected architecture arm64.
Jul 9 23:48:23.851975 systemd[1]: Running in initrd.
Jul 9 23:48:23.851982 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:48:23.851991 systemd[1]: Hostname set to .
Jul 9 23:48:23.851999 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:48:23.852006 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:48:23.852014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:48:23.852022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:48:23.852030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:48:23.852038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:48:23.852045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:48:23.852055 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:48:23.852064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:48:23.852072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:48:23.852079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:48:23.852087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:48:23.852095 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:48:23.852102 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:48:23.852111 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:48:23.852140 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:48:23.852147 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:48:23.852155 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:48:23.852163 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:48:23.852171 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:48:23.852179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:48:23.852187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:48:23.852194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:48:23.852204 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:48:23.852212 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:48:23.852220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:48:23.852228 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:48:23.852236 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:48:23.852244 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:48:23.852251 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:48:23.852259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:48:23.852268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:48:23.852276 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:48:23.852284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:48:23.852292 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:48:23.852300 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:48:23.852334 systemd-journald[244]: Collecting audit messages is disabled.
Jul 9 23:48:23.852353 systemd-journald[244]: Journal started
Jul 9 23:48:23.852373 systemd-journald[244]: Runtime Journal (/run/log/journal/7cf3f3cf28524c0794555263a80b0f47) is 6M, max 48.5M, 42.4M free.
Jul 9 23:48:23.841336 systemd-modules-load[247]: Inserted module 'overlay'
Jul 9 23:48:23.855539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:48:23.858143 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:48:23.861134 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:48:23.861706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:48:23.865235 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 9 23:48:23.866224 kernel: Bridge firewalling registered
Jul 9 23:48:23.866505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:48:23.868517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:48:23.871715 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:48:23.878289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:48:23.881656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:48:23.884464 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:48:23.886770 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:48:23.890429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:48:23.900300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:48:23.903079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:48:23.904416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:48:23.907248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:48:23.933164 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:48:23.950502 systemd-resolved[287]: Positive Trust Anchors:
Jul 9 23:48:23.950522 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:48:23.950554 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:48:23.956546 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 9 23:48:23.957917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:48:23.960644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:48:24.043153 kernel: SCSI subsystem initialized Jul 9 23:48:24.052147 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:48:24.060149 kernel: iscsi: registered transport (tcp) Jul 9 23:48:24.075154 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:48:24.075216 kernel: QLogic iSCSI HBA Driver Jul 9 23:48:24.093252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:48:24.121236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:48:24.122950 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:48:24.176968 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:48:24.179557 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:48:24.245157 kernel: raid6: neonx8 gen() 15793 MB/s Jul 9 23:48:24.262140 kernel: raid6: neonx4 gen() 15788 MB/s Jul 9 23:48:24.279139 kernel: raid6: neonx2 gen() 13190 MB/s Jul 9 23:48:24.296156 kernel: raid6: neonx1 gen() 10366 MB/s Jul 9 23:48:24.313143 kernel: raid6: int64x8 gen() 6854 MB/s Jul 9 23:48:24.330139 kernel: raid6: int64x4 gen() 7327 MB/s Jul 9 23:48:24.347139 kernel: raid6: int64x2 gen() 6096 MB/s Jul 9 23:48:24.364314 kernel: raid6: int64x1 gen() 5040 MB/s Jul 9 23:48:24.364329 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s Jul 9 23:48:24.382276 kernel: raid6: .... xor() 12003 MB/s, rmw enabled
Jul 9 23:48:24.382312 kernel: raid6: using neon recovery algorithm Jul 9 23:48:24.388583 kernel: xor: measuring software checksum speed Jul 9 23:48:24.388612 kernel: 8regs : 20520 MB/sec Jul 9 23:48:24.388622 kernel: 32regs : 21687 MB/sec Jul 9 23:48:24.389245 kernel: arm64_neon : 28013 MB/sec Jul 9 23:48:24.389256 kernel: xor: using function: arm64_neon (28013 MB/sec) Jul 9 23:48:24.449150 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:48:24.455385 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:48:24.458193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:48:24.488545 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jul 9 23:48:24.492729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:48:24.494839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:48:24.519955 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Jul 9 23:48:24.546961 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:48:24.549445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:48:24.611063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:48:24.613704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:48:24.665676 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 9 23:48:24.667340 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 23:48:24.671945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:48:24.672060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:24.684436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 23:48:24.684462 kernel: GPT:9289727 != 19775487 Jul 9 23:48:24.684480 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 23:48:24.684490 kernel: GPT:9289727 != 19775487 Jul 9 23:48:24.684499 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:48:24.684507 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:24.684408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:48:24.686763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:48:24.718144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 23:48:24.719692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:24.722623 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:48:24.736818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 23:48:24.743391 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 23:48:24.744660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 23:48:24.754244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:48:24.755609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:48:24.757872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:48:24.760259 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:48:24.763276 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:48:24.765234 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:48:24.786965 disk-uuid[588]: Primary Header is updated. 
Jul 9 23:48:24.786965 disk-uuid[588]: Secondary Entries is updated. Jul 9 23:48:24.786965 disk-uuid[588]: Secondary Header is updated. Jul 9 23:48:24.791166 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:48:24.795141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:25.808150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:48:25.808627 disk-uuid[594]: The operation has completed successfully. Jul 9 23:48:25.831030 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:48:25.831142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:48:25.862107 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:48:25.879308 sh[608]: Success Jul 9 23:48:25.894892 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:48:25.894942 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:48:25.896801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 23:48:25.905147 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 23:48:25.932946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:48:25.935959 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:48:25.952188 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 9 23:48:25.958530 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 23:48:25.958591 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (621) Jul 9 23:48:25.961293 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b Jul 9 23:48:25.961324 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:25.961334 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 23:48:25.965375 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:48:25.966707 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:48:25.968097 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:48:25.968918 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:48:25.970574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:48:25.994178 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652) Jul 9 23:48:25.996538 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:25.996583 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:25.997499 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:26.004150 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.005973 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:48:26.008263 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:48:26.082147 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 9 23:48:26.085625 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:48:26.136970 systemd-networkd[799]: lo: Link UP Jul 9 23:48:26.136983 systemd-networkd[799]: lo: Gained carrier Jul 9 23:48:26.137736 systemd-networkd[799]: Enumeration completed Jul 9 23:48:26.138441 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:48:26.138791 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:26.138795 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:48:26.139488 systemd-networkd[799]: eth0: Link UP Jul 9 23:48:26.139491 systemd-networkd[799]: eth0: Gained carrier Jul 9 23:48:26.139500 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:48:26.140619 systemd[1]: Reached target network.target - Network. 
Jul 9 23:48:26.156190 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:48:26.174843 ignition[695]: Ignition 2.21.0 Jul 9 23:48:26.174856 ignition[695]: Stage: fetch-offline Jul 9 23:48:26.174897 ignition[695]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.174905 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.175098 ignition[695]: parsed url from cmdline: "" Jul 9 23:48:26.175101 ignition[695]: no config URL provided Jul 9 23:48:26.175106 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:48:26.175131 ignition[695]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:48:26.175155 ignition[695]: op(1): [started] loading QEMU firmware config module Jul 9 23:48:26.175160 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 23:48:26.180152 ignition[695]: op(1): [finished] loading QEMU firmware config module Jul 9 23:48:26.228445 ignition[695]: parsing config with SHA512: 71fc7352940de07fdbb7cedf523beaa0e7895247a814d1f15628f8209e3fa1f846ceb99d386d8a23ef8f43b416b7b63111ce4b4f0d0415e0a347d73e41abc393 Jul 9 23:48:26.233874 unknown[695]: fetched base config from "system" Jul 9 23:48:26.233890 unknown[695]: fetched user config from "qemu" Jul 9 23:48:26.234269 ignition[695]: fetch-offline: fetch-offline passed Jul 9 23:48:26.234336 ignition[695]: Ignition finished successfully Jul 9 23:48:26.238074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:48:26.242964 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 23:48:26.243961 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 9 23:48:26.274338 ignition[812]: Ignition 2.21.0 Jul 9 23:48:26.274355 ignition[812]: Stage: kargs Jul 9 23:48:26.274503 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.274512 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.275555 ignition[812]: kargs: kargs passed Jul 9 23:48:26.278486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:48:26.275617 ignition[812]: Ignition finished successfully Jul 9 23:48:26.280652 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:48:26.307628 ignition[819]: Ignition 2.21.0 Jul 9 23:48:26.307647 ignition[819]: Stage: disks Jul 9 23:48:26.307805 ignition[819]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.307815 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.309830 ignition[819]: disks: disks passed Jul 9 23:48:26.313871 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:48:26.309911 ignition[819]: Ignition finished successfully Jul 9 23:48:26.315148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:48:26.316961 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:48:26.318742 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:48:26.320646 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:48:26.322806 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:48:26.325537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:48:26.357581 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 23:48:26.362732 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:48:26.365072 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 9 23:48:26.437144 kernel: EXT4-fs (vda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none. Jul 9 23:48:26.437344 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:48:26.438820 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:48:26.444837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:48:26.446833 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:48:26.447939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:48:26.448007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:48:26.448051 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:48:26.463373 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:48:26.467159 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:48:26.470756 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (837) Jul 9 23:48:26.470780 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.473029 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:26.473069 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:26.478092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:48:26.542691 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:48:26.547393 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:48:26.551499 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:48:26.554835 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:48:26.633824 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:48:26.636102 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:48:26.637926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:48:26.657147 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.678024 ignition[951]: INFO : Ignition 2.21.0 Jul 9 23:48:26.678024 ignition[951]: INFO : Stage: mount Jul 9 23:48:26.679741 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:26.679741 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:26.679360 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:48:26.684843 ignition[951]: INFO : mount: mount passed Jul 9 23:48:26.684843 ignition[951]: INFO : Ignition finished successfully Jul 9 23:48:26.682187 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:48:26.684910 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:48:26.957192 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:48:26.958711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 9 23:48:26.995028 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 9 23:48:26.995082 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:48:26.995093 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:48:26.996760 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:48:26.999409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:48:27.023942 ignition[982]: INFO : Ignition 2.21.0 Jul 9 23:48:27.023942 ignition[982]: INFO : Stage: files Jul 9 23:48:27.025964 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:27.025964 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:27.028204 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:48:27.029314 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:48:27.029314 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:48:27.032778 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:48:27.034195 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:48:27.034195 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:48:27.033362 unknown[982]: wrote ssh authorized keys file for user: core Jul 9 23:48:27.038156 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 23:48:27.038156 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 9 23:48:27.075516 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 23:48:27.193676 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 23:48:27.193676 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:48:27.197712 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:48:27.214020 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 9 23:48:27.405307 systemd-networkd[799]: eth0: Gained IPv6LL Jul 9 23:48:27.672278 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 9 23:48:28.061082 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:48:28.061082 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 9 23:48:28.065450 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:48:28.065450 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 9 23:48:28.069496 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:48:28.080160 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:48:28.082034 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:48:28.084283 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:48:28.084283 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:48:28.084283 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:48:28.084283 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:48:28.084283 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:48:28.084283 ignition[982]: INFO : files: files passed Jul 9 23:48:28.084283 ignition[982]: INFO : Ignition finished successfully Jul 9 23:48:28.087550 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:48:28.091625 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:48:28.095241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:48:28.108523 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:48:28.108630 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 23:48:28.111802 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 23:48:28.113568 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:28.113568 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:28.116618 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:48:28.116194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:48:28.117962 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:48:28.121097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:48:28.157905 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:48:28.158021 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:48:28.160360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:48:28.162227 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:48:28.164105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:48:28.165015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:48:28.196254 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:48:28.198905 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 23:48:28.226756 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:48:28.228031 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:48:28.230165 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 9 23:48:28.231981 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:48:28.232109 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:48:28.234635 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:48:28.236678 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:48:28.238330 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:48:28.240079 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:48:28.242035 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:48:28.244011 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:48:28.245967 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:48:28.247855 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:48:28.249912 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:48:28.251936 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:48:28.254050 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:48:28.255640 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:48:28.255776 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:48:28.258080 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:48:28.260195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:48:28.262247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 23:48:28.262369 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:48:28.264616 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:48:28.264739 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 9 23:48:28.267836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:48:28.267985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:48:28.270161 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:48:28.271870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:48:28.276204 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:48:28.277539 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:48:28.279826 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:48:28.281513 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:48:28.281601 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:48:28.283237 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:48:28.283331 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:48:28.285040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:48:28.285177 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:48:28.287097 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:48:28.287220 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:48:28.289791 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:48:28.292494 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:48:28.293702 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:48:28.293826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:48:28.295758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:48:28.295857 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:48:28.301210 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 9 23:48:28.314346 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:48:28.322696 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:48:28.328418 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:48:28.329080 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:48:28.332967 ignition[1038]: INFO : Ignition 2.21.0 Jul 9 23:48:28.332967 ignition[1038]: INFO : Stage: umount Jul 9 23:48:28.332967 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:48:28.332967 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:48:28.332967 ignition[1038]: INFO : umount: umount passed Jul 9 23:48:28.332967 ignition[1038]: INFO : Ignition finished successfully Jul 9 23:48:28.332510 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:48:28.332617 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:48:28.335639 systemd[1]: Stopped target network.target - Network. Jul 9 23:48:28.337476 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:48:28.337553 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:48:28.339655 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:48:28.339710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:48:28.341259 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:48:28.341337 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 23:48:28.343090 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:48:28.343156 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:48:28.344921 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:48:28.344986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 9 23:48:28.346970 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 23:48:28.348695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 23:48:28.357734 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 23:48:28.357994 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 23:48:28.362270 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 23:48:28.362513 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 23:48:28.362630 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 23:48:28.369238 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 23:48:28.372401 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 23:48:28.373655 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 23:48:28.373698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:48:28.375915 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 23:48:28.376895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 23:48:28.376957 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:48:28.378873 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:48:28.378921 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:48:28.381593 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 23:48:28.381637 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:48:28.384844 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 23:48:28.384892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:48:28.388872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:48:28.392023 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:48:28.392084 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:48:28.410869 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 23:48:28.411947 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 23:48:28.414203 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 23:48:28.414347 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:48:28.416769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 23:48:28.416824 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:48:28.417966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 23:48:28.417997 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:48:28.420102 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 23:48:28.420186 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:48:28.422858 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 23:48:28.422907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:48:28.425867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 23:48:28.425924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:48:28.430030 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 23:48:28.431985 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 23:48:28.432055 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:48:28.435211 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 23:48:28.435258 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:48:28.438285 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 23:48:28.438338 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:48:28.441673 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 23:48:28.441721 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:48:28.444110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:48:28.444170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:48:28.448449 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 23:48:28.448502 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 9 23:48:28.448533 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 23:48:28.448563 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:48:28.448867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 23:48:28.448957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 23:48:28.451072 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 23:48:28.453269 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 23:48:28.471890 systemd[1]: Switching root.
Jul 9 23:48:28.511370 systemd-journald[244]: Journal stopped
Jul 9 23:48:29.360044 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 9 23:48:29.360096 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 23:48:29.360111 kernel: SELinux: policy capability open_perms=1
Jul 9 23:48:29.360147 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 23:48:29.360156 kernel: SELinux: policy capability always_check_network=0
Jul 9 23:48:29.360166 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 23:48:29.360178 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 23:48:29.360187 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 23:48:29.360201 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 23:48:29.360212 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 23:48:29.360221 kernel: audit: type=1403 audit(1752104908.687:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 23:48:29.360235 systemd[1]: Successfully loaded SELinux policy in 52.603ms.
Jul 9 23:48:29.360333 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.482ms.
Jul 9 23:48:29.360354 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:48:29.360367 systemd[1]: Detected virtualization kvm.
Jul 9 23:48:29.360376 systemd[1]: Detected architecture arm64.
Jul 9 23:48:29.360390 systemd[1]: Detected first boot.
Jul 9 23:48:29.360399 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:48:29.360412 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 23:48:29.360422 zram_generator::config[1084]: No configuration found.
Jul 9 23:48:29.360434 systemd[1]: Populated /etc with preset unit settings.
Jul 9 23:48:29.360445 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 23:48:29.360455 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 23:48:29.360465 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 23:48:29.360479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:48:29.360489 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 23:48:29.360500 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 23:48:29.360510 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 23:48:29.360521 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 23:48:29.360530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 23:48:29.360546 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 23:48:29.360557 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 23:48:29.360567 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 23:48:29.360580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:48:29.360601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:48:29.360613 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 23:48:29.360624 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 23:48:29.360635 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 23:48:29.360645 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:48:29.360656 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 9 23:48:29.360666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:48:29.360676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:48:29.360686 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 23:48:29.360698 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 23:48:29.360708 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:48:29.360717 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 23:48:29.360727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:48:29.360738 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:48:29.360748 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:48:29.360758 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:48:29.360769 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 23:48:29.360778 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 23:48:29.360790 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 23:48:29.360800 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:48:29.360810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:48:29.360821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:48:29.360831 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 23:48:29.360841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 23:48:29.360852 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 23:48:29.360862 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 23:48:29.360873 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 23:48:29.360884 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 23:48:29.360895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 23:48:29.360912 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 23:48:29.360927 systemd[1]: Reached target machines.target - Containers.
Jul 9 23:48:29.360937 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 23:48:29.360947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:48:29.360960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:48:29.360970 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 23:48:29.360981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:48:29.360991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:48:29.361002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:48:29.361012 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 23:48:29.361022 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:48:29.361032 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 23:48:29.361042 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 23:48:29.361052 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 23:48:29.361063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 23:48:29.361073 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 23:48:29.361085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:48:29.361094 kernel: loop: module loaded
Jul 9 23:48:29.361104 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:48:29.361136 kernel: fuse: init (API version 7.41)
Jul 9 23:48:29.361149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:48:29.361159 kernel: ACPI: bus type drm_connector registered
Jul 9 23:48:29.361169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:48:29.361180 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 23:48:29.361192 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 23:48:29.361202 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:48:29.361213 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 23:48:29.361222 systemd[1]: Stopped verity-setup.service.
Jul 9 23:48:29.361236 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 23:48:29.361249 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 23:48:29.361259 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 23:48:29.361271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 23:48:29.361283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 23:48:29.361299 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 23:48:29.361337 systemd-journald[1149]: Collecting audit messages is disabled.
Jul 9 23:48:29.361359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 23:48:29.361370 systemd-journald[1149]: Journal started
Jul 9 23:48:29.361393 systemd-journald[1149]: Runtime Journal (/run/log/journal/7cf3f3cf28524c0794555263a80b0f47) is 6M, max 48.5M, 42.4M free.
Jul 9 23:48:29.113911 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 23:48:29.138157 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 23:48:29.138552 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 23:48:29.365489 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:48:29.366350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:48:29.368034 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 23:48:29.370310 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 23:48:29.371740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:48:29.373157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:48:29.374604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:48:29.374762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:48:29.376242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:48:29.376454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:48:29.377902 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 23:48:29.378075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 23:48:29.379472 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:48:29.379629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:48:29.382631 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:48:29.384085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:48:29.385699 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 23:48:29.387398 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 23:48:29.400272 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:48:29.402848 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 23:48:29.404918 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 23:48:29.406125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 23:48:29.406162 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:48:29.408127 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 23:48:29.415230 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 23:48:29.416446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:48:29.417937 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 23:48:29.420204 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 23:48:29.421563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:48:29.423246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 23:48:29.424475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:48:29.426262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:48:29.428364 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 23:48:29.435241 systemd-journald[1149]: Time spent on flushing to /var/log/journal/7cf3f3cf28524c0794555263a80b0f47 is 16.412ms for 886 entries.
Jul 9 23:48:29.435241 systemd-journald[1149]: System Journal (/var/log/journal/7cf3f3cf28524c0794555263a80b0f47) is 8M, max 195.6M, 187.6M free.
Jul 9 23:48:29.455487 systemd-journald[1149]: Received client request to flush runtime journal.
Jul 9 23:48:29.455658 kernel: loop0: detected capacity change from 0 to 211168
Jul 9 23:48:29.433691 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:48:29.439213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:48:29.442506 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 23:48:29.443985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:48:29.458010 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 23:48:29.462340 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:48:29.464922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:48:29.468482 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:48:29.470704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:48:29.479170 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:48:29.482985 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 9 23:48:29.482995 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 9 23:48:29.487755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:48:29.491423 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 23:48:29.502352 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:48:29.507167 kernel: loop1: detected capacity change from 0 to 138376
Jul 9 23:48:29.523650 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:48:29.526356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:48:29.544148 kernel: loop2: detected capacity change from 0 to 107312
Jul 9 23:48:29.554422 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Jul 9 23:48:29.554766 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Jul 9 23:48:29.558639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:48:29.573182 kernel: loop3: detected capacity change from 0 to 211168
Jul 9 23:48:29.580135 kernel: loop4: detected capacity change from 0 to 138376
Jul 9 23:48:29.588743 kernel: loop5: detected capacity change from 0 to 107312
Jul 9 23:48:29.593640 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 23:48:29.594129 (sd-merge)[1229]: Merged extensions into '/usr'.
Jul 9 23:48:29.597727 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:48:29.597744 systemd[1]: Reloading...
Jul 9 23:48:29.657193 zram_generator::config[1256]: No configuration found.
Jul 9 23:48:29.714075 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:48:29.737582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:48:29.799348 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:48:29.799480 systemd[1]: Reloading finished in 201 ms.
Jul 9 23:48:29.830806 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:48:29.832393 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 23:48:29.848433 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:48:29.850444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:48:29.862080 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:48:29.862094 systemd[1]: Reloading...
Jul 9 23:48:29.865559 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 23:48:29.865590 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 23:48:29.865795 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:48:29.865974 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:48:29.866585 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:48:29.866793 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jul 9 23:48:29.866838 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jul 9 23:48:29.870829 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:48:29.870841 systemd-tmpfiles[1290]: Skipping /boot
Jul 9 23:48:29.880340 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:48:29.880355 systemd-tmpfiles[1290]: Skipping /boot
Jul 9 23:48:29.907214 zram_generator::config[1314]: No configuration found.
Jul 9 23:48:29.972248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:48:30.037046 systemd[1]: Reloading finished in 174 ms.
Jul 9 23:48:30.061554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:48:30.068379 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:48:30.074218 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:48:30.076528 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:48:30.078682 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:48:30.082557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:48:30.088322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:48:30.090527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:48:30.099892 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:48:30.103681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:48:30.110109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:48:30.111563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:48:30.115356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:48:30.117578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:48:30.118732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:48:30.118842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:48:30.120820 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:48:30.122865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:48:30.128487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:48:30.128635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:48:30.128731 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:48:30.129894 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Jul 9 23:48:30.131547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:48:30.132896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:48:30.134260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:48:30.135188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:48:30.136279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:48:30.136489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:48:30.138460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:48:30.138629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:48:30.140406 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:48:30.140557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:48:30.145230 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:48:30.149251 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:48:30.151156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:48:30.153785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:48:30.153943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:48:30.156366 augenrules[1390]: No rules
Jul 9 23:48:30.156984 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:48:30.157935 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:48:30.161337 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:48:30.163053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:48:30.174735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:48:30.177387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:48:30.177465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:48:30.182380 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 23:48:30.183848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:48:30.212516 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 9 23:48:30.264072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 23:48:30.267246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:48:30.305170 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:48:30.339037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:48:30.366447 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 23:48:30.369441 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:48:30.383291 systemd-networkd[1423]: lo: Link UP
Jul 9 23:48:30.383304 systemd-networkd[1423]: lo: Gained carrier
Jul 9 23:48:30.384211 systemd-networkd[1423]: Enumeration completed
Jul 9 23:48:30.384325 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:48:30.387648 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:48:30.387656 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:48:30.388137 systemd-networkd[1423]: eth0: Link UP
Jul 9 23:48:30.388329 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:48:30.389758 systemd-networkd[1423]: eth0: Gained carrier
Jul 9 23:48:30.389780 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:48:30.391549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:48:30.404195 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:48:30.404742 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Jul 9 23:48:30.405580 systemd-resolved[1356]: Positive Trust Anchors: Jul 9 23:48:30.405597 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:48:30.405629 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:48:30.409192 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 9 23:48:30.410246 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-07-09 23:48:30.399345 UTC. Jul 9 23:48:30.422393 systemd-resolved[1356]: Defaulting to hostname 'linux'. Jul 9 23:48:30.428413 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:48:30.430848 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:48:30.432293 systemd[1]: Reached target network.target - Network. Jul 9 23:48:30.433178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:48:30.437049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:48:30.439432 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:48:30.440568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 9 23:48:30.441768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 23:48:30.443143 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 23:48:30.444231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 23:48:30.445421 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 23:48:30.446600 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 23:48:30.446634 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:48:30.447571 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:48:30.449557 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 23:48:30.451856 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 23:48:30.454918 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 23:48:30.456397 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 23:48:30.457621 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 23:48:30.460717 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 23:48:30.462169 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 23:48:30.463928 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 23:48:30.465099 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:48:30.466060 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:48:30.467070 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 9 23:48:30.467101 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:48:30.468088 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 23:48:30.470066 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 23:48:30.471902 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:48:30.473888 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 23:48:30.475839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 23:48:30.476932 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 23:48:30.479965 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 23:48:30.485229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 23:48:30.488159 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 23:48:30.491140 jq[1475]: false Jul 9 23:48:30.492420 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 23:48:30.496880 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 23:48:30.498825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 23:48:30.499316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 23:48:30.500233 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 23:48:30.503220 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 23:48:30.506930 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 9 23:48:30.507848 extend-filesystems[1476]: Found /dev/vda6 Jul 9 23:48:30.510998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 23:48:30.511262 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 23:48:30.511520 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 23:48:30.511668 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 23:48:30.514061 jq[1491]: true Jul 9 23:48:30.514988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 23:48:30.517159 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:48:30.517397 extend-filesystems[1476]: Found /dev/vda9 Jul 9 23:48:30.522446 extend-filesystems[1476]: Checking size of /dev/vda9 Jul 9 23:48:30.541302 jq[1499]: true Jul 9 23:48:30.547663 tar[1497]: linux-arm64/LICENSE Jul 9 23:48:30.547909 tar[1497]: linux-arm64/helm Jul 9 23:48:30.558497 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:48:30.558929 extend-filesystems[1476]: Resized partition /dev/vda9 Jul 9 23:48:30.573370 extend-filesystems[1516]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 23:48:30.578585 dbus-daemon[1473]: [system] SELinux support is enabled Jul 9 23:48:30.579754 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 23:48:30.582715 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:48:30.582747 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 9 23:48:30.584514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:48:30.584541 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 23:48:30.593643 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 23:48:30.601061 update_engine[1490]: I20250709 23:48:30.600642 1490 main.cc:92] Flatcar Update Engine starting Jul 9 23:48:30.605604 update_engine[1490]: I20250709 23:48:30.605554 1490 update_check_scheduler.cc:74] Next update check in 6m3s Jul 9 23:48:30.608825 systemd[1]: Started update-engine.service - Update Engine. Jul 9 23:48:30.613271 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 23:48:30.613711 systemd-logind[1488]: New seat seat0. Jul 9 23:48:30.618363 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 23:48:30.619841 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:48:30.635144 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 23:48:30.645046 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 23:48:30.645046 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 23:48:30.645046 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 23:48:30.657700 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Jul 9 23:48:30.658640 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:48:30.649450 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:48:30.649714 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 23:48:30.658003 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jul 9 23:48:30.661894 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 23:48:30.686816 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:48:30.836919 containerd[1505]: time="2025-07-09T23:48:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 23:48:30.838479 containerd[1505]: time="2025-07-09T23:48:30.838445680Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 9 23:48:30.848484 containerd[1505]: time="2025-07-09T23:48:30.848437760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" Jul 9 23:48:30.848484 containerd[1505]: time="2025-07-09T23:48:30.848475400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 23:48:30.848577 containerd[1505]: time="2025-07-09T23:48:30.848500240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 23:48:30.848830 containerd[1505]: time="2025-07-09T23:48:30.848644000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 23:48:30.848830 containerd[1505]: time="2025-07-09T23:48:30.848671240Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 23:48:30.848830 containerd[1505]: time="2025-07-09T23:48:30.848697120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:48:30.848830 containerd[1505]: time="2025-07-09T23:48:30.848754040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jul 9 23:48:30.848830 containerd[1505]: time="2025-07-09T23:48:30.848769680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849197 containerd[1505]: time="2025-07-09T23:48:30.849131160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849197 containerd[1505]: time="2025-07-09T23:48:30.849157560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849197 containerd[1505]: time="2025-07-09T23:48:30.849171080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849197 containerd[1505]: time="2025-07-09T23:48:30.849183520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849309 containerd[1505]: time="2025-07-09T23:48:30.849256960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849862 containerd[1505]: time="2025-07-09T23:48:30.849455840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849862 containerd[1505]: time="2025-07-09T23:48:30.849495680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:48:30.849862 containerd[1505]: time="2025-07-09T23:48:30.849509680Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 
23:48:30.849862 containerd[1505]: time="2025-07-09T23:48:30.849535920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 23:48:30.849862 containerd[1505]: time="2025-07-09T23:48:30.849819040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 23:48:30.849984 containerd[1505]: time="2025-07-09T23:48:30.849895520Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:48:30.853490 containerd[1505]: time="2025-07-09T23:48:30.853447520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 23:48:30.853575 containerd[1505]: time="2025-07-09T23:48:30.853509840Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 23:48:30.853575 containerd[1505]: time="2025-07-09T23:48:30.853534440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 23:48:30.853575 containerd[1505]: time="2025-07-09T23:48:30.853546280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 23:48:30.853575 containerd[1505]: time="2025-07-09T23:48:30.853558400Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 23:48:30.853575 containerd[1505]: time="2025-07-09T23:48:30.853570160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 23:48:30.853723 containerd[1505]: time="2025-07-09T23:48:30.853586360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 23:48:30.853723 containerd[1505]: time="2025-07-09T23:48:30.853598640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 23:48:30.853723 containerd[1505]: 
time="2025-07-09T23:48:30.853613960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 23:48:30.853723 containerd[1505]: time="2025-07-09T23:48:30.853624000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 23:48:30.853723 containerd[1505]: time="2025-07-09T23:48:30.853633840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 23:48:30.853723 containerd[1505]: time="2025-07-09T23:48:30.853646160Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 23:48:30.853824 containerd[1505]: time="2025-07-09T23:48:30.853793680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 23:48:30.853824 containerd[1505]: time="2025-07-09T23:48:30.853815440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 23:48:30.853862 containerd[1505]: time="2025-07-09T23:48:30.853829760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 23:48:30.853862 containerd[1505]: time="2025-07-09T23:48:30.853848240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 23:48:30.853862 containerd[1505]: time="2025-07-09T23:48:30.853858680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853869480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853880520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853890160Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853900800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853910640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 23:48:30.854001 containerd[1505]: time="2025-07-09T23:48:30.853922840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 23:48:30.854182 containerd[1505]: time="2025-07-09T23:48:30.854166840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 23:48:30.854221 containerd[1505]: time="2025-07-09T23:48:30.854184600Z" level=info msg="Start snapshots syncer" Jul 9 23:48:30.854221 containerd[1505]: time="2025-07-09T23:48:30.854209000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 23:48:30.854487 containerd[1505]: time="2025-07-09T23:48:30.854425200Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 23:48:30.854586 containerd[1505]: time="2025-07-09T23:48:30.854488800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 23:48:30.854586 containerd[1505]: time="2025-07-09T23:48:30.854554040Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 23:48:30.854687 containerd[1505]: time="2025-07-09T23:48:30.854665960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 23:48:30.854720 containerd[1505]: time="2025-07-09T23:48:30.854694960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 23:48:30.854720 containerd[1505]: time="2025-07-09T23:48:30.854706640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 23:48:30.854720 containerd[1505]: time="2025-07-09T23:48:30.854717800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 23:48:30.854767 containerd[1505]: time="2025-07-09T23:48:30.854728760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 23:48:30.854767 containerd[1505]: time="2025-07-09T23:48:30.854738920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 23:48:30.854767 containerd[1505]: time="2025-07-09T23:48:30.854748720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 23:48:30.854819 containerd[1505]: time="2025-07-09T23:48:30.854772760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 23:48:30.854819 containerd[1505]: time="2025-07-09T23:48:30.854789120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 23:48:30.854819 containerd[1505]: time="2025-07-09T23:48:30.854808320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 23:48:30.854868 containerd[1505]: time="2025-07-09T23:48:30.854839280Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:48:30.854868 containerd[1505]: time="2025-07-09T23:48:30.854858920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:48:30.854903 containerd[1505]: time="2025-07-09T23:48:30.854868280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:48:30.854903 containerd[1505]: time="2025-07-09T23:48:30.854877480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:48:30.854903 containerd[1505]: time="2025-07-09T23:48:30.854884760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 23:48:30.854903 containerd[1505]: time="2025-07-09T23:48:30.854897200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 23:48:30.854968 containerd[1505]: time="2025-07-09T23:48:30.854907680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 23:48:30.855095 containerd[1505]: time="2025-07-09T23:48:30.855025800Z" level=info msg="runtime interface created" Jul 9 23:48:30.855095 containerd[1505]: time="2025-07-09T23:48:30.855036800Z" level=info msg="created NRI interface" Jul 9 23:48:30.855095 containerd[1505]: time="2025-07-09T23:48:30.855046520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 23:48:30.855095 containerd[1505]: time="2025-07-09T23:48:30.855057960Z" level=info msg="Connect containerd service" Jul 9 23:48:30.855095 containerd[1505]: time="2025-07-09T23:48:30.855083640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:48:30.855843 containerd[1505]: 
time="2025-07-09T23:48:30.855810680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:48:30.968378 containerd[1505]: time="2025-07-09T23:48:30.968309760Z" level=info msg="Start subscribing containerd event" Jul 9 23:48:30.968570 containerd[1505]: time="2025-07-09T23:48:30.968454680Z" level=info msg="Start recovering state" Jul 9 23:48:30.968811 containerd[1505]: time="2025-07-09T23:48:30.968696200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:48:30.968811 containerd[1505]: time="2025-07-09T23:48:30.968762520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:48:30.968811 containerd[1505]: time="2025-07-09T23:48:30.968787960Z" level=info msg="Start event monitor" Jul 9 23:48:30.968988 containerd[1505]: time="2025-07-09T23:48:30.968951080Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:48:30.968988 containerd[1505]: time="2025-07-09T23:48:30.968967440Z" level=info msg="Start streaming server" Jul 9 23:48:30.969138 containerd[1505]: time="2025-07-09T23:48:30.968976400Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:48:30.969138 containerd[1505]: time="2025-07-09T23:48:30.969105160Z" level=info msg="runtime interface starting up..." Jul 9 23:48:30.969270 containerd[1505]: time="2025-07-09T23:48:30.969230680Z" level=info msg="starting plugins..." Jul 9 23:48:30.969376 containerd[1505]: time="2025-07-09T23:48:30.969256360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:48:30.970798 containerd[1505]: time="2025-07-09T23:48:30.970711320Z" level=info msg="containerd successfully booted in 0.134391s" Jul 9 23:48:30.970809 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 9 23:48:30.986186 tar[1497]: linux-arm64/README.md Jul 9 23:48:31.003048 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:48:31.675188 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:48:31.693798 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:48:31.697459 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:48:31.718510 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:48:31.718743 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:48:31.721832 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:48:31.761211 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:48:31.764292 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:48:31.766375 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 23:48:31.767677 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 23:48:32.077254 systemd-networkd[1423]: eth0: Gained IPv6LL Jul 9 23:48:32.080071 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:48:32.081971 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:48:32.084486 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 23:48:32.086792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:48:32.097048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:48:32.117063 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:48:32.117314 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 23:48:32.118926 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:48:32.121427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 9 23:48:32.739317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:48:32.740987 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:48:32.744446 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:48:32.746237 systemd[1]: Startup finished in 2.159s (kernel) + 5.050s (initrd) + 4.114s (userspace) = 11.324s. Jul 9 23:48:33.243408 kubelet[1606]: E0709 23:48:33.243358 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:48:33.245787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:48:33.245915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:48:33.246215 systemd[1]: kubelet.service: Consumed 866ms CPU time, 258.8M memory peak. Jul 9 23:48:36.838724 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:48:36.839898 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:36584.service - OpenSSH per-connection server daemon (10.0.0.1:36584). Jul 9 23:48:36.930363 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 36584 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:48:36.932616 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:36.939345 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:48:36.940354 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:48:36.945986 systemd-logind[1488]: New session 1 of user core. 
Jul 9 23:48:36.965152 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:48:36.967980 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:48:36.994434 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:48:36.996933 systemd-logind[1488]: New session c1 of user core. Jul 9 23:48:37.115133 systemd[1624]: Queued start job for default target default.target. Jul 9 23:48:37.127177 systemd[1624]: Created slice app.slice - User Application Slice. Jul 9 23:48:37.127207 systemd[1624]: Reached target paths.target - Paths. Jul 9 23:48:37.127246 systemd[1624]: Reached target timers.target - Timers. Jul 9 23:48:37.128560 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:48:37.139389 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:48:37.139506 systemd[1624]: Reached target sockets.target - Sockets. Jul 9 23:48:37.139562 systemd[1624]: Reached target basic.target - Basic System. Jul 9 23:48:37.139596 systemd[1624]: Reached target default.target - Main User Target. Jul 9 23:48:37.139624 systemd[1624]: Startup finished in 136ms. Jul 9 23:48:37.140079 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:48:37.153336 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:48:37.217275 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:36644.service - OpenSSH per-connection server daemon (10.0.0.1:36644). Jul 9 23:48:37.283743 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:48:37.284689 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:37.294682 systemd-logind[1488]: New session 2 of user core. Jul 9 23:48:37.309347 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 23:48:37.362132 sshd[1637]: Connection closed by 10.0.0.1 port 36644
Jul 9 23:48:37.362540 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.374739 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:36644.service: Deactivated successfully.
Jul 9 23:48:37.376653 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 23:48:37.378824 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit.
Jul 9 23:48:37.381447 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648).
Jul 9 23:48:37.382042 systemd-logind[1488]: Removed session 2.
Jul 9 23:48:37.438219 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.439883 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.444657 systemd-logind[1488]: New session 3 of user core.
Jul 9 23:48:37.453329 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 23:48:37.501429 sshd[1645]: Connection closed by 10.0.0.1 port 36648
Jul 9 23:48:37.501783 sshd-session[1643]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.518400 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:36648.service: Deactivated successfully.
Jul 9 23:48:37.520023 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 23:48:37.521919 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit.
Jul 9 23:48:37.525260 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:36654.service - OpenSSH per-connection server daemon (10.0.0.1:36654).
Jul 9 23:48:37.525925 systemd-logind[1488]: Removed session 3.
Jul 9 23:48:37.572512 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 36654 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.573765 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.578873 systemd-logind[1488]: New session 4 of user core.
Jul 9 23:48:37.587332 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 23:48:37.641872 sshd[1653]: Connection closed by 10.0.0.1 port 36654
Jul 9 23:48:37.641989 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.656934 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:36654.service: Deactivated successfully.
Jul 9 23:48:37.658708 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 23:48:37.659580 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit.
Jul 9 23:48:37.662301 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:36668.service - OpenSSH per-connection server daemon (10.0.0.1:36668).
Jul 9 23:48:37.663329 systemd-logind[1488]: Removed session 4.
Jul 9 23:48:37.719334 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 36668 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.720644 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.724820 systemd-logind[1488]: New session 5 of user core.
Jul 9 23:48:37.736357 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 23:48:37.803050 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 23:48:37.803358 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:37.820745 sudo[1662]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:37.822491 sshd[1661]: Connection closed by 10.0.0.1 port 36668
Jul 9 23:48:37.822930 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:37.838708 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:36668.service: Deactivated successfully.
Jul 9 23:48:37.841610 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 23:48:37.842310 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit.
Jul 9 23:48:37.844569 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672).
Jul 9 23:48:37.848332 systemd-logind[1488]: Removed session 5.
Jul 9 23:48:37.900803 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:37.902380 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:37.907801 systemd-logind[1488]: New session 6 of user core.
Jul 9 23:48:37.918284 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 23:48:37.970954 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 23:48:37.971554 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.054279 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:38.059500 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 23:48:38.059770 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.071632 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:48:38.116768 augenrules[1694]: No rules
Jul 9 23:48:38.117425 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:48:38.117640 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:48:38.119335 sudo[1671]: pam_unix(sudo:session): session closed for user root
Jul 9 23:48:38.121502 sshd[1670]: Connection closed by 10.0.0.1 port 36672
Jul 9 23:48:38.121897 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:38.141377 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:36672.service: Deactivated successfully.
Jul 9 23:48:38.142994 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 23:48:38.143839 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
Jul 9 23:48:38.146692 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:36678.service - OpenSSH per-connection server daemon (10.0.0.1:36678).
Jul 9 23:48:38.147407 systemd-logind[1488]: Removed session 6.
Jul 9 23:48:38.210701 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 36678 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:48:38.212214 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:38.217194 systemd-logind[1488]: New session 7 of user core.
Jul 9 23:48:38.225317 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 23:48:38.276626 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 23:48:38.279800 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:48:38.710602 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 23:48:38.727526 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 23:48:39.346281 dockerd[1727]: time="2025-07-09T23:48:39.346215140Z" level=info msg="Starting up"
Jul 9 23:48:39.348260 dockerd[1727]: time="2025-07-09T23:48:39.348235983Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 23:48:39.420035 dockerd[1727]: time="2025-07-09T23:48:39.419930692Z" level=info msg="Loading containers: start."
Jul 9 23:48:39.431168 kernel: Initializing XFRM netlink socket
Jul 9 23:48:39.698718 systemd-networkd[1423]: docker0: Link UP
Jul 9 23:48:39.704402 dockerd[1727]: time="2025-07-09T23:48:39.704349964Z" level=info msg="Loading containers: done."
Jul 9 23:48:39.720927 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4217295841-merged.mount: Deactivated successfully.
Jul 9 23:48:39.722967 dockerd[1727]: time="2025-07-09T23:48:39.722905262Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 23:48:39.723061 dockerd[1727]: time="2025-07-09T23:48:39.723006678Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 9 23:48:39.723172 dockerd[1727]: time="2025-07-09T23:48:39.723148925Z" level=info msg="Initializing buildkit"
Jul 9 23:48:39.754503 dockerd[1727]: time="2025-07-09T23:48:39.754447693Z" level=info msg="Completed buildkit initialization"
Jul 9 23:48:39.762104 dockerd[1727]: time="2025-07-09T23:48:39.762023944Z" level=info msg="Daemon has completed initialization"
Jul 9 23:48:39.762381 dockerd[1727]: time="2025-07-09T23:48:39.762177828Z" level=info msg="API listen on /run/docker.sock"
Jul 9 23:48:39.762269 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 23:48:40.275215 containerd[1505]: time="2025-07-09T23:48:40.275164746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 9 23:48:40.903671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133547885.mount: Deactivated successfully.
Jul 9 23:48:41.669396 containerd[1505]: time="2025-07-09T23:48:41.669339332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:41.669881 containerd[1505]: time="2025-07-09T23:48:41.669853418Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 9 23:48:41.670604 containerd[1505]: time="2025-07-09T23:48:41.670577778Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:41.675224 containerd[1505]: time="2025-07-09T23:48:41.675190635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:41.676253 containerd[1505]: time="2025-07-09T23:48:41.676224566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.401015271s"
Jul 9 23:48:41.676464 containerd[1505]: time="2025-07-09T23:48:41.676336341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 9 23:48:41.680643 containerd[1505]: time="2025-07-09T23:48:41.680586040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 23:48:42.643619 containerd[1505]: time="2025-07-09T23:48:42.643571282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.643989 containerd[1505]: time="2025-07-09T23:48:42.643939602Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 9 23:48:42.644810 containerd[1505]: time="2025-07-09T23:48:42.644768385Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.647691 containerd[1505]: time="2025-07-09T23:48:42.647659244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:42.648670 containerd[1505]: time="2025-07-09T23:48:42.648597442Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 967.953895ms"
Jul 9 23:48:42.648670 containerd[1505]: time="2025-07-09T23:48:42.648667707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 9 23:48:42.649284 containerd[1505]: time="2025-07-09T23:48:42.649260020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 23:48:43.496537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:48:43.498603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:43.747931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:43.751932 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:48:43.775134 containerd[1505]: time="2025-07-09T23:48:43.775071439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.775829 containerd[1505]: time="2025-07-09T23:48:43.775807166Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 9 23:48:43.776850 containerd[1505]: time="2025-07-09T23:48:43.776820075Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.780077 containerd[1505]: time="2025-07-09T23:48:43.780032327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:43.781481 containerd[1505]: time="2025-07-09T23:48:43.781433116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.132130865s"
Jul 9 23:48:43.781563 containerd[1505]: time="2025-07-09T23:48:43.781483625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 9 23:48:43.782036 containerd[1505]: time="2025-07-09T23:48:43.781986041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 23:48:43.791169 kubelet[2005]: E0709 23:48:43.791089 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:48:43.794672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:48:43.794856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:48:43.795366 systemd[1]: kubelet.service: Consumed 144ms CPU time, 105.7M memory peak.
Jul 9 23:48:44.710228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676575261.mount: Deactivated successfully.
Jul 9 23:48:45.108264 containerd[1505]: time="2025-07-09T23:48:45.108143625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.108769 containerd[1505]: time="2025-07-09T23:48:45.108739309Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 9 23:48:45.109586 containerd[1505]: time="2025-07-09T23:48:45.109553630Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.111458 containerd[1505]: time="2025-07-09T23:48:45.111397190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:45.112148 containerd[1505]: time="2025-07-09T23:48:45.111918648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.329800835s"
Jul 9 23:48:45.112148 containerd[1505]: time="2025-07-09T23:48:45.111950602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 9 23:48:45.112423 containerd[1505]: time="2025-07-09T23:48:45.112354203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 23:48:45.636807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401694101.mount: Deactivated successfully.
Jul 9 23:48:46.352830 containerd[1505]: time="2025-07-09T23:48:46.352784079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:46.353810 containerd[1505]: time="2025-07-09T23:48:46.353761734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 9 23:48:46.354755 containerd[1505]: time="2025-07-09T23:48:46.354720873Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:46.357451 containerd[1505]: time="2025-07-09T23:48:46.357421602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:46.358523 containerd[1505]: time="2025-07-09T23:48:46.358479282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.246097404s"
Jul 9 23:48:46.358523 containerd[1505]: time="2025-07-09T23:48:46.358521954Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 9 23:48:46.359006 containerd[1505]: time="2025-07-09T23:48:46.358984267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 23:48:46.792596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355438968.mount: Deactivated successfully.
Jul 9 23:48:46.797675 containerd[1505]: time="2025-07-09T23:48:46.797631764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:46.798361 containerd[1505]: time="2025-07-09T23:48:46.798319154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 9 23:48:46.799425 containerd[1505]: time="2025-07-09T23:48:46.799396111Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:46.801246 containerd[1505]: time="2025-07-09T23:48:46.801215087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:48:46.801893 containerd[1505]: time="2025-07-09T23:48:46.801863724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.849143ms"
Jul 9 23:48:46.801933 containerd[1505]: time="2025-07-09T23:48:46.801897717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 23:48:46.802486 containerd[1505]: time="2025-07-09T23:48:46.802459291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 23:48:47.265403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112673220.mount: Deactivated successfully.
Jul 9 23:48:48.598555 containerd[1505]: time="2025-07-09T23:48:48.598504201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:48.599572 containerd[1505]: time="2025-07-09T23:48:48.599572091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 9 23:48:48.600315 containerd[1505]: time="2025-07-09T23:48:48.600277326Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:48.604751 containerd[1505]: time="2025-07-09T23:48:48.604685464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:48:48.606413 containerd[1505]: time="2025-07-09T23:48:48.606262544Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.80376726s"
Jul 9 23:48:48.606413 containerd[1505]: time="2025-07-09T23:48:48.606304137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 9 23:48:53.356340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:53.356480 systemd[1]: kubelet.service: Consumed 144ms CPU time, 105.7M memory peak.
Jul 9 23:48:53.358684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:53.377795 systemd[1]: Reload requested from client PID 2163 ('systemctl') (unit session-7.scope)...
Jul 9 23:48:53.377814 systemd[1]: Reloading...
Jul 9 23:48:53.458213 zram_generator::config[2208]: No configuration found.
Jul 9 23:48:53.578888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:48:53.665054 systemd[1]: Reloading finished in 286 ms.
Jul 9 23:48:53.720582 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 23:48:53.720657 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 23:48:53.722164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:53.722218 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95M memory peak.
Jul 9 23:48:53.723730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:48:53.845950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:48:53.850625 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:48:53.884546 kubelet[2250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:48:53.884546 kubelet[2250]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:48:53.884546 kubelet[2250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:48:53.884877 kubelet[2250]: I0709 23:48:53.884601 2250 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:48:54.680158 kubelet[2250]: I0709 23:48:54.679557 2250 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 23:48:54.680158 kubelet[2250]: I0709 23:48:54.679600 2250 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:48:54.680158 kubelet[2250]: I0709 23:48:54.679835 2250 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 23:48:54.709958 kubelet[2250]: E0709 23:48:54.709901 2250 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 9 23:48:54.715382 kubelet[2250]: I0709 23:48:54.715331 2250 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:48:54.732107 kubelet[2250]: I0709 23:48:54.732075 2250 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:48:54.735122 kubelet[2250]: I0709 23:48:54.735093 2250 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:48:54.735473 kubelet[2250]: I0709 23:48:54.735428 2250 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:48:54.735648 kubelet[2250]: I0709 23:48:54.735462 2250 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:48:54.735741 kubelet[2250]: I0709 23:48:54.735710 2250 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:48:54.735741 kubelet[2250]: I0709 23:48:54.735719 2250 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 23:48:54.736572 kubelet[2250]: I0709 23:48:54.736537 2250 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:48:54.741033 kubelet[2250]: I0709 23:48:54.740998 2250 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 23:48:54.741033 kubelet[2250]: I0709 23:48:54.741031 2250 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:48:54.741151 kubelet[2250]: I0709 23:48:54.741064 2250 kubelet.go:386] "Adding apiserver pod source"
Jul 9 23:48:54.742239 kubelet[2250]: I0709 23:48:54.742088 2250 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:48:54.743235 kubelet[2250]: I0709 23:48:54.743190 2250 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:48:54.744555 kubelet[2250]: I0709 23:48:54.744270 2250 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 23:48:54.744555 kubelet[2250]: W0709 23:48:54.744455 2250 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 23:48:54.746895 kubelet[2250]: E0709 23:48:54.746862 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 9 23:48:54.747106 kubelet[2250]: E0709 23:48:54.747074 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 9 23:48:54.747299 kubelet[2250]: I0709 23:48:54.747271 2250 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:48:54.747349 kubelet[2250]: I0709 23:48:54.747319 2250 server.go:1289] "Started kubelet"
Jul 9 23:48:54.748674 kubelet[2250]: I0709 23:48:54.748646 2250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:48:54.752153 kubelet[2250]: I0709 23:48:54.751326 2250 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:48:54.752153 kubelet[2250]: E0709 23:48:54.751604 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 23:48:54.752153 kubelet[2250]: I0709 23:48:54.751639 2250 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:48:54.752153 kubelet[2250]: I0709 23:48:54.751788 2250 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:48:54.752153 kubelet[2250]: I0709 23:48:54.751935 2250 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:48:54.752501 kubelet[2250]: I0709 23:48:54.752468 2250 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 23:48:54.752655 kubelet[2250]: E0709 23:48:54.752621 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 23:48:54.759391 kubelet[2250]: E0709 23:48:54.757845 2250 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850ba27b4d23640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:48:54.747289152 +0000 UTC m=+0.893078600,LastTimestamp:2025-07-09 23:48:54.747289152 +0000 UTC m=+0.893078600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9 23:48:54.759391 kubelet[2250]: I0709 23:48:54.759341 2250 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:48:54.760612 kubelet[2250]: E0709 23:48:54.760567 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms"
Jul 9 23:48:54.760848 kubelet[2250]: E0709 23:48:54.760828 2250 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 23:48:54.761130 kubelet[2250]: I0709 23:48:54.761043 2250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:48:54.761337 kubelet[2250]: I0709 23:48:54.761310 2250 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:48:54.761776 kubelet[2250]: I0709 23:48:54.761754 2250 factory.go:223] Registration of the containerd container factory successfully
Jul 9 23:48:54.761853 kubelet[2250]: I0709 23:48:54.761843 2250 factory.go:223] Registration of the systemd container factory successfully
Jul 9 23:48:54.762005 kubelet[2250]: I0709 23:48:54.761978 2250 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 23:48:54.772611 kubelet[2250]: I0709 23:48:54.772587 2250 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 23:48:54.772754 kubelet[2250]: I0709 23:48:54.772741 2250 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 23:48:54.772808 kubelet[2250]: I0709 23:48:54.772800 2250 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:48:54.778971 kubelet[2250]: I0709 23:48:54.778911 2250 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 23:48:54.780204 kubelet[2250]: I0709 23:48:54.780168 2250 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 23:48:54.780204 kubelet[2250]: I0709 23:48:54.780199 2250 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 23:48:54.780308 kubelet[2250]: I0709 23:48:54.780219 2250 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 23:48:54.780308 kubelet[2250]: I0709 23:48:54.780226 2250 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 23:48:54.780308 kubelet[2250]: E0709 23:48:54.780268 2250 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:48:54.781194 kubelet[2250]: E0709 23:48:54.781166 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 23:48:54.846891 kubelet[2250]: I0709 23:48:54.846800 2250 policy_none.go:49] "None policy: Start" Jul 9 23:48:54.846891 kubelet[2250]: I0709 23:48:54.846834 2250 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:48:54.846891 kubelet[2250]: I0709 23:48:54.846847 2250 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:48:54.851927 kubelet[2250]: E0709 23:48:54.851889 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:54.852441 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:48:54.870318 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:48:54.873376 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 9 23:48:54.880751 kubelet[2250]: E0709 23:48:54.880708 2250 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:48:54.884229 kubelet[2250]: E0709 23:48:54.884141 2250 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 23:48:54.884370 kubelet[2250]: I0709 23:48:54.884355 2250 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:48:54.884839 kubelet[2250]: I0709 23:48:54.884370 2250 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:48:54.884839 kubelet[2250]: I0709 23:48:54.884564 2250 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:48:54.886269 kubelet[2250]: E0709 23:48:54.886167 2250 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:48:54.886269 kubelet[2250]: E0709 23:48:54.886214 2250 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 23:48:54.962140 kubelet[2250]: E0709 23:48:54.961990 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms" Jul 9 23:48:54.986545 kubelet[2250]: I0709 23:48:54.986475 2250 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:54.986999 kubelet[2250]: E0709 23:48:54.986958 2250 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 9 23:48:55.092625 systemd[1]: Created slice 
kubepods-burstable-podb8da5d9fb6f884d63e72a30c2315ccec.slice - libcontainer container kubepods-burstable-podb8da5d9fb6f884d63e72a30c2315ccec.slice. Jul 9 23:48:55.121659 kubelet[2250]: E0709 23:48:55.121609 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.124744 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 9 23:48:55.126994 kubelet[2250]: E0709 23:48:55.126960 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.128894 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 9 23:48:55.130694 kubelet[2250]: E0709 23:48:55.130657 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.153895 kubelet[2250]: I0709 23:48:55.153853 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:55.153963 kubelet[2250]: I0709 23:48:55.153892 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 
23:48:55.153963 kubelet[2250]: I0709 23:48:55.153924 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:55.153963 kubelet[2250]: I0709 23:48:55.153948 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:55.154059 kubelet[2250]: I0709 23:48:55.153966 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:55.154059 kubelet[2250]: I0709 23:48:55.154033 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:55.154126 kubelet[2250]: I0709 23:48:55.154050 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 
23:48:55.154206 kubelet[2250]: I0709 23:48:55.154147 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:55.154243 kubelet[2250]: I0709 23:48:55.154197 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:55.188078 kubelet[2250]: I0709 23:48:55.188029 2250 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:55.188417 kubelet[2250]: E0709 23:48:55.188393 2250 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 9 23:48:55.363449 kubelet[2250]: E0709 23:48:55.363331 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms" Jul 9 23:48:55.423265 containerd[1505]: time="2025-07-09T23:48:55.423218269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8da5d9fb6f884d63e72a30c2315ccec,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:55.427753 containerd[1505]: time="2025-07-09T23:48:55.427708831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:55.431568 
containerd[1505]: time="2025-07-09T23:48:55.431533168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:55.456630 containerd[1505]: time="2025-07-09T23:48:55.456473664Z" level=info msg="connecting to shim 118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36" address="unix:///run/containerd/s/f636e7efdb0227aeedb2df6f660bd98497de119de8009f0fa35b32a3fc0861c8" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:55.466141 containerd[1505]: time="2025-07-09T23:48:55.464483046Z" level=info msg="connecting to shim 7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4" address="unix:///run/containerd/s/2021ccea7183025053e8b24d406ce694dc6418a3cd5546ad27d790eab3521e9b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:55.478131 containerd[1505]: time="2025-07-09T23:48:55.478070116Z" level=info msg="connecting to shim 54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a" address="unix:///run/containerd/s/c768efbb38272645c509f324afd0eb2c4e9b8669674a6a4ca760322ec7b0edc7" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:55.492355 systemd[1]: Started cri-containerd-118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36.scope - libcontainer container 118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36. Jul 9 23:48:55.495852 systemd[1]: Started cri-containerd-7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4.scope - libcontainer container 7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4. Jul 9 23:48:55.498982 systemd[1]: Started cri-containerd-54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a.scope - libcontainer container 54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a. 
Jul 9 23:48:55.545505 containerd[1505]: time="2025-07-09T23:48:55.545343757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8da5d9fb6f884d63e72a30c2315ccec,Namespace:kube-system,Attempt:0,} returns sandbox id \"118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36\"" Jul 9 23:48:55.545505 containerd[1505]: time="2025-07-09T23:48:55.545509213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a\"" Jul 9 23:48:55.548854 containerd[1505]: time="2025-07-09T23:48:55.548585096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4\"" Jul 9 23:48:55.551154 containerd[1505]: time="2025-07-09T23:48:55.551097379Z" level=info msg="CreateContainer within sandbox \"118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:48:55.552697 containerd[1505]: time="2025-07-09T23:48:55.552654238Z" level=info msg="CreateContainer within sandbox \"54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:48:55.562212 containerd[1505]: time="2025-07-09T23:48:55.562160528Z" level=info msg="Container 75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:55.562949 kubelet[2250]: E0709 23:48:55.562891 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 9 23:48:55.568038 containerd[1505]: time="2025-07-09T23:48:55.567918469Z" level=info msg="CreateContainer within sandbox \"7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:48:55.576551 containerd[1505]: time="2025-07-09T23:48:55.576513968Z" level=info msg="CreateContainer within sandbox \"54a6d4f7a04aa21510a629badb3cff55f92eea5e65cbebcb39d7f496cdd07a8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a\"" Jul 9 23:48:55.577495 containerd[1505]: time="2025-07-09T23:48:55.577468073Z" level=info msg="StartContainer for \"75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a\"" Jul 9 23:48:55.577872 containerd[1505]: time="2025-07-09T23:48:55.577846219Z" level=info msg="Container 2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:55.578586 containerd[1505]: time="2025-07-09T23:48:55.578559438Z" level=info msg="connecting to shim 75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a" address="unix:///run/containerd/s/c768efbb38272645c509f324afd0eb2c4e9b8669674a6a4ca760322ec7b0edc7" protocol=ttrpc version=3 Jul 9 23:48:55.584268 containerd[1505]: time="2025-07-09T23:48:55.584208475Z" level=info msg="Container d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:55.586615 containerd[1505]: time="2025-07-09T23:48:55.586561381Z" level=info msg="CreateContainer within sandbox \"118f899fbe643798bbc6da42ca27f4320d9b704c952d6b35959370c0ce749c36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92\"" Jul 9 
23:48:55.587427 containerd[1505]: time="2025-07-09T23:48:55.587389863Z" level=info msg="StartContainer for \"2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92\"" Jul 9 23:48:55.589531 kubelet[2250]: I0709 23:48:55.589498 2250 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:55.589889 containerd[1505]: time="2025-07-09T23:48:55.589276675Z" level=info msg="connecting to shim 2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92" address="unix:///run/containerd/s/f636e7efdb0227aeedb2df6f660bd98497de119de8009f0fa35b32a3fc0861c8" protocol=ttrpc version=3 Jul 9 23:48:55.590056 kubelet[2250]: E0709 23:48:55.589852 2250 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 9 23:48:55.592337 containerd[1505]: time="2025-07-09T23:48:55.592294486Z" level=info msg="CreateContainer within sandbox \"7b000dcc1295d678ec77f39601c5fde8f08ff2e9e8fb6a2059741f69c28f40f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2\"" Jul 9 23:48:55.592914 containerd[1505]: time="2025-07-09T23:48:55.592869444Z" level=info msg="StartContainer for \"d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2\"" Jul 9 23:48:55.595336 containerd[1505]: time="2025-07-09T23:48:55.595208872Z" level=info msg="connecting to shim d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2" address="unix:///run/containerd/s/2021ccea7183025053e8b24d406ce694dc6418a3cd5546ad27d790eab3521e9b" protocol=ttrpc version=3 Jul 9 23:48:55.597390 systemd[1]: Started cri-containerd-75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a.scope - libcontainer container 75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a. 
Jul 9 23:48:55.605568 systemd[1]: Started cri-containerd-2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92.scope - libcontainer container 2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92. Jul 9 23:48:55.623476 systemd[1]: Started cri-containerd-d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2.scope - libcontainer container d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2. Jul 9 23:48:55.638189 containerd[1505]: time="2025-07-09T23:48:55.638148851Z" level=info msg="StartContainer for \"75f6d72e0f08136198800228adce30e25c00ef56b81645872bc0d384ec16179a\" returns successfully" Jul 9 23:48:55.639300 kubelet[2250]: E0709 23:48:55.639257 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 9 23:48:55.661669 containerd[1505]: time="2025-07-09T23:48:55.661556525Z" level=info msg="StartContainer for \"2b74a584c9814dd295c5ac4af165bfe004014e5d9e8d3d5c9b991cbb0784cf92\" returns successfully" Jul 9 23:48:55.714935 containerd[1505]: time="2025-07-09T23:48:55.714887188Z" level=info msg="StartContainer for \"d45838d9dd4642569a9a9fa9968e2ccb625d2a657d22c9da8b13545ed7b685b2\" returns successfully" Jul 9 23:48:55.793270 kubelet[2250]: E0709 23:48:55.793057 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.801030 kubelet[2250]: E0709 23:48:55.801003 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.806013 kubelet[2250]: E0709 23:48:55.805734 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:55.939516 kubelet[2250]: E0709 23:48:55.939467 2250 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 9 23:48:56.392895 kubelet[2250]: I0709 23:48:56.391790 2250 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:48:56.806793 kubelet[2250]: E0709 23:48:56.806659 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:56.807100 kubelet[2250]: E0709 23:48:56.807068 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:56.808138 kubelet[2250]: E0709 23:48:56.807417 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:58.363464 kubelet[2250]: E0709 23:48:58.363430 2250 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:48:58.414560 kubelet[2250]: E0709 23:48:58.414500 2250 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 23:48:58.504800 kubelet[2250]: I0709 23:48:58.504745 2250 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:48:58.504800 kubelet[2250]: E0709 23:48:58.504790 2250 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Jul 9 23:48:58.513754 kubelet[2250]: E0709 23:48:58.513713 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:58.614769 kubelet[2250]: E0709 23:48:58.614474 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:58.715399 kubelet[2250]: E0709 23:48:58.715358 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:58.815784 kubelet[2250]: E0709 23:48:58.815746 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:58.916626 kubelet[2250]: E0709 23:48:58.916498 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:59.017292 kubelet[2250]: E0709 23:48:59.017230 2250 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:48:59.161523 kubelet[2250]: I0709 23:48:59.161468 2250 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:48:59.167299 kubelet[2250]: E0709 23:48:59.167209 2250 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 23:48:59.167299 kubelet[2250]: I0709 23:48:59.167241 2250 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:59.169439 kubelet[2250]: E0709 23:48:59.169385 2250 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 23:48:59.169439 kubelet[2250]: I0709 23:48:59.169413 2250 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:59.171050 kubelet[2250]: E0709 23:48:59.171015 2250 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:48:59.750635 kubelet[2250]: I0709 23:48:59.750585 2250 apiserver.go:52] "Watching apiserver" Jul 9 23:48:59.852407 kubelet[2250]: I0709 23:48:59.852332 2250 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:49:00.837695 systemd[1]: Reload requested from client PID 2534 ('systemctl') (unit session-7.scope)... Jul 9 23:49:00.837712 systemd[1]: Reloading... Jul 9 23:49:00.918155 zram_generator::config[2580]: No configuration found. Jul 9 23:49:00.993563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:49:01.094665 systemd[1]: Reloading finished in 256 ms. Jul 9 23:49:01.123879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:49:01.139444 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:49:01.139712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:49:01.139797 systemd[1]: kubelet.service: Consumed 1.317s CPU time, 128.9M memory peak. Jul 9 23:49:01.141848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:49:01.294566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:49:01.300610 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:49:01.343299 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:49:01.343299 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:49:01.343299 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:49:01.343299 kubelet[2619]: I0709 23:49:01.343231 2619 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:49:01.351724 kubelet[2619]: I0709 23:49:01.351592 2619 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 9 23:49:01.351724 kubelet[2619]: I0709 23:49:01.351633 2619 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:49:01.352406 kubelet[2619]: I0709 23:49:01.352374 2619 server.go:956] "Client rotation is on, will bootstrap in background" Jul 9 23:49:01.358207 kubelet[2619]: I0709 23:49:01.357905 2619 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 9 23:49:01.360531 kubelet[2619]: I0709 23:49:01.360493 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:49:01.367155 kubelet[2619]: I0709 23:49:01.367067 2619 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jul 9 23:49:01.371478 kubelet[2619]: I0709 23:49:01.371443 2619 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:49:01.371797 kubelet[2619]: I0709 23:49:01.371768 2619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:49:01.372053 kubelet[2619]: I0709 23:49:01.371799 2619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Jul 9 23:49:01.372167 kubelet[2619]: I0709 23:49:01.372063 2619 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:49:01.372167 kubelet[2619]: I0709 23:49:01.372074 2619 container_manager_linux.go:303] "Creating device plugin manager" Jul 9 23:49:01.372167 kubelet[2619]: I0709 23:49:01.372144 2619 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:49:01.372350 kubelet[2619]: I0709 23:49:01.372335 2619 kubelet.go:480] "Attempting to sync node with API server" Jul 9 23:49:01.372375 kubelet[2619]: I0709 23:49:01.372352 2619 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:49:01.372397 kubelet[2619]: I0709 23:49:01.372376 2619 kubelet.go:386] "Adding apiserver pod source" Jul 9 23:49:01.372397 kubelet[2619]: I0709 23:49:01.372391 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:49:01.378124 kubelet[2619]: I0709 23:49:01.377435 2619 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:49:01.379388 kubelet[2619]: I0709 23:49:01.379321 2619 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 9 23:49:01.386924 kubelet[2619]: I0709 23:49:01.386382 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:49:01.386924 kubelet[2619]: I0709 23:49:01.386425 2619 server.go:1289] "Started kubelet" Jul 9 23:49:01.389952 kubelet[2619]: I0709 23:49:01.389923 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:49:01.392191 kubelet[2619]: I0709 23:49:01.392137 2619 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:49:01.394206 kubelet[2619]: I0709 23:49:01.393750 2619 factory.go:223] Registration of the systemd container factory successfully Jul 9 23:49:01.394206 kubelet[2619]: I0709 23:49:01.393872 2619 factory.go:221] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:49:01.394206 kubelet[2619]: I0709 23:49:01.392571 2619 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:49:01.395066 kubelet[2619]: I0709 23:49:01.395014 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:49:01.395189 kubelet[2619]: I0709 23:49:01.395151 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:49:01.395275 kubelet[2619]: I0709 23:49:01.392561 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:49:01.395497 kubelet[2619]: I0709 23:49:01.395477 2619 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:49:01.395615 kubelet[2619]: I0709 23:49:01.395055 2619 server.go:317] "Adding debug handlers to kubelet server" Jul 9 23:49:01.396027 kubelet[2619]: I0709 23:49:01.395996 2619 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:49:01.397394 kubelet[2619]: I0709 23:49:01.397357 2619 factory.go:223] Registration of the containerd container factory successfully Jul 9 23:49:01.397652 kubelet[2619]: E0709 23:49:01.397622 2619 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:49:01.409858 kubelet[2619]: I0709 23:49:01.409820 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 23:49:01.410924 kubelet[2619]: I0709 23:49:01.410896 2619 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 9 23:49:01.410924 kubelet[2619]: I0709 23:49:01.410929 2619 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 23:49:01.411068 kubelet[2619]: I0709 23:49:01.411049 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 23:49:01.411068 kubelet[2619]: I0709 23:49:01.411067 2619 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 23:49:01.412206 kubelet[2619]: E0709 23:49:01.411111 2619 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:49:01.437768 kubelet[2619]: I0709 23:49:01.437718 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:49:01.437768 kubelet[2619]: I0709 23:49:01.437758 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:49:01.437932 kubelet[2619]: I0709 23:49:01.437786 2619 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:49:01.437932 kubelet[2619]: I0709 23:49:01.437923 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:49:01.437977 kubelet[2619]: I0709 23:49:01.437933 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:49:01.437977 kubelet[2619]: I0709 23:49:01.437963 2619 policy_none.go:49] "None policy: Start" Jul 9 23:49:01.437977 kubelet[2619]: I0709 23:49:01.437974 2619 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:49:01.438037 kubelet[2619]: I0709 23:49:01.437984 2619 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:49:01.438138 kubelet[2619]: I0709 23:49:01.438068 2619 state_mem.go:75] "Updated machine memory state" Jul 9 23:49:01.443832 kubelet[2619]: E0709 23:49:01.443786 2619 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 23:49:01.443999 kubelet[2619]: I0709 23:49:01.443980 
2619 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:49:01.444054 kubelet[2619]: I0709 23:49:01.443997 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:49:01.444301 kubelet[2619]: I0709 23:49:01.444281 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:49:01.445515 kubelet[2619]: E0709 23:49:01.445493 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:49:01.512672 kubelet[2619]: I0709 23:49:01.512637 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:01.512989 kubelet[2619]: I0709 23:49:01.512968 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:01.513163 kubelet[2619]: I0709 23:49:01.513025 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.550238 kubelet[2619]: I0709 23:49:01.550195 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:49:01.556995 kubelet[2619]: I0709 23:49:01.556883 2619 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 23:49:01.556995 kubelet[2619]: I0709 23:49:01.556974 2619 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:49:01.597671 kubelet[2619]: I0709 23:49:01.597562 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.597819 kubelet[2619]: I0709 23:49:01.597700 2619 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:01.597819 kubelet[2619]: I0709 23:49:01.597725 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.597819 kubelet[2619]: I0709 23:49:01.597745 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.597819 kubelet[2619]: I0709 23:49:01.597761 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.597819 kubelet[2619]: I0709 23:49:01.597776 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:01.597926 kubelet[2619]: I0709 
23:49:01.597833 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:01.597926 kubelet[2619]: I0709 23:49:01.597883 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:01.597926 kubelet[2619]: I0709 23:49:01.597921 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8da5d9fb6f884d63e72a30c2315ccec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da5d9fb6f884d63e72a30c2315ccec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:02.373609 kubelet[2619]: I0709 23:49:02.373564 2619 apiserver.go:52] "Watching apiserver" Jul 9 23:49:02.395245 kubelet[2619]: I0709 23:49:02.395204 2619 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:49:02.429878 kubelet[2619]: I0709 23:49:02.429760 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:02.436307 kubelet[2619]: E0709 23:49:02.436267 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:02.448281 kubelet[2619]: I0709 23:49:02.448223 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.448206551 podStartE2EDuration="1.448206551s" 
podCreationTimestamp="2025-07-09 23:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:02.447962699 +0000 UTC m=+1.143601540" watchObservedRunningTime="2025-07-09 23:49:02.448206551 +0000 UTC m=+1.143845352" Jul 9 23:49:02.465279 kubelet[2619]: I0709 23:49:02.465159 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.465141464 podStartE2EDuration="1.465141464s" podCreationTimestamp="2025-07-09 23:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:02.455895916 +0000 UTC m=+1.151534757" watchObservedRunningTime="2025-07-09 23:49:02.465141464 +0000 UTC m=+1.160780265" Jul 9 23:49:02.465279 kubelet[2619]: I0709 23:49:02.465245 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.465240933 podStartE2EDuration="1.465240933s" podCreationTimestamp="2025-07-09 23:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:02.463616118 +0000 UTC m=+1.159254999" watchObservedRunningTime="2025-07-09 23:49:02.465240933 +0000 UTC m=+1.160879774" Jul 9 23:49:05.250512 kubelet[2619]: I0709 23:49:05.250458 2619 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:49:05.251148 kubelet[2619]: I0709 23:49:05.250958 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:49:05.251185 containerd[1505]: time="2025-07-09T23:49:05.250791327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 9 23:49:06.341624 systemd[1]: Created slice kubepods-besteffort-podcdcdcd17_3ebe_404b_8923_eb26623a1a69.slice - libcontainer container kubepods-besteffort-podcdcdcd17_3ebe_404b_8923_eb26623a1a69.slice. Jul 9 23:49:06.427939 kubelet[2619]: I0709 23:49:06.427789 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdcdcd17-3ebe-404b-8923-eb26623a1a69-kube-proxy\") pod \"kube-proxy-qsdjf\" (UID: \"cdcdcd17-3ebe-404b-8923-eb26623a1a69\") " pod="kube-system/kube-proxy-qsdjf" Jul 9 23:49:06.429409 kubelet[2619]: I0709 23:49:06.429047 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdcdcd17-3ebe-404b-8923-eb26623a1a69-xtables-lock\") pod \"kube-proxy-qsdjf\" (UID: \"cdcdcd17-3ebe-404b-8923-eb26623a1a69\") " pod="kube-system/kube-proxy-qsdjf" Jul 9 23:49:06.429409 kubelet[2619]: I0709 23:49:06.429169 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkt5l\" (UniqueName: \"kubernetes.io/projected/cdcdcd17-3ebe-404b-8923-eb26623a1a69-kube-api-access-pkt5l\") pod \"kube-proxy-qsdjf\" (UID: \"cdcdcd17-3ebe-404b-8923-eb26623a1a69\") " pod="kube-system/kube-proxy-qsdjf" Jul 9 23:49:06.429409 kubelet[2619]: I0709 23:49:06.429201 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdcdcd17-3ebe-404b-8923-eb26623a1a69-lib-modules\") pod \"kube-proxy-qsdjf\" (UID: \"cdcdcd17-3ebe-404b-8923-eb26623a1a69\") " pod="kube-system/kube-proxy-qsdjf" Jul 9 23:49:06.513048 systemd[1]: Created slice kubepods-besteffort-podb765d782_39e8_47df_9529_3aa934b6eb02.slice - libcontainer container kubepods-besteffort-podb765d782_39e8_47df_9529_3aa934b6eb02.slice. 
Jul 9 23:49:06.530149 kubelet[2619]: I0709 23:49:06.530088 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtwb\" (UniqueName: \"kubernetes.io/projected/b765d782-39e8-47df-9529-3aa934b6eb02-kube-api-access-kxtwb\") pod \"tigera-operator-747864d56d-dhbpb\" (UID: \"b765d782-39e8-47df-9529-3aa934b6eb02\") " pod="tigera-operator/tigera-operator-747864d56d-dhbpb" Jul 9 23:49:06.531033 kubelet[2619]: I0709 23:49:06.530707 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b765d782-39e8-47df-9529-3aa934b6eb02-var-lib-calico\") pod \"tigera-operator-747864d56d-dhbpb\" (UID: \"b765d782-39e8-47df-9529-3aa934b6eb02\") " pod="tigera-operator/tigera-operator-747864d56d-dhbpb" Jul 9 23:49:06.659656 containerd[1505]: time="2025-07-09T23:49:06.659526589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsdjf,Uid:cdcdcd17-3ebe-404b-8923-eb26623a1a69,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:06.676846 containerd[1505]: time="2025-07-09T23:49:06.676797259Z" level=info msg="connecting to shim 14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c" address="unix:///run/containerd/s/6b9993d1e8c5f7ac5bc4775510be44646b6e1c82898ebf2c5175547466008135" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:06.703324 systemd[1]: Started cri-containerd-14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c.scope - libcontainer container 14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c. 
Jul 9 23:49:06.728521 containerd[1505]: time="2025-07-09T23:49:06.728481840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsdjf,Uid:cdcdcd17-3ebe-404b-8923-eb26623a1a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c\"" Jul 9 23:49:06.734693 containerd[1505]: time="2025-07-09T23:49:06.734643503Z" level=info msg="CreateContainer within sandbox \"14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:49:06.743161 containerd[1505]: time="2025-07-09T23:49:06.742695296Z" level=info msg="Container 8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:06.750011 containerd[1505]: time="2025-07-09T23:49:06.749954889Z" level=info msg="CreateContainer within sandbox \"14391612b3d3a90a7de0191503d941058dad526f8da00613836809609a8dcb3c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a\"" Jul 9 23:49:06.751198 containerd[1505]: time="2025-07-09T23:49:06.751157008Z" level=info msg="StartContainer for \"8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a\"" Jul 9 23:49:06.752844 containerd[1505]: time="2025-07-09T23:49:06.752808723Z" level=info msg="connecting to shim 8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a" address="unix:///run/containerd/s/6b9993d1e8c5f7ac5bc4775510be44646b6e1c82898ebf2c5175547466008135" protocol=ttrpc version=3 Jul 9 23:49:06.780345 systemd[1]: Started cri-containerd-8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a.scope - libcontainer container 8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a. 
Jul 9 23:49:06.821150 containerd[1505]: time="2025-07-09T23:49:06.819819088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dhbpb,Uid:b765d782-39e8-47df-9529-3aa934b6eb02,Namespace:tigera-operator,Attempt:0,}" Jul 9 23:49:06.827199 containerd[1505]: time="2025-07-09T23:49:06.827160273Z" level=info msg="StartContainer for \"8645b2fbb5fd2d3c4cd62ae6fed5c7635df12e0c3a3ff548bf4c62c0551dc93a\" returns successfully" Jul 9 23:49:06.857045 containerd[1505]: time="2025-07-09T23:49:06.856928770Z" level=info msg="connecting to shim 42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4" address="unix:///run/containerd/s/85477f3f00d747ef3c6a386bbc5981e36ca1ae8fef409ae497dcee03d70da60e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:06.882375 systemd[1]: Started cri-containerd-42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4.scope - libcontainer container 42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4. Jul 9 23:49:06.922858 containerd[1505]: time="2025-07-09T23:49:06.921620568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dhbpb,Uid:b765d782-39e8-47df-9529-3aa934b6eb02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4\"" Jul 9 23:49:06.924910 containerd[1505]: time="2025-07-09T23:49:06.924863123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 23:49:07.465456 kubelet[2619]: I0709 23:49:07.465358 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qsdjf" podStartSLOduration=1.465335743 podStartE2EDuration="1.465335743s" podCreationTimestamp="2025-07-09 23:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:07.463891163 +0000 UTC m=+6.159529964" watchObservedRunningTime="2025-07-09 23:49:07.465335743 
+0000 UTC m=+6.160974584" Jul 9 23:49:07.549333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673462848.mount: Deactivated successfully. Jul 9 23:49:08.187329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28626559.mount: Deactivated successfully. Jul 9 23:49:09.533013 containerd[1505]: time="2025-07-09T23:49:09.532951138Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:09.533943 containerd[1505]: time="2025-07-09T23:49:09.533742026Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 9 23:49:09.534640 containerd[1505]: time="2025-07-09T23:49:09.534598468Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:09.537328 containerd[1505]: time="2025-07-09T23:49:09.537280343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:09.538060 containerd[1505]: time="2025-07-09T23:49:09.538036274Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.613128155s" Jul 9 23:49:09.538126 containerd[1505]: time="2025-07-09T23:49:09.538065632Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 9 23:49:09.544176 containerd[1505]: time="2025-07-09T23:49:09.544138519Z" level=info msg="CreateContainer within sandbox 
\"42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 23:49:09.555150 containerd[1505]: time="2025-07-09T23:49:09.554562049Z" level=info msg="Container a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:09.560688 containerd[1505]: time="2025-07-09T23:49:09.560635256Z" level=info msg="CreateContainer within sandbox \"42967c36f1ede94eb020cc0cb573dcf3a83a80811f788ca61e28ae9b7a1beee4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0\"" Jul 9 23:49:09.561377 containerd[1505]: time="2025-07-09T23:49:09.561294996Z" level=info msg="StartContainer for \"a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0\"" Jul 9 23:49:09.562408 containerd[1505]: time="2025-07-09T23:49:09.562359259Z" level=info msg="connecting to shim a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0" address="unix:///run/containerd/s/85477f3f00d747ef3c6a386bbc5981e36ca1ae8fef409ae497dcee03d70da60e" protocol=ttrpc version=3 Jul 9 23:49:09.587608 systemd[1]: Started cri-containerd-a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0.scope - libcontainer container a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0. 
Jul 9 23:49:09.624353 containerd[1505]: time="2025-07-09T23:49:09.624296857Z" level=info msg="StartContainer for \"a6dcf58624debbf784f1e2f3736fd8ad49fa286ddaedf69859c352296127fdb0\" returns successfully" Jul 9 23:49:10.492913 kubelet[2619]: I0709 23:49:10.492840 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-dhbpb" podStartSLOduration=1.875935613 podStartE2EDuration="4.492821501s" podCreationTimestamp="2025-07-09 23:49:06 +0000 UTC" firstStartedPulling="2025-07-09 23:49:06.924337816 +0000 UTC m=+5.619976657" lastFinishedPulling="2025-07-09 23:49:09.541223704 +0000 UTC m=+8.236862545" observedRunningTime="2025-07-09 23:49:10.492632838 +0000 UTC m=+9.188271639" watchObservedRunningTime="2025-07-09 23:49:10.492821501 +0000 UTC m=+9.188460342" Jul 9 23:49:15.354387 sudo[1706]: pam_unix(sudo:session): session closed for user root Jul 9 23:49:15.363167 sshd[1705]: Connection closed by 10.0.0.1 port 36678 Jul 9 23:49:15.364477 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:15.369706 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:36678.service: Deactivated successfully. Jul 9 23:49:15.375507 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:49:15.375880 systemd[1]: session-7.scope: Consumed 7.196s CPU time, 222.8M memory peak. Jul 9 23:49:15.377004 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:49:15.378335 systemd-logind[1488]: Removed session 7. Jul 9 23:49:15.422212 update_engine[1490]: I20250709 23:49:15.422144 1490 update_attempter.cc:509] Updating boot flags... Jul 9 23:49:17.201731 systemd[1]: Created slice kubepods-besteffort-pod168175c6_1605_46e0_b1d0_279aac5c586c.slice - libcontainer container kubepods-besteffort-pod168175c6_1605_46e0_b1d0_279aac5c586c.slice. 
Jul 9 23:49:17.229360 kubelet[2619]: I0709 23:49:17.229289 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/168175c6-1605-46e0-b1d0-279aac5c586c-typha-certs\") pod \"calico-typha-6575c55c4b-hjwdb\" (UID: \"168175c6-1605-46e0-b1d0-279aac5c586c\") " pod="calico-system/calico-typha-6575c55c4b-hjwdb" Jul 9 23:49:17.229360 kubelet[2619]: I0709 23:49:17.229347 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168175c6-1605-46e0-b1d0-279aac5c586c-tigera-ca-bundle\") pod \"calico-typha-6575c55c4b-hjwdb\" (UID: \"168175c6-1605-46e0-b1d0-279aac5c586c\") " pod="calico-system/calico-typha-6575c55c4b-hjwdb" Jul 9 23:49:17.229360 kubelet[2619]: I0709 23:49:17.229374 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdb2\" (UniqueName: \"kubernetes.io/projected/168175c6-1605-46e0-b1d0-279aac5c586c-kube-api-access-pgdb2\") pod \"calico-typha-6575c55c4b-hjwdb\" (UID: \"168175c6-1605-46e0-b1d0-279aac5c586c\") " pod="calico-system/calico-typha-6575c55c4b-hjwdb" Jul 9 23:49:17.287066 systemd[1]: Created slice kubepods-besteffort-pod5bfc4934_2c93_4500_ae73_d29425693cae.slice - libcontainer container kubepods-besteffort-pod5bfc4934_2c93_4500_ae73_d29425693cae.slice. 
Jul 9 23:49:17.330074 kubelet[2619]: I0709 23:49:17.329929 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-var-run-calico\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h" Jul 9 23:49:17.330074 kubelet[2619]: I0709 23:49:17.330065 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-xtables-lock\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h" Jul 9 23:49:17.330074 kubelet[2619]: I0709 23:49:17.330087 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-cni-log-dir\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h" Jul 9 23:49:17.330284 kubelet[2619]: I0709 23:49:17.330172 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wrn\" (UniqueName: \"kubernetes.io/projected/5bfc4934-2c93-4500-ae73-d29425693cae-kube-api-access-b4wrn\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h" Jul 9 23:49:17.330284 kubelet[2619]: I0709 23:49:17.330212 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5bfc4934-2c93-4500-ae73-d29425693cae-node-certs\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h" Jul 9 23:49:17.330284 kubelet[2619]: I0709 23:49:17.330245 2619 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-cni-bin-dir\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330284 kubelet[2619]: I0709 23:49:17.330260 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-flexvol-driver-host\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330284 kubelet[2619]: I0709 23:49:17.330277 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-lib-modules\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330394 kubelet[2619]: I0709 23:49:17.330293 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-policysync\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330394 kubelet[2619]: I0709 23:49:17.330325 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bfc4934-2c93-4500-ae73-d29425693cae-tigera-ca-bundle\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330394 kubelet[2619]: I0709 23:49:17.330343 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-var-lib-calico\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.330394 kubelet[2619]: I0709 23:49:17.330366 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5bfc4934-2c93-4500-ae73-d29425693cae-cni-net-dir\") pod \"calico-node-p5x7h\" (UID: \"5bfc4934-2c93-4500-ae73-d29425693cae\") " pod="calico-system/calico-node-p5x7h"
Jul 9 23:49:17.451060 kubelet[2619]: E0709 23:49:17.450296 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.451060 kubelet[2619]: W0709 23:49:17.450328 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.454221 kubelet[2619]: E0709 23:49:17.452893 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.457911 kubelet[2619]: E0709 23:49:17.454651 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.457911 kubelet[2619]: W0709 23:49:17.454682 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.457911 kubelet[2619]: E0709 23:49:17.454704 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.462238 kubelet[2619]: E0709 23:49:17.462193 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.462238 kubelet[2619]: W0709 23:49:17.462221 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.462238 kubelet[2619]: E0709 23:49:17.462244 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.462727 kubelet[2619]: E0709 23:49:17.462685 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8z7" podUID="a1c76dce-4758-43aa-813c-3a4ee32989f0"
Jul 9 23:49:17.494861 kubelet[2619]: E0709 23:49:17.494828 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.494861 kubelet[2619]: W0709 23:49:17.494853 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.495031 kubelet[2619]: E0709 23:49:17.494877 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.495258 kubelet[2619]: E0709 23:49:17.495233 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.503085 kubelet[2619]: W0709 23:49:17.495251 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.503085 kubelet[2619]: E0709 23:49:17.503089 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.504239 kubelet[2619]: E0709 23:49:17.504151 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.504239 kubelet[2619]: W0709 23:49:17.504232 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.504386 kubelet[2619]: E0709 23:49:17.504254 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.504498 kubelet[2619]: E0709 23:49:17.504479 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.504498 kubelet[2619]: W0709 23:49:17.504493 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.504559 kubelet[2619]: E0709 23:49:17.504504 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.504941 kubelet[2619]: E0709 23:49:17.504812 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.504941 kubelet[2619]: W0709 23:49:17.504824 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.504941 kubelet[2619]: E0709 23:49:17.504834 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.505172 kubelet[2619]: E0709 23:49:17.505151 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.505172 kubelet[2619]: W0709 23:49:17.505164 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.505172 kubelet[2619]: E0709 23:49:17.505174 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.505345 kubelet[2619]: E0709 23:49:17.505329 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.505345 kubelet[2619]: W0709 23:49:17.505340 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.505410 kubelet[2619]: E0709 23:49:17.505348 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.505667 kubelet[2619]: E0709 23:49:17.505648 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.505667 kubelet[2619]: W0709 23:49:17.505664 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.505736 kubelet[2619]: E0709 23:49:17.505676 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.506445 kubelet[2619]: E0709 23:49:17.506422 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.506445 kubelet[2619]: W0709 23:49:17.506438 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.506903 kubelet[2619]: E0709 23:49:17.506451 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.506903 kubelet[2619]: E0709 23:49:17.506675 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.506903 kubelet[2619]: W0709 23:49:17.506685 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.506903 kubelet[2619]: E0709 23:49:17.506694 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.507238 kubelet[2619]: E0709 23:49:17.507217 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.507238 kubelet[2619]: W0709 23:49:17.507232 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.507339 kubelet[2619]: E0709 23:49:17.507244 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.507462 kubelet[2619]: E0709 23:49:17.507448 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.507462 kubelet[2619]: W0709 23:49:17.507459 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.507552 kubelet[2619]: E0709 23:49:17.507469 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.507650 kubelet[2619]: E0709 23:49:17.507636 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.507650 kubelet[2619]: W0709 23:49:17.507646 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.507720 kubelet[2619]: E0709 23:49:17.507658 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.507836 kubelet[2619]: E0709 23:49:17.507818 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.507836 kubelet[2619]: W0709 23:49:17.507831 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.507900 kubelet[2619]: E0709 23:49:17.507838 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.507967 kubelet[2619]: E0709 23:49:17.507954 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.507967 kubelet[2619]: W0709 23:49:17.507963 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.508029 kubelet[2619]: E0709 23:49:17.507971 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.508181 kubelet[2619]: E0709 23:49:17.508084 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.508181 kubelet[2619]: W0709 23:49:17.508095 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.508181 kubelet[2619]: E0709 23:49:17.508102 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.508580 kubelet[2619]: E0709 23:49:17.508564 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.508580 kubelet[2619]: W0709 23:49:17.508579 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.508667 kubelet[2619]: E0709 23:49:17.508606 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.508855 kubelet[2619]: E0709 23:49:17.508827 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.508855 kubelet[2619]: W0709 23:49:17.508854 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.508926 kubelet[2619]: E0709 23:49:17.508864 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.509230 kubelet[2619]: E0709 23:49:17.509216 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.509230 kubelet[2619]: W0709 23:49:17.509228 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.509317 kubelet[2619]: E0709 23:49:17.509239 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.509865 kubelet[2619]: E0709 23:49:17.509846 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.509865 kubelet[2619]: W0709 23:49:17.509863 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.509949 kubelet[2619]: E0709 23:49:17.509875 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.517044 containerd[1505]: time="2025-07-09T23:49:17.516985267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6575c55c4b-hjwdb,Uid:168175c6-1605-46e0-b1d0-279aac5c586c,Namespace:calico-system,Attempt:0,}"
Jul 9 23:49:17.533145 kubelet[2619]: E0709 23:49:17.532770 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.533145 kubelet[2619]: W0709 23:49:17.532799 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.533145 kubelet[2619]: E0709 23:49:17.532822 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.533145 kubelet[2619]: I0709 23:49:17.532864 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pslmq\" (UniqueName: \"kubernetes.io/projected/a1c76dce-4758-43aa-813c-3a4ee32989f0-kube-api-access-pslmq\") pod \"csi-node-driver-gp8z7\" (UID: \"a1c76dce-4758-43aa-813c-3a4ee32989f0\") " pod="calico-system/csi-node-driver-gp8z7"
Jul 9 23:49:17.534460 kubelet[2619]: E0709 23:49:17.534432 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.534779 kubelet[2619]: W0709 23:49:17.534614 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.534779 kubelet[2619]: E0709 23:49:17.534653 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.534779 kubelet[2619]: I0709 23:49:17.534692 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a1c76dce-4758-43aa-813c-3a4ee32989f0-varrun\") pod \"csi-node-driver-gp8z7\" (UID: \"a1c76dce-4758-43aa-813c-3a4ee32989f0\") " pod="calico-system/csi-node-driver-gp8z7"
Jul 9 23:49:17.535260 kubelet[2619]: E0709 23:49:17.535237 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.535345 kubelet[2619]: W0709 23:49:17.535330 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.535416 kubelet[2619]: E0709 23:49:17.535405 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.535554 kubelet[2619]: I0709 23:49:17.535514 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1c76dce-4758-43aa-813c-3a4ee32989f0-registration-dir\") pod \"csi-node-driver-gp8z7\" (UID: \"a1c76dce-4758-43aa-813c-3a4ee32989f0\") " pod="calico-system/csi-node-driver-gp8z7"
Jul 9 23:49:17.537038 kubelet[2619]: E0709 23:49:17.536179 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.537038 kubelet[2619]: W0709 23:49:17.536197 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.537038 kubelet[2619]: E0709 23:49:17.536213 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.539850 kubelet[2619]: E0709 23:49:17.539819 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.539979 kubelet[2619]: W0709 23:49:17.539962 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.540041 kubelet[2619]: E0709 23:49:17.540028 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.542275 kubelet[2619]: E0709 23:49:17.541853 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.542551 kubelet[2619]: W0709 23:49:17.542431 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.542715 kubelet[2619]: E0709 23:49:17.542654 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.543304 kubelet[2619]: I0709 23:49:17.543262 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1c76dce-4758-43aa-813c-3a4ee32989f0-socket-dir\") pod \"csi-node-driver-gp8z7\" (UID: \"a1c76dce-4758-43aa-813c-3a4ee32989f0\") " pod="calico-system/csi-node-driver-gp8z7"
Jul 9 23:49:17.543852 kubelet[2619]: E0709 23:49:17.543591 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.543852 kubelet[2619]: W0709 23:49:17.543608 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.543852 kubelet[2619]: E0709 23:49:17.543635 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.544631 kubelet[2619]: E0709 23:49:17.544542 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.544631 kubelet[2619]: W0709 23:49:17.544560 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.544631 kubelet[2619]: E0709 23:49:17.544575 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.545745 kubelet[2619]: E0709 23:49:17.545185 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.545745 kubelet[2619]: W0709 23:49:17.545204 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.545989 kubelet[2619]: E0709 23:49:17.545878 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.546388 kubelet[2619]: E0709 23:49:17.546249 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.546499 kubelet[2619]: W0709 23:49:17.546453 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.546499 kubelet[2619]: E0709 23:49:17.546480 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.548259 kubelet[2619]: E0709 23:49:17.548006 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.548259 kubelet[2619]: W0709 23:49:17.548026 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.548259 kubelet[2619]: E0709 23:49:17.548044 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.549105 kubelet[2619]: E0709 23:49:17.549083 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.549488 kubelet[2619]: W0709 23:49:17.549194 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.549594 kubelet[2619]: E0709 23:49:17.549577 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.550614 kubelet[2619]: E0709 23:49:17.550578 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.550614 kubelet[2619]: W0709 23:49:17.550606 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.550723 kubelet[2619]: E0709 23:49:17.550631 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.550723 kubelet[2619]: I0709 23:49:17.550664 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1c76dce-4758-43aa-813c-3a4ee32989f0-kubelet-dir\") pod \"csi-node-driver-gp8z7\" (UID: \"a1c76dce-4758-43aa-813c-3a4ee32989f0\") " pod="calico-system/csi-node-driver-gp8z7"
Jul 9 23:49:17.551611 kubelet[2619]: E0709 23:49:17.551586 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.552229 kubelet[2619]: W0709 23:49:17.552162 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.552229 kubelet[2619]: E0709 23:49:17.552197 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.552997 kubelet[2619]: E0709 23:49:17.552969 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 23:49:17.552997 kubelet[2619]: W0709 23:49:17.552992 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 23:49:17.553077 kubelet[2619]: E0709 23:49:17.553011 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 23:49:17.564717 containerd[1505]: time="2025-07-09T23:49:17.564649659Z" level=info msg="connecting to shim 4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c" address="unix:///run/containerd/s/2bb173578c4f38cd9acc5b56d9ecc9aa911921cf901fbc3a3269bb8f8e6c9e60" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:49:17.593754 containerd[1505]: time="2025-07-09T23:49:17.593685767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5x7h,Uid:5bfc4934-2c93-4500-ae73-d29425693cae,Namespace:calico-system,Attempt:0,}"
Jul 9 23:49:17.598451 systemd[1]: Started cri-containerd-4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c.scope - libcontainer container 4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c.
Jul 9 23:49:17.618013 containerd[1505]: time="2025-07-09T23:49:17.617309338Z" level=info msg="connecting to shim d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771" address="unix:///run/containerd/s/8e077b3963acbafcba28d9bc8067445c0f623b6c5613c0959eb3ac52c16c957f" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:49:17.639986 containerd[1505]: time="2025-07-09T23:49:17.639936419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6575c55c4b-hjwdb,Uid:168175c6-1605-46e0-b1d0-279aac5c586c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c\""
Jul 9 23:49:17.645743 systemd[1]: Started cri-containerd-d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771.scope - libcontainer container d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771.
Jul 9 23:49:17.652021 kubelet[2619]: E0709 23:49:17.651973 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:17.652021 kubelet[2619]: W0709 23:49:17.651996 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:17.652021 kubelet[2619]: E0709 23:49:17.652032 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:17.652482 kubelet[2619]: E0709 23:49:17.652272 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:17.652482 kubelet[2619]: W0709 23:49:17.652289 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:17.652482 kubelet[2619]: E0709 23:49:17.652300 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:17.652482 kubelet[2619]: E0709 23:49:17.652473 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:17.652482 kubelet[2619]: W0709 23:49:17.652486 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:17.652591 kubelet[2619]: E0709 23:49:17.652495 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:17.653565 kubelet[2619]: E0709 23:49:17.652668 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:17.653565 kubelet[2619]: W0709 23:49:17.652683 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:17.653565 kubelet[2619]: E0709 23:49:17.652691 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:17.653565 kubelet[2619]: E0709 23:49:17.652888 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:17.653565 kubelet[2619]: W0709 23:49:17.652896 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:17.653565 kubelet[2619]: E0709 23:49:17.652905 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:17.659421 containerd[1505]: time="2025-07-09T23:49:17.658293642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 23:49:17.659480 kubelet[2619]: E0709 23:49:17.658818 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:17.761182 containerd[1505]: time="2025-07-09T23:49:17.759426096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5x7h,Uid:5bfc4934-2c93-4500-ae73-d29425693cae,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\"" Jul 9 23:49:19.088701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065988383.mount: Deactivated successfully. 
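The repeated burst above comes from kubelet's FlexVolume dynamic-plugin probe: it finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but cannot execute the `uds` driver inside it ("executable file not found in $PATH"), so each `init` call returns empty output and the JSON unmarshal fails with "unexpected end of JSON input". A minimal diagnostic sketch, assuming shell access on the affected node (the helper name `check_flex_driver` is hypothetical, not part of any tool in the log):

```shell
# Check whether the FlexVolume driver binary kubelet is probing exists
# and is executable; an empty or non-executable driver reproduces the
# "unexpected end of JSON input" probe failures seen in the log.
check_flex_driver() {
  dir="$1"
  driver="$2"
  if [ -x "$dir/$driver" ]; then
    echo "ok: $dir/$driver is executable"
  else
    echo "missing: $dir/$driver (kubelet probe will fail as in the log)"
  fi
}

# The path below is the one kubelet reports in the errors above.
check_flex_driver /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds uds
```

If the driver is genuinely unused, removing the empty `nodeagent~uds` directory (or installing the intended `uds` binary with execute permission) stops the probe spam.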
Jul 9 23:49:19.415201 kubelet[2619]: E0709 23:49:19.414271 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8z7" podUID="a1c76dce-4758-43aa-813c-3a4ee32989f0" Jul 9 23:49:19.517168 containerd[1505]: time="2025-07-09T23:49:19.516972479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:19.517617 containerd[1505]: time="2025-07-09T23:49:19.517558920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 9 23:49:19.518397 containerd[1505]: time="2025-07-09T23:49:19.518369786Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:19.520869 containerd[1505]: time="2025-07-09T23:49:19.520780066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:19.521822 containerd[1505]: time="2025-07-09T23:49:19.521777800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.863428722s" Jul 9 23:49:19.521822 containerd[1505]: time="2025-07-09T23:49:19.521810518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 9 23:49:19.523308 containerd[1505]: time="2025-07-09T23:49:19.523262661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 23:49:19.545559 containerd[1505]: time="2025-07-09T23:49:19.545514226Z" level=info msg="CreateContainer within sandbox \"4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 23:49:19.616448 containerd[1505]: time="2025-07-09T23:49:19.615421510Z" level=info msg="Container 83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:19.629429 containerd[1505]: time="2025-07-09T23:49:19.629384624Z" level=info msg="CreateContainer within sandbox \"4fb2e18e55e06a1cb329bc6b454deeb0ff80b781b7f2bfa2dd83e2a9b68cae8c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd\"" Jul 9 23:49:19.630537 containerd[1505]: time="2025-07-09T23:49:19.630466433Z" level=info msg="StartContainer for \"83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd\"" Jul 9 23:49:19.631967 containerd[1505]: time="2025-07-09T23:49:19.631922976Z" level=info msg="connecting to shim 83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd" address="unix:///run/containerd/s/2bb173578c4f38cd9acc5b56d9ecc9aa911921cf901fbc3a3269bb8f8e6c9e60" protocol=ttrpc version=3 Jul 9 23:49:19.662338 systemd[1]: Started cri-containerd-83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd.scope - libcontainer container 83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd. 
Jul 9 23:49:19.713255 containerd[1505]: time="2025-07-09T23:49:19.713211626Z" level=info msg="StartContainer for \"83a2bc342a8e4c398690a7d25b754e206e17d60ae797834cee5a7c2e2b1763fd\" returns successfully" Jul 9 23:49:20.581566 kubelet[2619]: I0709 23:49:20.581482 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6575c55c4b-hjwdb" podStartSLOduration=1.715190168 podStartE2EDuration="3.581465012s" podCreationTimestamp="2025-07-09 23:49:17 +0000 UTC" firstStartedPulling="2025-07-09 23:49:17.656135514 +0000 UTC m=+16.351774355" lastFinishedPulling="2025-07-09 23:49:19.522410358 +0000 UTC m=+18.218049199" observedRunningTime="2025-07-09 23:49:20.580454237 +0000 UTC m=+19.276093158" watchObservedRunningTime="2025-07-09 23:49:20.581465012 +0000 UTC m=+19.277103933" Jul 9 23:49:20.628254 kubelet[2619]: E0709 23:49:20.626654 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.628254 kubelet[2619]: W0709 23:49:20.626680 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.628254 kubelet[2619]: E0709 23:49:20.626703 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.689931 kubelet[2619]: E0709 23:49:20.688944 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.689931 kubelet[2619]: W0709 23:49:20.688955 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.689931 kubelet[2619]: E0709 23:49:20.688966 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:20.689931 kubelet[2619]: E0709 23:49:20.689171 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.690179 kubelet[2619]: W0709 23:49:20.689179 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.690179 kubelet[2619]: E0709 23:49:20.689189 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.690179 kubelet[2619]: E0709 23:49:20.689354 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.690179 kubelet[2619]: W0709 23:49:20.689362 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.690179 kubelet[2619]: E0709 23:49:20.689406 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:20.690179 kubelet[2619]: E0709 23:49:20.689686 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.690179 kubelet[2619]: W0709 23:49:20.689702 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.690179 kubelet[2619]: E0709 23:49:20.689715 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.692388 kubelet[2619]: E0709 23:49:20.692216 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.692388 kubelet[2619]: W0709 23:49:20.692242 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.692388 kubelet[2619]: E0709 23:49:20.692263 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:20.692588 kubelet[2619]: E0709 23:49:20.692573 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.692779 kubelet[2619]: W0709 23:49:20.692641 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.692779 kubelet[2619]: E0709 23:49:20.692660 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.692925 kubelet[2619]: E0709 23:49:20.692908 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.692984 kubelet[2619]: W0709 23:49:20.692971 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.693038 kubelet[2619]: E0709 23:49:20.693027 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:20.693370 kubelet[2619]: E0709 23:49:20.693329 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.693370 kubelet[2619]: W0709 23:49:20.693345 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.693370 kubelet[2619]: E0709 23:49:20.693357 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.694020 kubelet[2619]: E0709 23:49:20.693802 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.694020 kubelet[2619]: W0709 23:49:20.693819 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.694020 kubelet[2619]: E0709 23:49:20.693834 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:49:20.694262 kubelet[2619]: E0709 23:49:20.694231 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:49:20.694262 kubelet[2619]: W0709 23:49:20.694246 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:49:20.694262 kubelet[2619]: E0709 23:49:20.694258 2619 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:49:20.863377 containerd[1505]: time="2025-07-09T23:49:20.863183595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:20.864756 containerd[1505]: time="2025-07-09T23:49:20.864017061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 9 23:49:20.870212 containerd[1505]: time="2025-07-09T23:49:20.870082391Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:20.872021 containerd[1505]: time="2025-07-09T23:49:20.871991309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:20.872651 containerd[1505]: time="2025-07-09T23:49:20.872622748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.34931761s" Jul 9 23:49:20.872701 containerd[1505]: time="2025-07-09T23:49:20.872658346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 9 23:49:20.877134 containerd[1505]: time="2025-07-09T23:49:20.877063543Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 23:49:20.904508 containerd[1505]: time="2025-07-09T23:49:20.903640796Z" level=info msg="Container 5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:20.916900 containerd[1505]: time="2025-07-09T23:49:20.916832308Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\"" Jul 9 23:49:20.919744 containerd[1505]: time="2025-07-09T23:49:20.919682005Z" level=info msg="StartContainer for \"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\"" Jul 9 23:49:20.921294 containerd[1505]: time="2025-07-09T23:49:20.921250624Z" level=info msg="connecting to shim 5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5" address="unix:///run/containerd/s/8e077b3963acbafcba28d9bc8067445c0f623b6c5613c0959eb3ac52c16c957f" protocol=ttrpc version=3 Jul 9 23:49:21.010366 systemd[1]: Started cri-containerd-5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5.scope - libcontainer container 5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5. Jul 9 23:49:21.078638 containerd[1505]: time="2025-07-09T23:49:21.078588713Z" level=info msg="StartContainer for \"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\" returns successfully" Jul 9 23:49:21.083279 systemd[1]: cri-containerd-5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5.scope: Deactivated successfully. 
Jul 9 23:49:21.114125 containerd[1505]: time="2025-07-09T23:49:21.113977070Z" level=info msg="received exit event container_id:\"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\" id:\"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\" pid:3321 exited_at:{seconds:1752104961 nanos:102185084}" Jul 9 23:49:21.115508 containerd[1505]: time="2025-07-09T23:49:21.115378983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\" id:\"5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5\" pid:3321 exited_at:{seconds:1752104961 nanos:102185084}" Jul 9 23:49:21.174196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f51b8279e22384910cbc33819479634f4184a7a5a49b8303caf2a71efa239f5-rootfs.mount: Deactivated successfully. Jul 9 23:49:21.415239 kubelet[2619]: E0709 23:49:21.415055 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8z7" podUID="a1c76dce-4758-43aa-813c-3a4ee32989f0" Jul 9 23:49:21.528502 kubelet[2619]: I0709 23:49:21.528460 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:21.529567 containerd[1505]: time="2025-07-09T23:49:21.529525930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 23:49:23.435982 kubelet[2619]: E0709 23:49:23.435937 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8z7" podUID="a1c76dce-4758-43aa-813c-3a4ee32989f0" Jul 9 23:49:23.614459 containerd[1505]: time="2025-07-09T23:49:23.614416037Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:23.615608 containerd[1505]: time="2025-07-09T23:49:23.615522372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 9 23:49:23.616435 containerd[1505]: time="2025-07-09T23:49:23.616383722Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:23.618698 containerd[1505]: time="2025-07-09T23:49:23.618203696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:23.618881 containerd[1505]: time="2025-07-09T23:49:23.618848698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.08912862s" Jul 9 23:49:23.618930 containerd[1505]: time="2025-07-09T23:49:23.618881376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 9 23:49:23.624443 containerd[1505]: time="2025-07-09T23:49:23.624362456Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 23:49:23.631831 containerd[1505]: time="2025-07-09T23:49:23.631616312Z" level=info msg="Container e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75: CDI devices from CRI Config.CDIDevices: []" 
Jul 9 23:49:23.650382 containerd[1505]: time="2025-07-09T23:49:23.650337699Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\"" Jul 9 23:49:23.652180 containerd[1505]: time="2025-07-09T23:49:23.651457793Z" level=info msg="StartContainer for \"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\"" Jul 9 23:49:23.653960 containerd[1505]: time="2025-07-09T23:49:23.653911130Z" level=info msg="connecting to shim e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75" address="unix:///run/containerd/s/8e077b3963acbafcba28d9bc8067445c0f623b6c5613c0959eb3ac52c16c957f" protocol=ttrpc version=3 Jul 9 23:49:23.692315 systemd[1]: Started cri-containerd-e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75.scope - libcontainer container e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75. Jul 9 23:49:23.733909 containerd[1505]: time="2025-07-09T23:49:23.733846062Z" level=info msg="StartContainer for \"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\" returns successfully" Jul 9 23:49:24.442351 systemd[1]: cri-containerd-e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75.scope: Deactivated successfully. Jul 9 23:49:24.442663 systemd[1]: cri-containerd-e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75.scope: Consumed 513ms CPU time, 178.3M memory peak, 3.7M read from disk, 165.8M written to disk. 
Jul 9 23:49:24.456538 containerd[1505]: time="2025-07-09T23:49:24.456498169Z" level=info msg="received exit event container_id:\"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\" id:\"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\" pid:3385 exited_at:{seconds:1752104964 nanos:456271102}" Jul 9 23:49:24.456801 containerd[1505]: time="2025-07-09T23:49:24.456632442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\" id:\"e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75\" pid:3385 exited_at:{seconds:1752104964 nanos:456271102}" Jul 9 23:49:24.481467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e170206a5e975ae864bd38d3cc2da4355c09c71d160d1eee9ea8887909847c75-rootfs.mount: Deactivated successfully. Jul 9 23:49:24.498979 kubelet[2619]: I0709 23:49:24.498936 2619 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:49:24.581796 systemd[1]: Created slice kubepods-besteffort-pod381a0306_41d5_4f71_bfd4_ee80e0272fef.slice - libcontainer container kubepods-besteffort-pod381a0306_41d5_4f71_bfd4_ee80e0272fef.slice. Jul 9 23:49:24.594430 systemd[1]: Created slice kubepods-besteffort-podb356b2b4_7758_4ace_9446_60f3ea47c743.slice - libcontainer container kubepods-besteffort-podb356b2b4_7758_4ace_9446_60f3ea47c743.slice. Jul 9 23:49:24.601565 systemd[1]: Created slice kubepods-besteffort-pod7873676d_bab6_438f_81fb_991448bf022b.slice - libcontainer container kubepods-besteffort-pod7873676d_bab6_438f_81fb_991448bf022b.slice. Jul 9 23:49:24.610216 systemd[1]: Created slice kubepods-burstable-pod20e4c898_92b0_4a0b_be4c_bd32b26e85d2.slice - libcontainer container kubepods-burstable-pod20e4c898_92b0_4a0b_be4c_bd32b26e85d2.slice. 
Jul 9 23:49:24.618106 kubelet[2619]: I0709 23:49:24.617300 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b356b2b4-7758-4ace-9446-60f3ea47c743-calico-apiserver-certs\") pod \"calico-apiserver-7b57986d96-zmxgv\" (UID: \"b356b2b4-7758-4ace-9446-60f3ea47c743\") " pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" Jul 9 23:49:24.619452 kubelet[2619]: I0709 23:49:24.619376 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2qfj\" (UniqueName: \"kubernetes.io/projected/b356b2b4-7758-4ace-9446-60f3ea47c743-kube-api-access-s2qfj\") pod \"calico-apiserver-7b57986d96-zmxgv\" (UID: \"b356b2b4-7758-4ace-9446-60f3ea47c743\") " pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" Jul 9 23:49:24.619452 kubelet[2619]: I0709 23:49:24.619424 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a0306-41d5-4f71-bfd4-ee80e0272fef-tigera-ca-bundle\") pod \"calico-kube-controllers-5dd7ff9649-lmhwq\" (UID: \"381a0306-41d5-4f71-bfd4-ee80e0272fef\") " pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" Jul 9 23:49:24.619552 kubelet[2619]: I0709 23:49:24.619500 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h95v9\" (UniqueName: \"kubernetes.io/projected/381a0306-41d5-4f71-bfd4-ee80e0272fef-kube-api-access-h95v9\") pod \"calico-kube-controllers-5dd7ff9649-lmhwq\" (UID: \"381a0306-41d5-4f71-bfd4-ee80e0272fef\") " pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" Jul 9 23:49:24.620653 systemd[1]: Created slice kubepods-besteffort-pod20a13ece_652a_4dc0_900e_33e27bb0e9cb.slice - libcontainer container kubepods-besteffort-pod20a13ece_652a_4dc0_900e_33e27bb0e9cb.slice. 
Jul 9 23:49:24.626883 systemd[1]: Created slice kubepods-burstable-pod83aae808_6f16_47f6_8edc_3bf2373c863d.slice - libcontainer container kubepods-burstable-pod83aae808_6f16_47f6_8edc_3bf2373c863d.slice. Jul 9 23:49:24.631677 systemd[1]: Created slice kubepods-besteffort-pod8b8c0181_4489_4422_98b9_8db8e4a479eb.slice - libcontainer container kubepods-besteffort-pod8b8c0181_4489_4422_98b9_8db8e4a479eb.slice. Jul 9 23:49:24.719950 kubelet[2619]: I0709 23:49:24.719835 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92dl\" (UniqueName: \"kubernetes.io/projected/83aae808-6f16-47f6-8edc-3bf2373c863d-kube-api-access-j92dl\") pod \"coredns-674b8bbfcf-clg8f\" (UID: \"83aae808-6f16-47f6-8edc-3bf2373c863d\") " pod="kube-system/coredns-674b8bbfcf-clg8f" Jul 9 23:49:24.720207 kubelet[2619]: I0709 23:49:24.720186 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e4c898-92b0-4a0b-be4c-bd32b26e85d2-config-volume\") pod \"coredns-674b8bbfcf-qfz5c\" (UID: \"20e4c898-92b0-4a0b-be4c-bd32b26e85d2\") " pod="kube-system/coredns-674b8bbfcf-qfz5c" Jul 9 23:49:24.720431 kubelet[2619]: I0709 23:49:24.720283 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxf7\" (UniqueName: \"kubernetes.io/projected/20e4c898-92b0-4a0b-be4c-bd32b26e85d2-kube-api-access-ffxf7\") pod \"coredns-674b8bbfcf-qfz5c\" (UID: \"20e4c898-92b0-4a0b-be4c-bd32b26e85d2\") " pod="kube-system/coredns-674b8bbfcf-qfz5c" Jul 9 23:49:24.720543 kubelet[2619]: I0709 23:49:24.720525 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a13ece-652a-4dc0-900e-33e27bb0e9cb-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-crqbt\" (UID: \"20a13ece-652a-4dc0-900e-33e27bb0e9cb\") " 
pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:24.722105 kubelet[2619]: I0709 23:49:24.722079 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5dxc\" (UniqueName: \"kubernetes.io/projected/8b8c0181-4489-4422-98b9-8db8e4a479eb-kube-api-access-w5dxc\") pod \"whisker-768789c9b4-5d45j\" (UID: \"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " pod="calico-system/whisker-768789c9b4-5d45j" Jul 9 23:49:24.722251 kubelet[2619]: I0709 23:49:24.722235 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83aae808-6f16-47f6-8edc-3bf2373c863d-config-volume\") pod \"coredns-674b8bbfcf-clg8f\" (UID: \"83aae808-6f16-47f6-8edc-3bf2373c863d\") " pod="kube-system/coredns-674b8bbfcf-clg8f" Jul 9 23:49:24.722317 kubelet[2619]: I0709 23:49:24.722305 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7873676d-bab6-438f-81fb-991448bf022b-calico-apiserver-certs\") pod \"calico-apiserver-7b57986d96-pcljx\" (UID: \"7873676d-bab6-438f-81fb-991448bf022b\") " pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" Jul 9 23:49:24.722377 kubelet[2619]: I0709 23:49:24.722365 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a13ece-652a-4dc0-900e-33e27bb0e9cb-config\") pod \"goldmane-768f4c5c69-crqbt\" (UID: \"20a13ece-652a-4dc0-900e-33e27bb0e9cb\") " pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:24.722479 kubelet[2619]: I0709 23:49:24.722465 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-ca-bundle\") pod \"whisker-768789c9b4-5d45j\" (UID: 
\"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " pod="calico-system/whisker-768789c9b4-5d45j" Jul 9 23:49:24.722561 kubelet[2619]: I0709 23:49:24.722547 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-backend-key-pair\") pod \"whisker-768789c9b4-5d45j\" (UID: \"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " pod="calico-system/whisker-768789c9b4-5d45j" Jul 9 23:49:24.722619 kubelet[2619]: I0709 23:49:24.722608 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4d79\" (UniqueName: \"kubernetes.io/projected/7873676d-bab6-438f-81fb-991448bf022b-kube-api-access-k4d79\") pod \"calico-apiserver-7b57986d96-pcljx\" (UID: \"7873676d-bab6-438f-81fb-991448bf022b\") " pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" Jul 9 23:49:24.722702 kubelet[2619]: I0709 23:49:24.722687 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/20a13ece-652a-4dc0-900e-33e27bb0e9cb-goldmane-key-pair\") pod \"goldmane-768f4c5c69-crqbt\" (UID: \"20a13ece-652a-4dc0-900e-33e27bb0e9cb\") " pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:24.722775 kubelet[2619]: I0709 23:49:24.722761 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skx7f\" (UniqueName: \"kubernetes.io/projected/20a13ece-652a-4dc0-900e-33e27bb0e9cb-kube-api-access-skx7f\") pod \"goldmane-768f4c5c69-crqbt\" (UID: \"20a13ece-652a-4dc0-900e-33e27bb0e9cb\") " pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:24.889417 containerd[1505]: time="2025-07-09T23:49:24.889376718Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5dd7ff9649-lmhwq,Uid:381a0306-41d5-4f71-bfd4-ee80e0272fef,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:24.898946 containerd[1505]: time="2025-07-09T23:49:24.898901699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-zmxgv,Uid:b356b2b4-7758-4ace-9446-60f3ea47c743,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:49:24.907246 containerd[1505]: time="2025-07-09T23:49:24.907204109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-pcljx,Uid:7873676d-bab6-438f-81fb-991448bf022b,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:49:24.917156 containerd[1505]: time="2025-07-09T23:49:24.917037633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfz5c,Uid:20e4c898-92b0-4a0b-be4c-bd32b26e85d2,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:24.928593 containerd[1505]: time="2025-07-09T23:49:24.924468732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-crqbt,Uid:20a13ece-652a-4dc0-900e-33e27bb0e9cb,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:24.932882 containerd[1505]: time="2025-07-09T23:49:24.932809460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clg8f,Uid:83aae808-6f16-47f6-8edc-3bf2373c863d,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:24.936757 containerd[1505]: time="2025-07-09T23:49:24.935612582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768789c9b4-5d45j,Uid:8b8c0181-4489-4422-98b9-8db8e4a479eb,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:25.376864 containerd[1505]: time="2025-07-09T23:49:25.376805126Z" level=error msg="Failed to destroy network for sandbox \"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
9 23:49:25.378516 containerd[1505]: time="2025-07-09T23:49:25.378429957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd7ff9649-lmhwq,Uid:381a0306-41d5-4f71-bfd4-ee80e0272fef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.384542 kubelet[2619]: E0709 23:49:25.384478 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.384773 kubelet[2619]: E0709 23:49:25.384755 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" Jul 9 23:49:25.384863 kubelet[2619]: E0709 23:49:25.384847 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" Jul 9 23:49:25.384983 kubelet[2619]: E0709 23:49:25.384958 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dd7ff9649-lmhwq_calico-system(381a0306-41d5-4f71-bfd4-ee80e0272fef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dd7ff9649-lmhwq_calico-system(381a0306-41d5-4f71-bfd4-ee80e0272fef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbd79320e000695979742f6b38559e0d86b06a2876530ac27cd96ae46131522f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" podUID="381a0306-41d5-4f71-bfd4-ee80e0272fef" Jul 9 23:49:25.387024 containerd[1505]: time="2025-07-09T23:49:25.386974128Z" level=error msg="Failed to destroy network for sandbox \"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.390016 containerd[1505]: time="2025-07-09T23:49:25.389869970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clg8f,Uid:83aae808-6f16-47f6-8edc-3bf2373c863d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.391822 kubelet[2619]: E0709 23:49:25.391781 2619 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.391991 kubelet[2619]: E0709 23:49:25.391969 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clg8f" Jul 9 23:49:25.392079 kubelet[2619]: E0709 23:49:25.392063 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clg8f" Jul 9 23:49:25.392261 kubelet[2619]: E0709 23:49:25.392190 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-clg8f_kube-system(83aae808-6f16-47f6-8edc-3bf2373c863d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-clg8f_kube-system(83aae808-6f16-47f6-8edc-3bf2373c863d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"034b617faa7d32714569718b318037488a0111604b7053f3a7b8d9510f4cb9d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-clg8f" podUID="83aae808-6f16-47f6-8edc-3bf2373c863d" Jul 9 23:49:25.395427 containerd[1505]: time="2025-07-09T23:49:25.395352349Z" level=error msg="Failed to destroy network for sandbox \"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.400059 containerd[1505]: time="2025-07-09T23:49:25.399983175Z" level=error msg="Failed to destroy network for sandbox \"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.401623 containerd[1505]: time="2025-07-09T23:49:25.401555889Z" level=error msg="Failed to destroy network for sandbox \"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.403560 containerd[1505]: time="2025-07-09T23:49:25.403518382Z" level=error msg="Failed to destroy network for sandbox \"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.404318 containerd[1505]: time="2025-07-09T23:49:25.404272540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-crqbt,Uid:20a13ece-652a-4dc0-900e-33e27bb0e9cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.404702 kubelet[2619]: E0709 23:49:25.404652 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.404855 kubelet[2619]: E0709 23:49:25.404832 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:25.404894 kubelet[2619]: E0709 23:49:25.404863 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-crqbt" Jul 9 23:49:25.404939 kubelet[2619]: E0709 23:49:25.404916 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-crqbt_calico-system(20a13ece-652a-4dc0-900e-33e27bb0e9cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-768f4c5c69-crqbt_calico-system(20a13ece-652a-4dc0-900e-33e27bb0e9cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84a6a18718d4056cdbeb30fa88b3d5c3e1d368925f37ed4c46d12fa73247665d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-crqbt" podUID="20a13ece-652a-4dc0-900e-33e27bb0e9cb" Jul 9 23:49:25.406476 containerd[1505]: time="2025-07-09T23:49:25.406438822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-zmxgv,Uid:b356b2b4-7758-4ace-9446-60f3ea47c743,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.406906 kubelet[2619]: E0709 23:49:25.406879 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.407192 kubelet[2619]: E0709 23:49:25.407162 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" Jul 9 23:49:25.407320 kubelet[2619]: E0709 23:49:25.407282 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" Jul 9 23:49:25.407708 kubelet[2619]: E0709 23:49:25.407390 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b57986d96-zmxgv_calico-apiserver(b356b2b4-7758-4ace-9446-60f3ea47c743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b57986d96-zmxgv_calico-apiserver(b356b2b4-7758-4ace-9446-60f3ea47c743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a34205d4711e660d959429266a6a3bd1c31d16360c42fe46f5872e9bc89c34c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" podUID="b356b2b4-7758-4ace-9446-60f3ea47c743" Jul 9 23:49:25.408224 containerd[1505]: time="2025-07-09T23:49:25.407648595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-pcljx,Uid:7873676d-bab6-438f-81fb-991448bf022b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 
23:49:25.408326 kubelet[2619]: E0709 23:49:25.408300 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.408359 kubelet[2619]: E0709 23:49:25.408340 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" Jul 9 23:49:25.408383 kubelet[2619]: E0709 23:49:25.408362 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" Jul 9 23:49:25.408457 kubelet[2619]: E0709 23:49:25.408397 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b57986d96-pcljx_calico-apiserver(7873676d-bab6-438f-81fb-991448bf022b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b57986d96-pcljx_calico-apiserver(7873676d-bab6-438f-81fb-991448bf022b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb4f9e843446e5fe2ee626ce427c716f987664ca0886fe49b7849ffd5248ebe4\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" podUID="7873676d-bab6-438f-81fb-991448bf022b" Jul 9 23:49:25.408569 containerd[1505]: time="2025-07-09T23:49:25.408455351Z" level=error msg="Failed to destroy network for sandbox \"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.409290 containerd[1505]: time="2025-07-09T23:49:25.409170312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfz5c,Uid:20e4c898-92b0-4a0b-be4c-bd32b26e85d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.409753 kubelet[2619]: E0709 23:49:25.409581 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.409753 kubelet[2619]: E0709 23:49:25.409630 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qfz5c" Jul 9 23:49:25.409753 kubelet[2619]: E0709 23:49:25.409648 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qfz5c" Jul 9 23:49:25.409882 kubelet[2619]: E0709 23:49:25.409712 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qfz5c_kube-system(20e4c898-92b0-4a0b-be4c-bd32b26e85d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qfz5c_kube-system(20e4c898-92b0-4a0b-be4c-bd32b26e85d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aabe709c8d3bdbf4c9a0460c89806bf295b70dafc1b30061a2e0f3fd17904bdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qfz5c" podUID="20e4c898-92b0-4a0b-be4c-bd32b26e85d2" Jul 9 23:49:25.410248 containerd[1505]: time="2025-07-09T23:49:25.410155258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768789c9b4-5d45j,Uid:8b8c0181-4489-4422-98b9-8db8e4a479eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.410521 kubelet[2619]: E0709 23:49:25.410451 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.410521 kubelet[2619]: E0709 23:49:25.410492 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768789c9b4-5d45j" Jul 9 23:49:25.410673 kubelet[2619]: E0709 23:49:25.410640 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768789c9b4-5d45j" Jul 9 23:49:25.410783 kubelet[2619]: E0709 23:49:25.410732 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-768789c9b4-5d45j_calico-system(8b8c0181-4489-4422-98b9-8db8e4a479eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-768789c9b4-5d45j_calico-system(8b8c0181-4489-4422-98b9-8db8e4a479eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1b05302771e55accd748aea08e79cbf2a4e773e3d0772b591b57cebd3c2bda2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-768789c9b4-5d45j" podUID="8b8c0181-4489-4422-98b9-8db8e4a479eb" Jul 9 23:49:25.419479 systemd[1]: Created slice kubepods-besteffort-poda1c76dce_4758_43aa_813c_3a4ee32989f0.slice - libcontainer container kubepods-besteffort-poda1c76dce_4758_43aa_813c_3a4ee32989f0.slice. Jul 9 23:49:25.422050 containerd[1505]: time="2025-07-09T23:49:25.421937492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8z7,Uid:a1c76dce-4758-43aa-813c-3a4ee32989f0,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:25.471377 containerd[1505]: time="2025-07-09T23:49:25.471263709Z" level=error msg="Failed to destroy network for sandbox \"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.472560 containerd[1505]: time="2025-07-09T23:49:25.472515880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8z7,Uid:a1c76dce-4758-43aa-813c-3a4ee32989f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.472957 kubelet[2619]: E0709 23:49:25.472913 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:49:25.473019 kubelet[2619]: E0709 23:49:25.472983 2619 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp8z7" Jul 9 23:49:25.473019 kubelet[2619]: E0709 23:49:25.473004 2619 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp8z7" Jul 9 23:49:25.473096 kubelet[2619]: E0709 23:49:25.473057 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gp8z7_calico-system(a1c76dce-4758-43aa-813c-3a4ee32989f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gp8z7_calico-system(a1c76dce-4758-43aa-813c-3a4ee32989f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52ad66781ef761d023727d3bd3c7588f6f386a1a183384cc604a857f4bf6a0ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gp8z7" 
podUID="a1c76dce-4758-43aa-813c-3a4ee32989f0" Jul 9 23:49:25.547001 containerd[1505]: time="2025-07-09T23:49:25.546959320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 23:49:28.687075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152879796.mount: Deactivated successfully. Jul 9 23:49:29.045500 containerd[1505]: time="2025-07-09T23:49:29.045259927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 9 23:49:29.049298 containerd[1505]: time="2025-07-09T23:49:29.049153379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:29.049883 containerd[1505]: time="2025-07-09T23:49:29.049848986Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:29.050380 containerd[1505]: time="2025-07-09T23:49:29.050357481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:29.051421 containerd[1505]: time="2025-07-09T23:49:29.050999010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.503996533s" Jul 9 23:49:29.051421 containerd[1505]: time="2025-07-09T23:49:29.051035528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 9 23:49:29.070316 containerd[1505]: 
time="2025-07-09T23:49:29.070262520Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 23:49:29.083159 containerd[1505]: time="2025-07-09T23:49:29.081875680Z" level=info msg="Container ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:29.094739 containerd[1505]: time="2025-07-09T23:49:29.094681621Z" level=info msg="CreateContainer within sandbox \"d5cf541bc6c153ff56e7102832becb97eb5b916b6583790c1b8b5e1eafb60771\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\"" Jul 9 23:49:29.096082 containerd[1505]: time="2025-07-09T23:49:29.096035036Z" level=info msg="StartContainer for \"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\"" Jul 9 23:49:29.098458 containerd[1505]: time="2025-07-09T23:49:29.098291287Z" level=info msg="connecting to shim ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f" address="unix:///run/containerd/s/8e077b3963acbafcba28d9bc8067445c0f623b6c5613c0959eb3ac52c16c957f" protocol=ttrpc version=3 Jul 9 23:49:29.124342 systemd[1]: Started cri-containerd-ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f.scope - libcontainer container ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f. Jul 9 23:49:29.172398 containerd[1505]: time="2025-07-09T23:49:29.172289755Z" level=info msg="StartContainer for \"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\" returns successfully" Jul 9 23:49:29.441042 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 23:49:29.441216 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 9 23:49:29.583413 kubelet[2619]: I0709 23:49:29.583252 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p5x7h" podStartSLOduration=1.291934239 podStartE2EDuration="12.583234198s" podCreationTimestamp="2025-07-09 23:49:17 +0000 UTC" firstStartedPulling="2025-07-09 23:49:17.760804558 +0000 UTC m=+16.456443399" lastFinishedPulling="2025-07-09 23:49:29.052104557 +0000 UTC m=+27.747743358" observedRunningTime="2025-07-09 23:49:29.581338969 +0000 UTC m=+28.276977810" watchObservedRunningTime="2025-07-09 23:49:29.583234198 +0000 UTC m=+28.278872999" Jul 9 23:49:29.783512 kubelet[2619]: I0709 23:49:29.782621 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5dxc\" (UniqueName: \"kubernetes.io/projected/8b8c0181-4489-4422-98b9-8db8e4a479eb-kube-api-access-w5dxc\") pod \"8b8c0181-4489-4422-98b9-8db8e4a479eb\" (UID: \"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " Jul 9 23:49:29.783512 kubelet[2619]: I0709 23:49:29.782711 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-ca-bundle\") pod \"8b8c0181-4489-4422-98b9-8db8e4a479eb\" (UID: \"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " Jul 9 23:49:29.783512 kubelet[2619]: I0709 23:49:29.783306 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-backend-key-pair\") pod \"8b8c0181-4489-4422-98b9-8db8e4a479eb\" (UID: \"8b8c0181-4489-4422-98b9-8db8e4a479eb\") " Jul 9 23:49:29.793281 systemd[1]: var-lib-kubelet-pods-8b8c0181\x2d4489\x2d4422\x2d98b9\x2d8db8e4a479eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5dxc.mount: Deactivated successfully. 
Jul 9 23:49:29.795449 kubelet[2619]: I0709 23:49:29.795358 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c0181-4489-4422-98b9-8db8e4a479eb-kube-api-access-w5dxc" (OuterVolumeSpecName: "kube-api-access-w5dxc") pod "8b8c0181-4489-4422-98b9-8db8e4a479eb" (UID: "8b8c0181-4489-4422-98b9-8db8e4a479eb"). InnerVolumeSpecName "kube-api-access-w5dxc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:49:29.796822 systemd[1]: var-lib-kubelet-pods-8b8c0181\x2d4489\x2d4422\x2d98b9\x2d8db8e4a479eb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 9 23:49:29.797624 kubelet[2619]: I0709 23:49:29.797577 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b8c0181-4489-4422-98b9-8db8e4a479eb" (UID: "8b8c0181-4489-4422-98b9-8db8e4a479eb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 23:49:29.799508 kubelet[2619]: I0709 23:49:29.799467 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b8c0181-4489-4422-98b9-8db8e4a479eb" (UID: "8b8c0181-4489-4422-98b9-8db8e4a479eb"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:49:29.884459 kubelet[2619]: I0709 23:49:29.884388 2619 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 9 23:49:29.884459 kubelet[2619]: I0709 23:49:29.884427 2619 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5dxc\" (UniqueName: \"kubernetes.io/projected/8b8c0181-4489-4422-98b9-8db8e4a479eb-kube-api-access-w5dxc\") on node \"localhost\" DevicePath \"\"" Jul 9 23:49:29.884459 kubelet[2619]: I0709 23:49:29.884437 2619 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8c0181-4489-4422-98b9-8db8e4a479eb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 9 23:49:30.562264 kubelet[2619]: I0709 23:49:30.562216 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:30.570685 systemd[1]: Removed slice kubepods-besteffort-pod8b8c0181_4489_4422_98b9_8db8e4a479eb.slice - libcontainer container kubepods-besteffort-pod8b8c0181_4489_4422_98b9_8db8e4a479eb.slice. Jul 9 23:49:30.635017 systemd[1]: Created slice kubepods-besteffort-pod364a0a63_f0ab_4443_842e_8f827e82967c.slice - libcontainer container kubepods-besteffort-pod364a0a63_f0ab_4443_842e_8f827e82967c.slice. 
Jul 9 23:49:30.691591 kubelet[2619]: I0709 23:49:30.691539 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclwg\" (UniqueName: \"kubernetes.io/projected/364a0a63-f0ab-4443-842e-8f827e82967c-kube-api-access-sclwg\") pod \"whisker-7d797d74d8-bnxpf\" (UID: \"364a0a63-f0ab-4443-842e-8f827e82967c\") " pod="calico-system/whisker-7d797d74d8-bnxpf" Jul 9 23:49:30.691980 kubelet[2619]: I0709 23:49:30.691607 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364a0a63-f0ab-4443-842e-8f827e82967c-whisker-ca-bundle\") pod \"whisker-7d797d74d8-bnxpf\" (UID: \"364a0a63-f0ab-4443-842e-8f827e82967c\") " pod="calico-system/whisker-7d797d74d8-bnxpf" Jul 9 23:49:30.691980 kubelet[2619]: I0709 23:49:30.691656 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/364a0a63-f0ab-4443-842e-8f827e82967c-whisker-backend-key-pair\") pod \"whisker-7d797d74d8-bnxpf\" (UID: \"364a0a63-f0ab-4443-842e-8f827e82967c\") " pod="calico-system/whisker-7d797d74d8-bnxpf" Jul 9 23:49:30.941565 containerd[1505]: time="2025-07-09T23:49:30.940640164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d797d74d8-bnxpf,Uid:364a0a63-f0ab-4443-842e-8f827e82967c,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:31.216726 systemd-networkd[1423]: cali861da89fb69: Link UP Jul 9 23:49:31.216925 systemd-networkd[1423]: cali861da89fb69: Gained carrier Jul 9 23:49:31.231347 containerd[1505]: 2025-07-09 23:49:30.988 [INFO][3858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:49:31.231347 containerd[1505]: 2025-07-09 23:49:31.057 [INFO][3858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d797d74d8--bnxpf-eth0 
whisker-7d797d74d8- calico-system 364a0a63-f0ab-4443-842e-8f827e82967c 896 0 2025-07-09 23:49:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d797d74d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d797d74d8-bnxpf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali861da89fb69 [] [] }} ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-" Jul 9 23:49:31.231347 containerd[1505]: 2025-07-09 23:49:31.057 [INFO][3858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.231347 containerd[1505]: 2025-07-09 23:49:31.164 [INFO][3876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" HandleID="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Workload="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.164 [INFO][3876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" HandleID="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Workload="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033b210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d797d74d8-bnxpf", "timestamp":"2025-07-09 23:49:31.164652007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.164 [INFO][3876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.164 [INFO][3876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.165 [INFO][3876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.176 [INFO][3876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" host="localhost" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.183 [INFO][3876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.187 [INFO][3876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.189 [INFO][3876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.191 [INFO][3876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:31.231585 containerd[1505]: 2025-07-09 23:49:31.191 [INFO][3876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" host="localhost" Jul 9 23:49:31.231832 containerd[1505]: 2025-07-09 23:49:31.193 [INFO][3876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db Jul 9 23:49:31.231832 containerd[1505]: 
2025-07-09 23:49:31.197 [INFO][3876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" host="localhost" Jul 9 23:49:31.231832 containerd[1505]: 2025-07-09 23:49:31.202 [INFO][3876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" host="localhost" Jul 9 23:49:31.231832 containerd[1505]: 2025-07-09 23:49:31.202 [INFO][3876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" host="localhost" Jul 9 23:49:31.231832 containerd[1505]: 2025-07-09 23:49:31.202 [INFO][3876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:49:31.231832 containerd[1505]: 2025-07-09 23:49:31.202 [INFO][3876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" HandleID="k8s-pod-network.112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Workload="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.231965 containerd[1505]: 2025-07-09 23:49:31.204 [INFO][3858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d797d74d8--bnxpf-eth0", GenerateName:"whisker-7d797d74d8-", Namespace:"calico-system", SelfLink:"", UID:"364a0a63-f0ab-4443-842e-8f827e82967c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 
49, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d797d74d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d797d74d8-bnxpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861da89fb69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:31.231965 containerd[1505]: 2025-07-09 23:49:31.205 [INFO][3858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.232178 containerd[1505]: 2025-07-09 23:49:31.205 [INFO][3858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali861da89fb69 ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.232178 containerd[1505]: 2025-07-09 23:49:31.218 [INFO][3858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" 
WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.232245 containerd[1505]: 2025-07-09 23:49:31.219 [INFO][3858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d797d74d8--bnxpf-eth0", GenerateName:"whisker-7d797d74d8-", Namespace:"calico-system", SelfLink:"", UID:"364a0a63-f0ab-4443-842e-8f827e82967c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d797d74d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db", Pod:"whisker-7d797d74d8-bnxpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali861da89fb69", MAC:"42:3c:dc:2c:ee:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:31.232323 containerd[1505]: 2025-07-09 23:49:31.227 [INFO][3858] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" Namespace="calico-system" Pod="whisker-7d797d74d8-bnxpf" WorkloadEndpoint="localhost-k8s-whisker--7d797d74d8--bnxpf-eth0" Jul 9 23:49:31.276515 containerd[1505]: time="2025-07-09T23:49:31.276468421Z" level=info msg="connecting to shim 112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db" address="unix:///run/containerd/s/0c98e142ca3618146a1425b12b18c6edde6d8876d8c6d91cac5bc44fc08f2e57" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:31.304293 systemd[1]: Started cri-containerd-112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db.scope - libcontainer container 112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db. Jul 9 23:49:31.315839 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:31.343459 containerd[1505]: time="2025-07-09T23:49:31.343417548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d797d74d8-bnxpf,Uid:364a0a63-f0ab-4443-842e-8f827e82967c,Namespace:calico-system,Attempt:0,} returns sandbox id \"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db\"" Jul 9 23:49:31.345307 containerd[1505]: time="2025-07-09T23:49:31.345276904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 23:49:31.415962 kubelet[2619]: I0709 23:49:31.415915 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8c0181-4489-4422-98b9-8db8e4a479eb" path="/var/lib/kubelet/pods/8b8c0181-4489-4422-98b9-8db8e4a479eb/volumes" Jul 9 23:49:32.240260 containerd[1505]: time="2025-07-09T23:49:32.240200620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:32.242486 containerd[1505]: time="2025-07-09T23:49:32.242265609Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 9 23:49:32.243677 containerd[1505]: time="2025-07-09T23:49:32.243643229Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:32.246494 containerd[1505]: time="2025-07-09T23:49:32.245817733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:32.246579 containerd[1505]: time="2025-07-09T23:49:32.246505463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 901.192321ms" Jul 9 23:49:32.246579 containerd[1505]: time="2025-07-09T23:49:32.246546861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 9 23:49:32.251377 containerd[1505]: time="2025-07-09T23:49:32.251336531Z" level=info msg="CreateContainer within sandbox \"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 23:49:32.258790 containerd[1505]: time="2025-07-09T23:49:32.258736686Z" level=info msg="Container ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:32.266921 containerd[1505]: time="2025-07-09T23:49:32.266873089Z" level=info msg="CreateContainer within sandbox \"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089\"" Jul 9 23:49:32.267837 containerd[1505]: time="2025-07-09T23:49:32.267806928Z" level=info msg="StartContainer for \"ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089\"" Jul 9 23:49:32.268960 containerd[1505]: time="2025-07-09T23:49:32.268931239Z" level=info msg="connecting to shim ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089" address="unix:///run/containerd/s/0c98e142ca3618146a1425b12b18c6edde6d8876d8c6d91cac5bc44fc08f2e57" protocol=ttrpc version=3 Jul 9 23:49:32.289329 systemd[1]: Started cri-containerd-ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089.scope - libcontainer container ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089. Jul 9 23:49:32.336103 containerd[1505]: time="2025-07-09T23:49:32.335975057Z" level=info msg="StartContainer for \"ed14bc392528a4c1187e3c9b508e1ee2f89cfe2288e64f87ed5a33f268109089\" returns successfully" Jul 9 23:49:32.337753 containerd[1505]: time="2025-07-09T23:49:32.337683342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 23:49:32.941278 systemd-networkd[1423]: cali861da89fb69: Gained IPv6LL Jul 9 23:49:33.549567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128235838.mount: Deactivated successfully. 
Jul 9 23:49:33.600238 containerd[1505]: time="2025-07-09T23:49:33.600188476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:33.601055 containerd[1505]: time="2025-07-09T23:49:33.601032840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 9 23:49:33.602104 containerd[1505]: time="2025-07-09T23:49:33.602069556Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:33.604739 containerd[1505]: time="2025-07-09T23:49:33.604659405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:33.605845 containerd[1505]: time="2025-07-09T23:49:33.605804477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.267864746s" Jul 9 23:49:33.605845 containerd[1505]: time="2025-07-09T23:49:33.605835795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 9 23:49:33.611370 containerd[1505]: time="2025-07-09T23:49:33.611311283Z" level=info msg="CreateContainer within sandbox \"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 23:49:33.636351 
containerd[1505]: time="2025-07-09T23:49:33.636304900Z" level=info msg="Container 0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:33.647020 containerd[1505]: time="2025-07-09T23:49:33.646969007Z" level=info msg="CreateContainer within sandbox \"112eb7dabeade7f7c7ba119f877ca81942d35c63746d6818b4301d184e1782db\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840\"" Jul 9 23:49:33.647773 containerd[1505]: time="2025-07-09T23:49:33.647750493Z" level=info msg="StartContainer for \"0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840\"" Jul 9 23:49:33.649073 containerd[1505]: time="2025-07-09T23:49:33.649047398Z" level=info msg="connecting to shim 0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840" address="unix:///run/containerd/s/0c98e142ca3618146a1425b12b18c6edde6d8876d8c6d91cac5bc44fc08f2e57" protocol=ttrpc version=3 Jul 9 23:49:33.682321 systemd[1]: Started cri-containerd-0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840.scope - libcontainer container 0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840. 
Jul 9 23:49:33.720865 containerd[1505]: time="2025-07-09T23:49:33.719556201Z" level=info msg="StartContainer for \"0d8ac7f20c4cf762533578f743a5ad6860e1fc2edf806fbe3b7fbe4c2b4cd840\" returns successfully" Jul 9 23:49:34.611355 kubelet[2619]: I0709 23:49:34.611252 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d797d74d8-bnxpf" podStartSLOduration=2.349794368 podStartE2EDuration="4.611233022s" podCreationTimestamp="2025-07-09 23:49:30 +0000 UTC" firstStartedPulling="2025-07-09 23:49:31.345025515 +0000 UTC m=+30.040664356" lastFinishedPulling="2025-07-09 23:49:33.606464169 +0000 UTC m=+32.302103010" observedRunningTime="2025-07-09 23:49:34.610892596 +0000 UTC m=+33.306531557" watchObservedRunningTime="2025-07-09 23:49:34.611233022 +0000 UTC m=+33.306871863" Jul 9 23:49:36.412756 containerd[1505]: time="2025-07-09T23:49:36.412707155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd7ff9649-lmhwq,Uid:381a0306-41d5-4f71-bfd4-ee80e0272fef,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:36.413617 containerd[1505]: time="2025-07-09T23:49:36.413566202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-crqbt,Uid:20a13ece-652a-4dc0-900e-33e27bb0e9cb,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:36.569319 systemd-networkd[1423]: calibb0e455a022: Link UP Jul 9 23:49:36.570330 systemd-networkd[1423]: calibb0e455a022: Gained carrier Jul 9 23:49:36.584800 containerd[1505]: 2025-07-09 23:49:36.460 [INFO][4147] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:49:36.584800 containerd[1505]: 2025-07-09 23:49:36.486 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0 calico-kube-controllers-5dd7ff9649- calico-system 381a0306-41d5-4f71-bfd4-ee80e0272fef 827 0 2025-07-09 23:49:17 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dd7ff9649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5dd7ff9649-lmhwq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb0e455a022 [] [] }} ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-" Jul 9 23:49:36.584800 containerd[1505]: 2025-07-09 23:49:36.486 [INFO][4147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.584800 containerd[1505]: 2025-07-09 23:49:36.519 [INFO][4165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" HandleID="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Workload="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.519 [INFO][4165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" HandleID="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Workload="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-5dd7ff9649-lmhwq", "timestamp":"2025-07-09 23:49:36.519029766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.519 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.519 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.519 [INFO][4165] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.530 [INFO][4165] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" host="localhost" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.538 [INFO][4165] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.543 [INFO][4165] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.545 [INFO][4165] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.548 [INFO][4165] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:36.585029 containerd[1505]: 2025-07-09 23:49:36.548 [INFO][4165] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" host="localhost" Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.550 [INFO][4165] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.554 [INFO][4165] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" host="localhost" Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.560 [INFO][4165] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" host="localhost" Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.561 [INFO][4165] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" host="localhost" Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.561 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:49:36.585275 containerd[1505]: 2025-07-09 23:49:36.561 [INFO][4165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" HandleID="k8s-pod-network.7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Workload="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.585711 containerd[1505]: 2025-07-09 23:49:36.567 [INFO][4147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0", GenerateName:"calico-kube-controllers-5dd7ff9649-", Namespace:"calico-system", SelfLink:"", UID:"381a0306-41d5-4f71-bfd4-ee80e0272fef", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dd7ff9649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5dd7ff9649-lmhwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb0e455a022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:36.585783 containerd[1505]: 2025-07-09 23:49:36.567 [INFO][4147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.585783 containerd[1505]: 2025-07-09 23:49:36.567 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb0e455a022 ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.585783 containerd[1505]: 2025-07-09 23:49:36.571 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.585852 containerd[1505]: 2025-07-09 23:49:36.571 [INFO][4147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0", GenerateName:"calico-kube-controllers-5dd7ff9649-", Namespace:"calico-system", SelfLink:"", UID:"381a0306-41d5-4f71-bfd4-ee80e0272fef", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dd7ff9649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd", Pod:"calico-kube-controllers-5dd7ff9649-lmhwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb0e455a022", MAC:"9e:56:8c:b7:fb:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:36.585900 containerd[1505]: 2025-07-09 23:49:36.582 [INFO][4147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" Namespace="calico-system" Pod="calico-kube-controllers-5dd7ff9649-lmhwq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dd7ff9649--lmhwq-eth0" Jul 9 23:49:36.606905 containerd[1505]: time="2025-07-09T23:49:36.606858451Z" level=info msg="connecting to shim 
7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd" address="unix:///run/containerd/s/2cd6119f26bd610e21da0818a9e7c042a6b565553c2b9c9ea984e689b72307c8" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:36.634362 systemd[1]: Started cri-containerd-7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd.scope - libcontainer container 7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd. Jul 9 23:49:36.653507 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:36.672922 systemd-networkd[1423]: calibc858e306bc: Link UP Jul 9 23:49:36.673088 systemd-networkd[1423]: calibc858e306bc: Gained carrier Jul 9 23:49:36.683632 containerd[1505]: time="2025-07-09T23:49:36.681680039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd7ff9649-lmhwq,Uid:381a0306-41d5-4f71-bfd4-ee80e0272fef,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd\"" Jul 9 23:49:36.687888 containerd[1505]: time="2025-07-09T23:49:36.687837201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 23:49:36.690948 containerd[1505]: 2025-07-09 23:49:36.467 [INFO][4137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:49:36.690948 containerd[1505]: 2025-07-09 23:49:36.488 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--crqbt-eth0 goldmane-768f4c5c69- calico-system 20a13ece-652a-4dc0-900e-33e27bb0e9cb 833 0 2025-07-09 23:49:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-crqbt eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] calibc858e306bc [] [] }} ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-" Jul 9 23:49:36.690948 containerd[1505]: 2025-07-09 23:49:36.488 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.690948 containerd[1505]: 2025-07-09 23:49:36.520 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" HandleID="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Workload="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.520 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" HandleID="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Workload="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-crqbt", "timestamp":"2025-07-09 23:49:36.520383313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.520 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.561 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.561 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.630 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" host="localhost" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.636 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.645 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.649 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.651 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:36.691319 containerd[1505]: 2025-07-09 23:49:36.652 [INFO][4167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" host="localhost" Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.654 [INFO][4167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145 Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.658 [INFO][4167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" host="localhost" Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.665 [INFO][4167] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" host="localhost" Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.665 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" host="localhost" Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.665 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:49:36.691530 containerd[1505]: 2025-07-09 23:49:36.665 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" HandleID="k8s-pod-network.4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Workload="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.691653 containerd[1505]: 2025-07-09 23:49:36.669 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--crqbt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"20a13ece-652a-4dc0-900e-33e27bb0e9cb", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-crqbt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc858e306bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:36.691653 containerd[1505]: 2025-07-09 23:49:36.670 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.691722 containerd[1505]: 2025-07-09 23:49:36.670 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc858e306bc ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.691722 containerd[1505]: 2025-07-09 23:49:36.672 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.691762 containerd[1505]: 2025-07-09 23:49:36.674 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--crqbt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"20a13ece-652a-4dc0-900e-33e27bb0e9cb", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145", Pod:"goldmane-768f4c5c69-crqbt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc858e306bc", MAC:"8a:95:68:5c:89:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:36.691809 containerd[1505]: 2025-07-09 23:49:36.687 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" Namespace="calico-system" Pod="goldmane-768f4c5c69-crqbt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--crqbt-eth0" Jul 9 23:49:36.714718 containerd[1505]: 
time="2025-07-09T23:49:36.714605646Z" level=info msg="connecting to shim 4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145" address="unix:///run/containerd/s/205517e3ff52471b84e94f03209f28bd89a3830232efc7ceacc8e171c00b8f9b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:36.735320 systemd[1]: Started cri-containerd-4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145.scope - libcontainer container 4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145. Jul 9 23:49:36.747426 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:36.769588 containerd[1505]: time="2025-07-09T23:49:36.769533163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-crqbt,Uid:20a13ece-652a-4dc0-900e-33e27bb0e9cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145\"" Jul 9 23:49:37.933261 systemd-networkd[1423]: calibc858e306bc: Gained IPv6LL Jul 9 23:49:38.412488 containerd[1505]: time="2025-07-09T23:49:38.412085660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-pcljx,Uid:7873676d-bab6-438f-81fb-991448bf022b,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:49:38.412488 containerd[1505]: time="2025-07-09T23:49:38.412085740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clg8f,Uid:83aae808-6f16-47f6-8edc-3bf2373c863d,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:38.445238 systemd-networkd[1423]: calibb0e455a022: Gained IPv6LL Jul 9 23:49:38.554139 containerd[1505]: time="2025-07-09T23:49:38.554059750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 9 23:49:38.554741 containerd[1505]: time="2025-07-09T23:49:38.554632329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:38.558190 containerd[1505]: time="2025-07-09T23:49:38.558158561Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:38.558916 containerd[1505]: time="2025-07-09T23:49:38.558876935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.870989056s" Jul 9 23:49:38.559002 containerd[1505]: time="2025-07-09T23:49:38.558917693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 9 23:49:38.565645 containerd[1505]: time="2025-07-09T23:49:38.565600651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:38.570452 containerd[1505]: time="2025-07-09T23:49:38.570370398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 23:49:38.588008 containerd[1505]: time="2025-07-09T23:49:38.587592853Z" level=info msg="CreateContainer within sandbox \"7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 23:49:38.695949 containerd[1505]: time="2025-07-09T23:49:38.685898207Z" level=info msg="Container ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:38.849765 
containerd[1505]: time="2025-07-09T23:49:38.849702745Z" level=info msg="CreateContainer within sandbox \"7a520acbd02a131f028173839b924a36e4dd87662759b964ac65bd3003ee64cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\"" Jul 9 23:49:38.850352 containerd[1505]: time="2025-07-09T23:49:38.850325003Z" level=info msg="StartContainer for \"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\"" Jul 9 23:49:38.851530 containerd[1505]: time="2025-07-09T23:49:38.851503600Z" level=info msg="connecting to shim ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344" address="unix:///run/containerd/s/2cd6119f26bd610e21da0818a9e7c042a6b565553c2b9c9ea984e689b72307c8" protocol=ttrpc version=3 Jul 9 23:49:38.870292 systemd[1]: Started cri-containerd-ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344.scope - libcontainer container ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344. 
Jul 9 23:49:38.925760 containerd[1505]: time="2025-07-09T23:49:38.925666510Z" level=info msg="StartContainer for \"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\" returns successfully" Jul 9 23:49:38.932324 systemd-networkd[1423]: cali83b2c63f4c2: Link UP Jul 9 23:49:38.932499 systemd-networkd[1423]: cali83b2c63f4c2: Gained carrier Jul 9 23:49:38.956852 containerd[1505]: 2025-07-09 23:49:38.577 [INFO][4334] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:49:38.956852 containerd[1505]: 2025-07-09 23:49:38.616 [INFO][4334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--clg8f-eth0 coredns-674b8bbfcf- kube-system 83aae808-6f16-47f6-8edc-3bf2373c863d 834 0 2025-07-09 23:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-clg8f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83b2c63f4c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-" Jul 9 23:49:38.956852 containerd[1505]: 2025-07-09 23:49:38.617 [INFO][4334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.956852 containerd[1505]: 2025-07-09 23:49:38.678 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" 
HandleID="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Workload="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.678 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" HandleID="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Workload="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-clg8f", "timestamp":"2025-07-09 23:49:38.678182047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.678 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.678 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.678 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.697 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" host="localhost" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.705 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.755 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.757 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.759 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:38.957079 containerd[1505]: 2025-07-09 23:49:38.759 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" host="localhost" Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.761 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.792 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" host="localhost" Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.920 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" host="localhost" Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.921 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" host="localhost" Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.921 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:49:38.958609 containerd[1505]: 2025-07-09 23:49:38.921 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" HandleID="k8s-pod-network.5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Workload="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.959340 containerd[1505]: 2025-07-09 23:49:38.926 [INFO][4334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--clg8f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83aae808-6f16-47f6-8edc-3bf2373c863d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-clg8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83b2c63f4c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:38.959542 containerd[1505]: 2025-07-09 23:49:38.926 [INFO][4334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.959542 containerd[1505]: 2025-07-09 23:49:38.926 [INFO][4334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83b2c63f4c2 ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.959542 containerd[1505]: 2025-07-09 23:49:38.931 [INFO][4334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.959671 containerd[1505]: 2025-07-09 23:49:38.937 [INFO][4334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--clg8f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83aae808-6f16-47f6-8edc-3bf2373c863d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b", Pod:"coredns-674b8bbfcf-clg8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83b2c63f4c2", MAC:"0e:f1:ba:79:a6:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:38.959671 containerd[1505]: 2025-07-09 23:49:38.949 [INFO][4334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" Namespace="kube-system" Pod="coredns-674b8bbfcf-clg8f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clg8f-eth0" Jul 9 23:49:38.993021 containerd[1505]: time="2025-07-09T23:49:38.992971228Z" level=info msg="connecting to shim 5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b" address="unix:///run/containerd/s/1a10da61b2299b6568ca7a607a3e97c18580099d402aa1136450d5d4709206ac" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:38.997450 systemd-networkd[1423]: cali582b0671e9f: Link UP Jul 9 23:49:38.998714 systemd-networkd[1423]: cali582b0671e9f: Gained carrier Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.576 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.608 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0 calico-apiserver-7b57986d96- calico-apiserver 7873676d-bab6-438f-81fb-991448bf022b 832 0 2025-07-09 23:49:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b57986d96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b57986d96-pcljx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali582b0671e9f [] [] 
}} ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.608 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.680 [INFO][4376] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" HandleID="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Workload="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.681 [INFO][4376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" HandleID="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Workload="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b57986d96-pcljx", "timestamp":"2025-07-09 23:49:38.680878789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.681 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.921 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.921 [INFO][4376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.944 [INFO][4376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.952 [INFO][4376] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.968 [INFO][4376] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.972 [INFO][4376] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.975 [INFO][4376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.975 [INFO][4376] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.977 [INFO][4376] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.981 [INFO][4376] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.989 [INFO][4376] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.989 [INFO][4376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" host="localhost" Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.989 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:49:39.019734 containerd[1505]: 2025-07-09 23:49:38.989 [INFO][4376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" HandleID="k8s-pod-network.058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Workload="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 23:49:38.995 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0", GenerateName:"calico-apiserver-7b57986d96-", Namespace:"calico-apiserver", SelfLink:"", UID:"7873676d-bab6-438f-81fb-991448bf022b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b57986d96", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b57986d96-pcljx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali582b0671e9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 23:49:38.995 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 23:49:38.995 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali582b0671e9f ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 23:49:38.998 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 
23:49:38.999 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0", GenerateName:"calico-apiserver-7b57986d96-", Namespace:"calico-apiserver", SelfLink:"", UID:"7873676d-bab6-438f-81fb-991448bf022b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b57986d96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a", Pod:"calico-apiserver-7b57986d96-pcljx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali582b0671e9f", MAC:"76:48:aa:91:95:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:39.020723 containerd[1505]: 2025-07-09 23:49:39.013 [INFO][4333] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-pcljx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--pcljx-eth0" Jul 9 23:49:39.033353 systemd[1]: Started cri-containerd-5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b.scope - libcontainer container 5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b. Jul 9 23:49:39.052624 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:39.066152 containerd[1505]: time="2025-07-09T23:49:39.065951694Z" level=info msg="connecting to shim 058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a" address="unix:///run/containerd/s/0d48fd983e67ba4a94244e9a0168c04746795d844dd5995481d3e885b91830e2" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:39.089683 containerd[1505]: time="2025-07-09T23:49:39.089624102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clg8f,Uid:83aae808-6f16-47f6-8edc-3bf2373c863d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b\"" Jul 9 23:49:39.096828 containerd[1505]: time="2025-07-09T23:49:39.096758332Z" level=info msg="CreateContainer within sandbox \"5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:49:39.116364 systemd[1]: Started cri-containerd-058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a.scope - libcontainer container 058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a. 
Jul 9 23:49:39.127776 containerd[1505]: time="2025-07-09T23:49:39.127298899Z" level=info msg="Container c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:39.138924 containerd[1505]: time="2025-07-09T23:49:39.138858972Z" level=info msg="CreateContainer within sandbox \"5f7feb95a2a5e45a8ee465222b414840a3b26debd97f2adcc42e42ac27d8467b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4\"" Jul 9 23:49:39.139517 containerd[1505]: time="2025-07-09T23:49:39.139390594Z" level=info msg="StartContainer for \"c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4\"" Jul 9 23:49:39.140466 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:39.140848 containerd[1505]: time="2025-07-09T23:49:39.140822703Z" level=info msg="connecting to shim c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4" address="unix:///run/containerd/s/1a10da61b2299b6568ca7a607a3e97c18580099d402aa1136450d5d4709206ac" protocol=ttrpc version=3 Jul 9 23:49:39.165333 systemd[1]: Started cri-containerd-c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4.scope - libcontainer container c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4. 
Jul 9 23:49:39.184629 containerd[1505]: time="2025-07-09T23:49:39.184590165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-pcljx,Uid:7873676d-bab6-438f-81fb-991448bf022b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a\"" Jul 9 23:49:39.211501 containerd[1505]: time="2025-07-09T23:49:39.210350740Z" level=info msg="StartContainer for \"c865bcecc36e70333e80bc746973cda11be90346944a296ef2817d67eb2c42a4\" returns successfully" Jul 9 23:49:39.638758 kubelet[2619]: I0709 23:49:39.638610 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dd7ff9649-lmhwq" podStartSLOduration=20.759769321 podStartE2EDuration="22.638592371s" podCreationTimestamp="2025-07-09 23:49:17 +0000 UTC" firstStartedPulling="2025-07-09 23:49:36.687248784 +0000 UTC m=+35.382887585" lastFinishedPulling="2025-07-09 23:49:38.566071794 +0000 UTC m=+37.261710635" observedRunningTime="2025-07-09 23:49:39.637549128 +0000 UTC m=+38.333188089" watchObservedRunningTime="2025-07-09 23:49:39.638592371 +0000 UTC m=+38.334231212" Jul 9 23:49:39.651815 kubelet[2619]: I0709 23:49:39.651678 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-clg8f" podStartSLOduration=33.651663672 podStartE2EDuration="33.651663672s" podCreationTimestamp="2025-07-09 23:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:39.650784383 +0000 UTC m=+38.346423384" watchObservedRunningTime="2025-07-09 23:49:39.651663672 +0000 UTC m=+38.347302513" Jul 9 23:49:39.748651 kubelet[2619]: I0709 23:49:39.746772 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:40.230167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253789399.mount: Deactivated 
successfully. Jul 9 23:49:40.238240 systemd-networkd[1423]: cali582b0671e9f: Gained IPv6LL Jul 9 23:49:40.412485 containerd[1505]: time="2025-07-09T23:49:40.412422870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8z7,Uid:a1c76dce-4758-43aa-813c-3a4ee32989f0,Namespace:calico-system,Attempt:0,}" Jul 9 23:49:40.413097 containerd[1505]: time="2025-07-09T23:49:40.412424510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfz5c,Uid:20e4c898-92b0-4a0b-be4c-bd32b26e85d2,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:40.414316 containerd[1505]: time="2025-07-09T23:49:40.414289326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-zmxgv,Uid:b356b2b4-7758-4ace-9446-60f3ea47c743,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:49:40.633205 kubelet[2619]: I0709 23:49:40.632740 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:40.635468 systemd-networkd[1423]: vxlan.calico: Link UP Jul 9 23:49:40.635475 systemd-networkd[1423]: vxlan.calico: Gained carrier Jul 9 23:49:40.649645 systemd-networkd[1423]: cali5b53c702f72: Link UP Jul 9 23:49:40.650085 systemd-networkd[1423]: cali5b53c702f72: Gained carrier Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.485 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0 coredns-674b8bbfcf- kube-system 20e4c898-92b0-4a0b-be4c-bd32b26e85d2 835 0 2025-07-09 23:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qfz5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b53c702f72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.485 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.561 [INFO][4700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" HandleID="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Workload="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.561 [INFO][4700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" HandleID="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Workload="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qfz5c", "timestamp":"2025-07-09 23:49:40.561172326 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.561 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.561 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.561 [INFO][4700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.578 [INFO][4700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.587 [INFO][4700] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.597 [INFO][4700] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.601 [INFO][4700] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.607 [INFO][4700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.607 [INFO][4700] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.609 [INFO][4700] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.616 [INFO][4700] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.627 [INFO][4700] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.628 [INFO][4700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" host="localhost" Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.628 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:49:40.669549 containerd[1505]: 2025-07-09 23:49:40.628 [INFO][4700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" HandleID="k8s-pod-network.d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Workload="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.637 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20e4c898-92b0-4a0b-be4c-bd32b26e85d2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qfz5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b53c702f72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.638 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.639 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b53c702f72 ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.650 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.651 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20e4c898-92b0-4a0b-be4c-bd32b26e85d2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c", Pod:"coredns-674b8bbfcf-qfz5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b53c702f72", MAC:"c6:9f:94:d8:d6:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.670077 containerd[1505]: 2025-07-09 23:49:40.665 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" Namespace="kube-system" Pod="coredns-674b8bbfcf-qfz5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qfz5c-eth0" Jul 9 23:49:40.792530 containerd[1505]: time="2025-07-09T23:49:40.792029467Z" level=info msg="connecting to shim d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c" address="unix:///run/containerd/s/230d0fdbdbd8ff33a954d7af077a52b4163f9c9991efcf5ef7d0351e2cf3f322" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:40.801579 systemd-networkd[1423]: calieb984eefd0c: Link UP Jul 9 23:49:40.804613 systemd-networkd[1423]: calieb984eefd0c: Gained carrier Jul 9 23:49:40.834289 systemd[1]: Started cri-containerd-d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c.scope - libcontainer container d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c. 
Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.480 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0 calico-apiserver-7b57986d96- calico-apiserver b356b2b4-7758-4ace-9446-60f3ea47c743 830 0 2025-07-09 23:49:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b57986d96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b57986d96-zmxgv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb984eefd0c [] [] }} ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.481 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.567 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" HandleID="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Workload="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.567 [INFO][4707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" 
HandleID="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Workload="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b57986d96-zmxgv", "timestamp":"2025-07-09 23:49:40.563509126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.567 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.628 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.628 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.678 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.687 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.694 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.697 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.699 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 
23:49:40.699 [INFO][4707] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.702 [INFO][4707] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.707 [INFO][4707] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4707] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" host="localhost" Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:49:40.838934 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" HandleID="k8s-pod-network.1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Workload="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.785 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0", GenerateName:"calico-apiserver-7b57986d96-", Namespace:"calico-apiserver", SelfLink:"", UID:"b356b2b4-7758-4ace-9446-60f3ea47c743", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b57986d96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b57986d96-zmxgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb984eefd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.786 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.787 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb984eefd0c ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.807 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.814 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0", GenerateName:"calico-apiserver-7b57986d96-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"b356b2b4-7758-4ace-9446-60f3ea47c743", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b57986d96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f", Pod:"calico-apiserver-7b57986d96-zmxgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb984eefd0c", MAC:"8e:02:4f:c3:ca:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.839477 containerd[1505]: 2025-07-09 23:49:40.831 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" Namespace="calico-apiserver" Pod="calico-apiserver-7b57986d96-zmxgv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b57986d96--zmxgv-eth0" Jul 9 23:49:40.877945 systemd-networkd[1423]: cali536494c0e9a: Link UP Jul 9 23:49:40.879457 systemd-networkd[1423]: cali536494c0e9a: Gained carrier Jul 9 23:49:40.881785 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:40.900158 
containerd[1505]: time="2025-07-09T23:49:40.898091816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.505 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gp8z7-eth0 csi-node-driver- calico-system a1c76dce-4758-43aa-813c-3a4ee32989f0 677 0 2025-07-09 23:49:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gp8z7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali536494c0e9a [] [] }} ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.505 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.577 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" HandleID="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Workload="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.578 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" HandleID="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Workload="localhost-k8s-csi--node--driver--gp8z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gp8z7", "timestamp":"2025-07-09 23:49:40.577826319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.578 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.776 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.804 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.817 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.828 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.833 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.839 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.839 [INFO][4716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.842 [INFO][4716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.852 [INFO][4716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.863 [INFO][4716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.863 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" host="localhost" Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.863 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:49:40.900158 containerd[1505]: 2025-07-09 23:49:40.863 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" HandleID="k8s-pod-network.0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Workload="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.867 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gp8z7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c76dce-4758-43aa-813c-3a4ee32989f0", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gp8z7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali536494c0e9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.868 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.868 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali536494c0e9a ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.880 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.882 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gp8z7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1c76dce-4758-43aa-813c-3a4ee32989f0", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 49, 17, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a", Pod:"csi-node-driver-gp8z7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali536494c0e9a", MAC:"46:0f:2f:a5:20:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:49:40.901370 containerd[1505]: 2025-07-09 23:49:40.893 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" Namespace="calico-system" Pod="csi-node-driver-gp8z7" WorkloadEndpoint="localhost-k8s-csi--node--driver--gp8z7-eth0" Jul 9 23:49:40.901370 containerd[1505]: time="2025-07-09T23:49:40.899623764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 9 23:49:40.901370 containerd[1505]: time="2025-07-09T23:49:40.900986037Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:40.904967 containerd[1505]: 
time="2025-07-09T23:49:40.904815987Z" level=info msg="connecting to shim 1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f" address="unix:///run/containerd/s/3bcb08470ff17869e01ba1820bec0019b878c23c045c431770506efcc0c7b132" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:40.913530 containerd[1505]: time="2025-07-09T23:49:40.911526519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:40.914297 containerd[1505]: time="2025-07-09T23:49:40.913863159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.343260049s" Jul 9 23:49:40.914297 containerd[1505]: time="2025-07-09T23:49:40.913907838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 9 23:49:40.918629 containerd[1505]: time="2025-07-09T23:49:40.918482922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 23:49:40.926297 containerd[1505]: time="2025-07-09T23:49:40.924543516Z" level=info msg="CreateContainer within sandbox \"4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 23:49:40.941264 systemd-networkd[1423]: cali83b2c63f4c2: Gained IPv6LL Jul 9 23:49:40.944835 containerd[1505]: time="2025-07-09T23:49:40.944770547Z" level=info msg="connecting to shim 0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a" 
address="unix:///run/containerd/s/071652e49f3f77c9742a51859353399ebcadf1585f164542583d70a412b7e11a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:40.946207 containerd[1505]: time="2025-07-09T23:49:40.945547200Z" level=info msg="Container 7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:40.949183 containerd[1505]: time="2025-07-09T23:49:40.947832723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfz5c,Uid:20e4c898-92b0-4a0b-be4c-bd32b26e85d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c\"" Jul 9 23:49:40.960034 containerd[1505]: time="2025-07-09T23:49:40.958755471Z" level=info msg="CreateContainer within sandbox \"d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:49:40.964842 containerd[1505]: time="2025-07-09T23:49:40.964789305Z" level=info msg="CreateContainer within sandbox \"4cc2ac7eb71ebdd1e53bb6ba0f85744f97a285d29f9fc2ca120a95f99035f145\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\"" Jul 9 23:49:40.970511 containerd[1505]: time="2025-07-09T23:49:40.965619717Z" level=info msg="StartContainer for \"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\"" Jul 9 23:49:40.970511 containerd[1505]: time="2025-07-09T23:49:40.968408622Z" level=info msg="connecting to shim 7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442" address="unix:///run/containerd/s/205517e3ff52471b84e94f03209f28bd89a3830232efc7ceacc8e171c00b8f9b" protocol=ttrpc version=3 Jul 9 23:49:40.978391 systemd[1]: Started cri-containerd-1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f.scope - libcontainer container 1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f. 
Jul 9 23:49:40.981721 containerd[1505]: time="2025-07-09T23:49:40.981335462Z" level=info msg="Container 7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:40.981961 systemd[1]: Started cri-containerd-0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a.scope - libcontainer container 0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a. Jul 9 23:49:40.995518 containerd[1505]: time="2025-07-09T23:49:40.995460501Z" level=info msg="CreateContainer within sandbox \"d550b89ed0412ce767da1244710ed699704b06a54acb6c4584b17e2e0154be3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34\"" Jul 9 23:49:40.996390 containerd[1505]: time="2025-07-09T23:49:40.996076880Z" level=info msg="StartContainer for \"7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34\"" Jul 9 23:49:40.997609 containerd[1505]: time="2025-07-09T23:49:40.997217841Z" level=info msg="connecting to shim 7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34" address="unix:///run/containerd/s/230d0fdbdbd8ff33a954d7af077a52b4163f9c9991efcf5ef7d0351e2cf3f322" protocol=ttrpc version=3 Jul 9 23:49:41.000305 systemd[1]: Started cri-containerd-7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442.scope - libcontainer container 7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442. Jul 9 23:49:41.006131 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:41.023479 systemd[1]: Started cri-containerd-7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34.scope - libcontainer container 7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34. 
Jul 9 23:49:41.028101 containerd[1505]: time="2025-07-09T23:49:41.028054860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8z7,Uid:a1c76dce-4758-43aa-813c-3a4ee32989f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a\"" Jul 9 23:49:41.038895 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:49:41.075090 containerd[1505]: time="2025-07-09T23:49:41.075042871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b57986d96-zmxgv,Uid:b356b2b4-7758-4ace-9446-60f3ea47c743,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f\"" Jul 9 23:49:41.088091 containerd[1505]: time="2025-07-09T23:49:41.087860768Z" level=info msg="StartContainer for \"7f95e684636f8618cc3421768fe131e94287111c4a80a19feb2860cd4aee6e34\" returns successfully" Jul 9 23:49:41.104619 containerd[1505]: time="2025-07-09T23:49:41.104578577Z" level=info msg="StartContainer for \"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\" returns successfully" Jul 9 23:49:41.549666 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Jul 9 23:49:41.618497 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg Jul 9 23:49:41.620011 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:41.624201 systemd-logind[1488]: New session 8 of user core. Jul 9 23:49:41.634317 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 9 23:49:41.661805 kubelet[2619]: I0709 23:49:41.660352 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-crqbt" podStartSLOduration=20.514941761 podStartE2EDuration="24.66027429s" podCreationTimestamp="2025-07-09 23:49:17 +0000 UTC" firstStartedPulling="2025-07-09 23:49:36.770956828 +0000 UTC m=+35.466595629" lastFinishedPulling="2025-07-09 23:49:40.916289357 +0000 UTC m=+39.611928158" observedRunningTime="2025-07-09 23:49:41.658816338 +0000 UTC m=+40.354455219" watchObservedRunningTime="2025-07-09 23:49:41.66027429 +0000 UTC m=+40.355913131" Jul 9 23:49:41.682733 kubelet[2619]: I0709 23:49:41.682606 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qfz5c" podStartSLOduration=35.682589274 podStartE2EDuration="35.682589274s" podCreationTimestamp="2025-07-09 23:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:41.682388921 +0000 UTC m=+40.378027842" watchObservedRunningTime="2025-07-09 23:49:41.682589274 +0000 UTC m=+40.378228075" Jul 9 23:49:41.862449 kubelet[2619]: I0709 23:49:41.862299 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:41.917714 containerd[1505]: time="2025-07-09T23:49:41.917656242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\" id:\"144719a36a600923cc415958a765001a1d1ab608f219c8a2b430c2fd2aed8651\" pid:5080 exit_status:1 exited_at:{seconds:1752104981 nanos:893278566}" Jul 9 23:49:41.962385 sshd[5056]: Connection closed by 10.0.0.1 port 37182 Jul 9 23:49:41.962947 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:41.967820 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. 
Jul 9 23:49:41.968226 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:37182.service: Deactivated successfully. Jul 9 23:49:41.970886 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:49:41.972895 systemd-logind[1488]: Removed session 8. Jul 9 23:49:41.975708 containerd[1505]: time="2025-07-09T23:49:41.975664849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\" id:\"1a4669e35a56868bd06f95740e5a9c06c7445bcf8e1cebcb07ba3b68bb7bd43e\" pid:5107 exited_at:{seconds:1752104981 nanos:975392298}" Jul 9 23:49:42.029274 systemd-networkd[1423]: calieb984eefd0c: Gained IPv6LL Jul 9 23:49:42.029598 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Jul 9 23:49:42.069464 containerd[1505]: time="2025-07-09T23:49:42.069419828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\" id:\"18c410d761da6b1f5f0533f3677b18199f319c62d88ac291c2196d6b5c5f12ef\" pid:5136 exited_at:{seconds:1752104982 nanos:69141796}" Jul 9 23:49:42.094324 systemd-networkd[1423]: cali5b53c702f72: Gained IPv6LL Jul 9 23:49:42.517426 kubelet[2619]: I0709 23:49:42.517374 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:49:42.549686 containerd[1505]: time="2025-07-09T23:49:42.549608886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\" id:\"30c19879a3822cad371b86b2d27074f600272bbe45ca95ca1ea854e31cd9e7ed\" pid:5161 exited_at:{seconds:1752104982 nanos:549224419}" Jul 9 23:49:42.595062 containerd[1505]: time="2025-07-09T23:49:42.594998116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\" id:\"b8927375712176bf93a7bf10ea568630899a9369be62b46282c0191051428332\" pid:5183 exited_at:{seconds:1752104982 nanos:594686566}" Jul 9 23:49:42.734302 
systemd-networkd[1423]: cali536494c0e9a: Gained IPv6LL Jul 9 23:49:42.756620 containerd[1505]: time="2025-07-09T23:49:42.756575194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\" id:\"851d7487fef0effa52057cb175dcb2db8ce4d838eeadf1d882fbe9faf4b89c29\" pid:5205 exit_status:1 exited_at:{seconds:1752104982 nanos:756283603}" Jul 9 23:49:43.412198 containerd[1505]: time="2025-07-09T23:49:43.412149980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:43.413852 containerd[1505]: time="2025-07-09T23:49:43.413818688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 9 23:49:43.415012 containerd[1505]: time="2025-07-09T23:49:43.414960773Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:43.427073 containerd[1505]: time="2025-07-09T23:49:43.427007000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:43.428483 containerd[1505]: time="2025-07-09T23:49:43.428361798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.509732721s" Jul 9 23:49:43.428483 containerd[1505]: time="2025-07-09T23:49:43.428394517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 23:49:43.429406 containerd[1505]: time="2025-07-09T23:49:43.429373127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 23:49:43.433907 containerd[1505]: time="2025-07-09T23:49:43.433846188Z" level=info msg="CreateContainer within sandbox \"058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 23:49:43.443416 containerd[1505]: time="2025-07-09T23:49:43.443362694Z" level=info msg="Container 7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:43.451223 containerd[1505]: time="2025-07-09T23:49:43.451178732Z" level=info msg="CreateContainer within sandbox \"058ab15cce6b292978ea3a42c5e4459011cad217d338caddfa221a185084207a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e\"" Jul 9 23:49:43.451925 containerd[1505]: time="2025-07-09T23:49:43.451747154Z" level=info msg="StartContainer for \"7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e\"" Jul 9 23:49:43.453181 containerd[1505]: time="2025-07-09T23:49:43.453139231Z" level=info msg="connecting to shim 7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e" address="unix:///run/containerd/s/0d48fd983e67ba4a94244e9a0168c04746795d844dd5995481d3e885b91830e2" protocol=ttrpc version=3 Jul 9 23:49:43.478506 systemd[1]: Started cri-containerd-7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e.scope - libcontainer container 7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e. 
Jul 9 23:49:43.517238 containerd[1505]: time="2025-07-09T23:49:43.517200249Z" level=info msg="StartContainer for \"7e82102412df1712ec00b0c882995b3f7fe2d43586e86f7968036a5dae4c8e6e\" returns successfully" Jul 9 23:49:43.683672 kubelet[2619]: I0709 23:49:43.683569 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b57986d96-pcljx" podStartSLOduration=23.440132811 podStartE2EDuration="27.683362306s" podCreationTimestamp="2025-07-09 23:49:16 +0000 UTC" firstStartedPulling="2025-07-09 23:49:39.186035035 +0000 UTC m=+37.881673876" lastFinishedPulling="2025-07-09 23:49:43.42926453 +0000 UTC m=+42.124903371" observedRunningTime="2025-07-09 23:49:43.682982718 +0000 UTC m=+42.378621559" watchObservedRunningTime="2025-07-09 23:49:43.683362306 +0000 UTC m=+42.379001147" Jul 9 23:49:43.753550 containerd[1505]: time="2025-07-09T23:49:43.753512695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\" id:\"ae7c71470789a59096cb648b2540b45c2eb854656084375c2dea016723aca859\" pid:5276 exit_status:1 exited_at:{seconds:1752104983 nanos:752955432}" Jul 9 23:49:44.533799 containerd[1505]: time="2025-07-09T23:49:44.533166399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:44.534685 containerd[1505]: time="2025-07-09T23:49:44.534656474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 9 23:49:44.535601 containerd[1505]: time="2025-07-09T23:49:44.535567527Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:44.538485 containerd[1505]: time="2025-07-09T23:49:44.538339444Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:44.539382 containerd[1505]: time="2025-07-09T23:49:44.539161779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.109749934s" Jul 9 23:49:44.539382 containerd[1505]: time="2025-07-09T23:49:44.539197298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 9 23:49:44.543836 containerd[1505]: time="2025-07-09T23:49:44.543794680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 23:49:44.685027 containerd[1505]: time="2025-07-09T23:49:44.684952368Z" level=info msg="CreateContainer within sandbox \"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 23:49:44.704486 containerd[1505]: time="2025-07-09T23:49:44.704434744Z" level=info msg="Container ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:44.719055 containerd[1505]: time="2025-07-09T23:49:44.718990188Z" level=info msg="CreateContainer within sandbox \"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92\"" Jul 9 23:49:44.719834 containerd[1505]: time="2025-07-09T23:49:44.719792203Z" level=info msg="StartContainer for \"ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92\"" Jul 9 
23:49:44.723942 containerd[1505]: time="2025-07-09T23:49:44.723030986Z" level=info msg="connecting to shim ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92" address="unix:///run/containerd/s/071652e49f3f77c9742a51859353399ebcadf1585f164542583d70a412b7e11a" protocol=ttrpc version=3 Jul 9 23:49:44.756384 systemd[1]: Started cri-containerd-ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92.scope - libcontainer container ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92. Jul 9 23:49:44.810961 containerd[1505]: time="2025-07-09T23:49:44.810792995Z" level=info msg="StartContainer for \"ebaaba0f90e2cf9d188b48f9e2e4f546ff13f235b15c5eabd014471218445c92\" returns successfully" Jul 9 23:49:44.921330 containerd[1505]: time="2025-07-09T23:49:44.921272403Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:49:44.921545 containerd[1505]: time="2025-07-09T23:49:44.921500676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 9 23:49:44.923576 containerd[1505]: time="2025-07-09T23:49:44.923542094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 379.705216ms" Jul 9 23:49:44.923636 containerd[1505]: time="2025-07-09T23:49:44.923584013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 23:49:44.925380 containerd[1505]: time="2025-07-09T23:49:44.925328481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" 
Jul 9 23:49:44.929373 containerd[1505]: time="2025-07-09T23:49:44.929315401Z" level=info msg="CreateContainer within sandbox \"1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 9 23:49:44.961405 containerd[1505]: time="2025-07-09T23:49:44.961353801Z" level=info msg="Container 55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:44.965485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964288099.mount: Deactivated successfully.
Jul 9 23:49:44.970638 containerd[1505]: time="2025-07-09T23:49:44.970584844Z" level=info msg="CreateContainer within sandbox \"1930ae001622697dfa8192e1377ef3ac3b873952aa210d40edde3cc4e2db5c3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4\""
Jul 9 23:49:44.972075 containerd[1505]: time="2025-07-09T23:49:44.971124388Z" level=info msg="StartContainer for \"55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4\""
Jul 9 23:49:44.973533 containerd[1505]: time="2025-07-09T23:49:44.973490197Z" level=info msg="connecting to shim 55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4" address="unix:///run/containerd/s/3bcb08470ff17869e01ba1820bec0019b878c23c045c431770506efcc0c7b132" protocol=ttrpc version=3
Jul 9 23:49:45.007346 systemd[1]: Started cri-containerd-55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4.scope - libcontainer container 55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4.
Jul 9 23:49:45.073511 containerd[1505]: time="2025-07-09T23:49:45.073233394Z" level=info msg="StartContainer for \"55d2372619dc847d4fef2e79574144a16c412e98abc8848edb015b745a945fa4\" returns successfully"
Jul 9 23:49:45.710709 kubelet[2619]: I0709 23:49:45.710609 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b57986d96-zmxgv" podStartSLOduration=25.864486576 podStartE2EDuration="29.710590161s" podCreationTimestamp="2025-07-09 23:49:16 +0000 UTC" firstStartedPulling="2025-07-09 23:49:41.078341082 +0000 UTC m=+39.773979883" lastFinishedPulling="2025-07-09 23:49:44.924444627 +0000 UTC m=+43.620083468" observedRunningTime="2025-07-09 23:49:45.709876222 +0000 UTC m=+44.405515063" watchObservedRunningTime="2025-07-09 23:49:45.710590161 +0000 UTC m=+44.406229002"
Jul 9 23:49:46.022288 containerd[1505]: time="2025-07-09T23:49:46.021897298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:46.022601 containerd[1505]: time="2025-07-09T23:49:46.022523920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 9 23:49:46.023454 containerd[1505]: time="2025-07-09T23:49:46.023424895Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:46.025950 containerd[1505]: time="2025-07-09T23:49:46.025726070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:46.026408 containerd[1505]: time="2025-07-09T23:49:46.026378372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.100996972s"
Jul 9 23:49:46.026495 containerd[1505]: time="2025-07-09T23:49:46.026481409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 9 23:49:46.031064 containerd[1505]: time="2025-07-09T23:49:46.031014682Z" level=info msg="CreateContainer within sandbox \"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 9 23:49:46.041460 containerd[1505]: time="2025-07-09T23:49:46.039748756Z" level=info msg="Container 955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:46.050541 containerd[1505]: time="2025-07-09T23:49:46.050492453Z" level=info msg="CreateContainer within sandbox \"0e3125cee4ef72741a16a4de873c6d855b37531fbb6815c53a54b3780373382a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b\""
Jul 9 23:49:46.051317 containerd[1505]: time="2025-07-09T23:49:46.051290991Z" level=info msg="StartContainer for \"955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b\""
Jul 9 23:49:46.053414 containerd[1505]: time="2025-07-09T23:49:46.053364493Z" level=info msg="connecting to shim 955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b" address="unix:///run/containerd/s/071652e49f3f77c9742a51859353399ebcadf1585f164542583d70a412b7e11a" protocol=ttrpc version=3
Jul 9 23:49:46.084353 systemd[1]: Started cri-containerd-955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b.scope - libcontainer container 955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b.
Jul 9 23:49:46.140052 containerd[1505]: time="2025-07-09T23:49:46.140008335Z" level=info msg="StartContainer for \"955e9bfe530c78e7eb27347d99c573680cb52e76c9afe8fd6ff967bd3d5e367b\" returns successfully"
Jul 9 23:49:46.490203 kubelet[2619]: I0709 23:49:46.490110 2619 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 9 23:49:46.500168 kubelet[2619]: I0709 23:49:46.500107 2619 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 9 23:49:46.740350 kubelet[2619]: I0709 23:49:46.740092 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gp8z7" podStartSLOduration=24.743490795 podStartE2EDuration="29.74007265s" podCreationTimestamp="2025-07-09 23:49:17 +0000 UTC" firstStartedPulling="2025-07-09 23:49:41.030878247 +0000 UTC m=+39.726517048" lastFinishedPulling="2025-07-09 23:49:46.027460062 +0000 UTC m=+44.723098903" observedRunningTime="2025-07-09 23:49:46.739384949 +0000 UTC m=+45.435023790" watchObservedRunningTime="2025-07-09 23:49:46.74007265 +0000 UTC m=+45.435711491"
Jul 9 23:49:46.987385 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:49924.service - OpenSSH per-connection server daemon (10.0.0.1:49924).
Jul 9 23:49:47.073701 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 49924 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:49:47.075719 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:47.079883 systemd-logind[1488]: New session 9 of user core.
Jul 9 23:49:47.090349 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 23:49:47.384495 sshd[5416]: Connection closed by 10.0.0.1 port 49924
Jul 9 23:49:47.384824 sshd-session[5414]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:47.390192 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:49924.service: Deactivated successfully.
Jul 9 23:49:47.392238 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 23:49:47.392993 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
Jul 9 23:49:47.394246 systemd-logind[1488]: Removed session 9.
Jul 9 23:49:52.402812 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:49928.service - OpenSSH per-connection server daemon (10.0.0.1:49928).
Jul 9 23:49:52.456091 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 49928 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:49:52.457522 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:52.461863 systemd-logind[1488]: New session 10 of user core.
Jul 9 23:49:52.470923 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 23:49:52.619364 sshd[5443]: Connection closed by 10.0.0.1 port 49928
Jul 9 23:49:52.621312 sshd-session[5441]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:52.634945 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:49928.service: Deactivated successfully.
Jul 9 23:49:52.637159 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 23:49:52.639982 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit.
Jul 9 23:49:52.642499 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:49690.service - OpenSSH per-connection server daemon (10.0.0.1:49690).
Jul 9 23:49:52.644305 systemd-logind[1488]: Removed session 10.
Jul 9 23:49:52.701703 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 49690 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:49:52.702655 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:52.707772 systemd-logind[1488]: New session 11 of user core.
Jul 9 23:49:52.718387 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 23:49:52.956744 sshd[5459]: Connection closed by 10.0.0.1 port 49690
Jul 9 23:49:52.959164 sshd-session[5457]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:52.968908 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:49690.service: Deactivated successfully.
Jul 9 23:49:52.971180 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 23:49:52.972458 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
Jul 9 23:49:52.976652 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:49698.service - OpenSSH per-connection server daemon (10.0.0.1:49698).
Jul 9 23:49:52.977310 systemd-logind[1488]: Removed session 11.
Jul 9 23:49:53.026247 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 49698 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:49:53.027646 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:53.032729 systemd-logind[1488]: New session 12 of user core.
Jul 9 23:49:53.038335 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 23:49:53.214824 sshd[5473]: Connection closed by 10.0.0.1 port 49698
Jul 9 23:49:53.215132 sshd-session[5471]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:53.219288 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:49698.service: Deactivated successfully.
Jul 9 23:49:53.221040 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 23:49:53.223193 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit.
Jul 9 23:49:53.225095 systemd-logind[1488]: Removed session 12.
Jul 9 23:49:58.236506 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:49714.service - OpenSSH per-connection server daemon (10.0.0.1:49714).
Jul 9 23:49:58.302752 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:49:58.304324 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:58.313971 systemd-logind[1488]: New session 13 of user core.
Jul 9 23:49:58.325849 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 23:49:58.495956 sshd[5492]: Connection closed by 10.0.0.1 port 49714
Jul 9 23:49:58.496560 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:58.501158 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:49714.service: Deactivated successfully.
Jul 9 23:49:58.503976 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 23:49:58.506095 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit.
Jul 9 23:49:58.508602 systemd-logind[1488]: Removed session 13.
Jul 9 23:50:02.860803 containerd[1505]: time="2025-07-09T23:50:02.860691902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\" id:\"dd7f286abd86dbf7f257beea08119dc9251802defce33dd85088b704be6226c1\" pid:5526 exited_at:{seconds:1752105002 nanos:860136231}"
Jul 9 23:50:03.512029 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:44108.service - OpenSSH per-connection server daemon (10.0.0.1:44108).
Jul 9 23:50:03.575498 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 44108 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:03.577426 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:03.583712 systemd-logind[1488]: New session 14 of user core.
Jul 9 23:50:03.598358 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 23:50:03.740034 sshd[5539]: Connection closed by 10.0.0.1 port 44108
Jul 9 23:50:03.740580 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:03.744605 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:44108.service: Deactivated successfully.
Jul 9 23:50:03.747935 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 23:50:03.748881 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit.
Jul 9 23:50:03.750709 systemd-logind[1488]: Removed session 14.
Jul 9 23:50:08.756959 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:44110.service - OpenSSH per-connection server daemon (10.0.0.1:44110).
Jul 9 23:50:08.807244 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 44110 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:08.808397 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:08.814219 systemd-logind[1488]: New session 15 of user core.
Jul 9 23:50:08.825411 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 23:50:08.960092 sshd[5557]: Connection closed by 10.0.0.1 port 44110
Jul 9 23:50:08.960661 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:08.964683 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:44110.service: Deactivated successfully.
Jul 9 23:50:08.966861 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 23:50:08.967947 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit.
Jul 9 23:50:08.969513 systemd-logind[1488]: Removed session 15.
Jul 9 23:50:12.071250 containerd[1505]: time="2025-07-09T23:50:12.071201171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec2fd7ab715834da04e899a05f2f6bdda20d9486c5b48c298436f508f198c10f\" id:\"cdac7dbe17e470cbcb15b6072c49fdb96da9c23dd52a5a973af5ca4dd1022bb0\" pid:5582 exited_at:{seconds:1752105012 nanos:70898615}"
Jul 9 23:50:12.596261 containerd[1505]: time="2025-07-09T23:50:12.596224580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee05ddb36131f81f3bf08a6fc5e6d335fa92f47a961e7dd737035fd6c894a344\" id:\"f47d4502d76fc7d2b0316af0e73cc4eafcd845636cecbbf6fcddbc3efdc13d2e\" pid:5606 exited_at:{seconds:1752105012 nanos:595946263}"
Jul 9 23:50:13.780171 containerd[1505]: time="2025-07-09T23:50:13.780074368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47c1dc0a64290c3c1dfa0035a23211137503b70a8ae1f82db416492054d442\" id:\"e6db2e3e6887b81316caf3ea3cd26ae65ed9c83e3fd33ee6883a5d8e4b55537f\" pid:5627 exited_at:{seconds:1752105013 nanos:779665133}"
Jul 9 23:50:13.973695 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:55056.service - OpenSSH per-connection server daemon (10.0.0.1:55056).
Jul 9 23:50:14.020297 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:14.021944 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:14.027934 systemd-logind[1488]: New session 16 of user core.
Jul 9 23:50:14.040325 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 23:50:14.192953 sshd[5644]: Connection closed by 10.0.0.1 port 55056
Jul 9 23:50:14.193466 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:14.206450 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:55056.service: Deactivated successfully.
Jul 9 23:50:14.208492 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 23:50:14.209382 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
Jul 9 23:50:14.212231 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:55064.service - OpenSSH per-connection server daemon (10.0.0.1:55064).
Jul 9 23:50:14.212817 systemd-logind[1488]: Removed session 16.
Jul 9 23:50:14.261039 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 55064 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:14.262491 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:14.267991 systemd-logind[1488]: New session 17 of user core.
Jul 9 23:50:14.274341 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 23:50:14.493920 sshd[5660]: Connection closed by 10.0.0.1 port 55064
Jul 9 23:50:14.494559 sshd-session[5658]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:14.508045 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:55064.service: Deactivated successfully.
Jul 9 23:50:14.511778 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 23:50:14.512617 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
Jul 9 23:50:14.515842 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:55074.service - OpenSSH per-connection server daemon (10.0.0.1:55074).
Jul 9 23:50:14.516737 systemd-logind[1488]: Removed session 17.
Jul 9 23:50:14.574643 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 55074 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:14.576503 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:14.581219 systemd-logind[1488]: New session 18 of user core.
Jul 9 23:50:14.593332 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 23:50:15.345193 sshd[5673]: Connection closed by 10.0.0.1 port 55074
Jul 9 23:50:15.346202 sshd-session[5671]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:15.358027 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:55074.service: Deactivated successfully.
Jul 9 23:50:15.364552 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 23:50:15.368680 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
Jul 9 23:50:15.374816 systemd-logind[1488]: Removed session 18.
Jul 9 23:50:15.378373 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:55076.service - OpenSSH per-connection server daemon (10.0.0.1:55076).
Jul 9 23:50:15.448111 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 55076 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:15.449690 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:15.454177 systemd-logind[1488]: New session 19 of user core.
Jul 9 23:50:15.465361 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 23:50:15.778637 sshd[5697]: Connection closed by 10.0.0.1 port 55076
Jul 9 23:50:15.781893 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:15.790285 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:55076.service: Deactivated successfully.
Jul 9 23:50:15.792645 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 23:50:15.796252 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
Jul 9 23:50:15.800400 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:55078.service - OpenSSH per-connection server daemon (10.0.0.1:55078).
Jul 9 23:50:15.801515 systemd-logind[1488]: Removed session 19.
Jul 9 23:50:15.862180 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 55078 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:15.862694 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:15.868091 systemd-logind[1488]: New session 20 of user core.
Jul 9 23:50:15.877322 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 23:50:16.021168 sshd[5710]: Connection closed by 10.0.0.1 port 55078
Jul 9 23:50:16.021653 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:16.025805 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:55078.service: Deactivated successfully.
Jul 9 23:50:16.029955 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 23:50:16.031204 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
Jul 9 23:50:16.033531 systemd-logind[1488]: Removed session 20.
Jul 9 23:50:21.036435 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:55082.service - OpenSSH per-connection server daemon (10.0.0.1:55082).
Jul 9 23:50:21.114636 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 55082 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:21.114944 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:21.119732 systemd-logind[1488]: New session 21 of user core.
Jul 9 23:50:21.128280 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 23:50:21.300888 sshd[5738]: Connection closed by 10.0.0.1 port 55082
Jul 9 23:50:21.301185 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:21.305396 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:55082.service: Deactivated successfully.
Jul 9 23:50:21.307271 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 23:50:21.309841 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit.
Jul 9 23:50:21.310972 systemd-logind[1488]: Removed session 21.
Jul 9 23:50:26.315746 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:45812.service - OpenSSH per-connection server daemon (10.0.0.1:45812).
Jul 9 23:50:26.387794 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 45812 ssh2: RSA SHA256:gc9XfzCdXUit2xMYwbO9Atxxy3DG1hyaUiU6i3BG1Rg
Jul 9 23:50:26.389413 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:26.394261 systemd-logind[1488]: New session 22 of user core.
Jul 9 23:50:26.404301 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 23:50:26.559004 sshd[5754]: Connection closed by 10.0.0.1 port 45812
Jul 9 23:50:26.559557 sshd-session[5752]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:26.563202 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:45812.service: Deactivated successfully.
Jul 9 23:50:26.564845 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 23:50:26.565500 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit.
Jul 9 23:50:26.566636 systemd-logind[1488]: Removed session 22.