Sep 11 00:30:15.799636 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:30:15.799656 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:30:15.799668 kernel: BIOS-provided physical RAM map:
Sep 11 00:30:15.799674 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 11 00:30:15.799681 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 11 00:30:15.799687 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 11 00:30:15.799695 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 11 00:30:15.799701 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 11 00:30:15.799708 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 11 00:30:15.799714 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 11 00:30:15.799721 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 11 00:30:15.799730 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 11 00:30:15.799736 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 11 00:30:15.799743 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 11 00:30:15.799751 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 11 00:30:15.799758 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 11 00:30:15.799767 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:30:15.799774 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:30:15.799781 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:30:15.799788 kernel: NX (Execute Disable) protection: active
Sep 11 00:30:15.799795 kernel: APIC: Static calls initialized
Sep 11 00:30:15.799802 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 11 00:30:15.799809 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 11 00:30:15.799816 kernel: extended physical RAM map:
Sep 11 00:30:15.799823 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 11 00:30:15.799830 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 11 00:30:15.799837 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 11 00:30:15.799846 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 11 00:30:15.799853 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 11 00:30:15.799860 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 11 00:30:15.799867 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 11 00:30:15.799874 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 11 00:30:15.799881 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 11 00:30:15.799888 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 11 00:30:15.799895 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 11 00:30:15.799902 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 11 00:30:15.799909 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 11 00:30:15.799915 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 11 00:30:15.799925 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 11 00:30:15.799932 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 11 00:30:15.799942 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 11 00:30:15.799949 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:30:15.799956 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:30:15.799964 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:30:15.799973 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:30:15.799980 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 11 00:30:15.799987 kernel: random: crng init done
Sep 11 00:30:15.799995 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 11 00:30:15.800002 kernel: secureboot: Secure boot enabled
Sep 11 00:30:15.800009 kernel: SMBIOS 2.8 present.
Sep 11 00:30:15.800016 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 11 00:30:15.800023 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:30:15.800030 kernel: Hypervisor detected: KVM
Sep 11 00:30:15.800037 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:30:15.800047 kernel: kvm-clock: using sched offset of 5361622658 cycles
Sep 11 00:30:15.800054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:30:15.800062 kernel: tsc: Detected 2794.750 MHz processor
Sep 11 00:30:15.800070 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:30:15.800077 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:30:15.800084 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 11 00:30:15.800092 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:30:15.800099 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:30:15.800107 kernel: Using GB pages for direct mapping
Sep 11 00:30:15.800114 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:30:15.800124 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 11 00:30:15.800131 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:30:15.800138 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800146 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800153 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 11 00:30:15.800160 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800175 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800183 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800192 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:30:15.800200 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 11 00:30:15.800208 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 11 00:30:15.800215 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 11 00:30:15.800222 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 11 00:30:15.800230 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 11 00:30:15.800237 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 11 00:30:15.800244 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 11 00:30:15.800252 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 11 00:30:15.800259 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 11 00:30:15.800268 kernel: No NUMA configuration found
Sep 11 00:30:15.800276 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 11 00:30:15.800284 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 11 00:30:15.800291 kernel: Zone ranges:
Sep 11 00:30:15.800298 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:30:15.800306 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 11 00:30:15.800313 kernel: Normal empty
Sep 11 00:30:15.800320 kernel: Device empty
Sep 11 00:30:15.800328 kernel: Movable zone start for each node
Sep 11 00:30:15.800337 kernel: Early memory node ranges
Sep 11 00:30:15.800344 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 11 00:30:15.800352 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 11 00:30:15.800359 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 11 00:30:15.800366 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 11 00:30:15.800373 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 11 00:30:15.800380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 11 00:30:15.800388 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:30:15.800395 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 11 00:30:15.800405 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 11 00:30:15.800412 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 11 00:30:15.800419 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 11 00:30:15.800427 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 11 00:30:15.800434 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:30:15.800441 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:30:15.800449 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:30:15.800456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:30:15.800463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:30:15.800472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:30:15.800483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:30:15.800491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:30:15.800499 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:30:15.800508 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:30:15.800516 kernel: TSC deadline timer available
Sep 11 00:30:15.800525 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:30:15.800544 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:30:15.800552 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:30:15.800581 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:30:15.800588 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:30:15.800596 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:30:15.800604 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:30:15.800614 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:30:15.800621 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:30:15.800629 kernel: kvm-guest: setup PV sched yield
Sep 11 00:30:15.800637 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 11 00:30:15.800644 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:30:15.800655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:30:15.800663 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:30:15.800670 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:30:15.800678 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:30:15.800686 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:30:15.800693 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:30:15.800701 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:30:15.800710 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:30:15.800721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:30:15.800728 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:30:15.800736 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:30:15.800744 kernel: Fallback order for Node 0: 0
Sep 11 00:30:15.800752 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 11 00:30:15.800759 kernel: Policy zone: DMA32
Sep 11 00:30:15.800767 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:30:15.800775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:30:15.800782 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:30:15.800792 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:30:15.800800 kernel: Dynamic Preempt: voluntary
Sep 11 00:30:15.800807 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:30:15.800816 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:30:15.800824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:30:15.800832 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:30:15.800840 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:30:15.800848 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:30:15.800855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:30:15.800866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:30:15.800873 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:30:15.800881 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:30:15.800889 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:30:15.800897 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:30:15.800905 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:30:15.800912 kernel: Console: colour dummy device 80x25
Sep 11 00:30:15.800920 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:30:15.800928 kernel: ACPI: Core revision 20240827
Sep 11 00:30:15.800938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:30:15.800946 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:30:15.800954 kernel: x2apic enabled
Sep 11 00:30:15.800962 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:30:15.800969 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:30:15.800977 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:30:15.800985 kernel: kvm-guest: setup PV IPIs
Sep 11 00:30:15.800992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:30:15.801000 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 11 00:30:15.801011 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 11 00:30:15.801018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:30:15.801026 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:30:15.801034 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:30:15.801042 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:30:15.801049 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:30:15.801057 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:30:15.801065 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:30:15.801073 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:30:15.801083 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:30:15.801090 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:30:15.801098 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:30:15.801106 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:30:15.801114 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:30:15.801122 kernel: active return thunk: srso_return_thunk
Sep 11 00:30:15.801130 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:30:15.801138 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:30:15.801148 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:30:15.801156 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:30:15.801163 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:30:15.801178 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:30:15.801186 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:30:15.801194 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:30:15.801202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:30:15.801210 kernel: landlock: Up and running.
Sep 11 00:30:15.801217 kernel: SELinux: Initializing.
Sep 11 00:30:15.801228 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:30:15.801235 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:30:15.801243 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:30:15.801251 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:30:15.801259 kernel: ... version: 0
Sep 11 00:30:15.801266 kernel: ... bit width: 48
Sep 11 00:30:15.801274 kernel: ... generic registers: 6
Sep 11 00:30:15.801282 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:30:15.801289 kernel: ... max period: 00007fffffffffff
Sep 11 00:30:15.801300 kernel: ... fixed-purpose events: 0
Sep 11 00:30:15.801307 kernel: ... event mask: 000000000000003f
Sep 11 00:30:15.801315 kernel: signal: max sigframe size: 1776
Sep 11 00:30:15.801322 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:30:15.801330 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:30:15.801338 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:30:15.801346 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:30:15.801353 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:30:15.801361 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:30:15.801371 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:30:15.801379 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 11 00:30:15.801387 kernel: Memory: 2411268K/2552216K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 135016K reserved, 0K cma-reserved)
Sep 11 00:30:15.801395 kernel: devtmpfs: initialized
Sep 11 00:30:15.801403 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:30:15.801411 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 11 00:30:15.801419 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 11 00:30:15.801427 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:30:15.801434 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:30:15.801444 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:30:15.801452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:30:15.801460 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:30:15.801468 kernel: audit: type=2000 audit(1757550613.922:1): state=initialized audit_enabled=0 res=1
Sep 11 00:30:15.801475 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:30:15.801483 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:30:15.801491 kernel: cpuidle: using governor menu
Sep 11 00:30:15.801498 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:30:15.801506 kernel: dca service started, version 1.12.1
Sep 11 00:30:15.801516 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 11 00:30:15.801524 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:30:15.801550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:30:15.801558 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:30:15.801566 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:30:15.801574 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:30:15.801581 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:30:15.801589 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:30:15.801597 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:30:15.801607 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:30:15.801614 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:30:15.801622 kernel: ACPI: Interpreter enabled
Sep 11 00:30:15.801630 kernel: ACPI: PM: (supports S0 S5)
Sep 11 00:30:15.801637 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:30:15.801645 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:30:15.801653 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:30:15.801660 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:30:15.801668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:30:15.801844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:30:15.801966 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:30:15.802084 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:30:15.802095 kernel: PCI host bridge to bus 0000:00
Sep 11 00:30:15.802228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:30:15.802336 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:30:15.802451 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:30:15.802572 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 11 00:30:15.802680 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 11 00:30:15.802785 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:30:15.802890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:30:15.803027 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:30:15.803152 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:30:15.803282 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 11 00:30:15.803398 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 11 00:30:15.803518 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 11 00:30:15.803654 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:30:15.803782 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:30:15.803901 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 11 00:30:15.804022 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 11 00:30:15.804139 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 11 00:30:15.804275 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:30:15.804395 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 11 00:30:15.804517 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 11 00:30:15.804653 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 11 00:30:15.804781 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:30:15.804903 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 11 00:30:15.805020 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 11 00:30:15.805138 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 11 00:30:15.805264 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 11 00:30:15.805391 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:30:15.805510 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:30:15.805655 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:30:15.805776 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 11 00:30:15.805892 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 11 00:30:15.806020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:30:15.806137 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 11 00:30:15.806148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:30:15.806156 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:30:15.806172 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:30:15.806183 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:30:15.806191 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:30:15.806199 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:30:15.806206 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:30:15.806214 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:30:15.806222 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:30:15.806229 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:30:15.806237 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:30:15.806244 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:30:15.806254 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:30:15.806262 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:30:15.806269 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:30:15.806277 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:30:15.806284 kernel: iommu: Default domain type: Translated
Sep 11 00:30:15.806292 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:30:15.806300 kernel: efivars: Registered efivars operations
Sep 11 00:30:15.806307 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:30:15.806315 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:30:15.806324 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 11 00:30:15.806332 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 11 00:30:15.806340 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 11 00:30:15.806347 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 11 00:30:15.806355 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 11 00:30:15.806478 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:30:15.806642 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:30:15.806818 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:30:15.806830 kernel: vgaarb: loaded
Sep 11 00:30:15.806841 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:30:15.806849 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:30:15.806857 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:30:15.806876 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:30:15.806888 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:30:15.806903 kernel: pnp: PnP ACPI init
Sep 11 00:30:15.807064 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 11 00:30:15.807077 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:30:15.807089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:30:15.807097 kernel: NET: Registered PF_INET protocol family
Sep 11 00:30:15.807105 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:30:15.807113 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:30:15.807121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:30:15.807129 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:30:15.807137 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:30:15.807144 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:30:15.807152 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:30:15.807162 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:30:15.807180 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:30:15.807189 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:30:15.807311 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 11 00:30:15.807429 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 11 00:30:15.807553 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:30:15.807720 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:30:15.807828 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:30:15.807938 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 11 00:30:15.808053 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 11 00:30:15.808159 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:30:15.808179 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:30:15.808188 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 11 00:30:15.808196 kernel: Initialise system trusted keyrings
Sep 11 00:30:15.808204 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:30:15.808212 kernel: Key type asymmetric registered
Sep 11 00:30:15.808223 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:30:15.808244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:30:15.808256 kernel: io scheduler mq-deadline registered
Sep 11 00:30:15.808273 kernel: io scheduler kyber registered
Sep 11 00:30:15.808286 kernel: io scheduler bfq registered
Sep 11 00:30:15.808297 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:30:15.808309 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:30:15.808317 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:30:15.808325 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:30:15.808333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:30:15.808345 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:30:15.808354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:30:15.808362 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:30:15.808370 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:30:15.808520 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:30:15.808552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:30:15.808668 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:30:15.808777 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:30:15 UTC (1757550615)
Sep 11 00:30:15.808929 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:30:15.808941 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:30:15.808949 kernel: efifb: probing for efifb
Sep 11 00:30:15.808957 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 11 00:30:15.808965 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 11 00:30:15.808973 kernel: efifb: scrolling: redraw
Sep 11 00:30:15.808981 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:30:15.808989 kernel: Console: switching to colour frame buffer device 160x50
Sep 11 00:30:15.809001 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:30:15.809011 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:30:15.809019 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:30:15.809027 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:30:15.809035 kernel: Segment Routing with IPv6
Sep 11 00:30:15.809043 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:30:15.809053 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:30:15.809061 kernel: Key type dns_resolver registered
Sep 11 00:30:15.809069 kernel: IPI shorthand broadcast: enabled
Sep 11 00:30:15.809077 kernel: sched_clock: Marking stable (2747002254, 137651927)->(2904507210, -19853029)
Sep 11 00:30:15.809085 kernel: registered taskstats version 1
Sep 11 00:30:15.809093 kernel: Loading compiled-in X.509 certificates
Sep 11 00:30:15.809101 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:30:15.809110 kernel: Demotion targets for Node 0: null
Sep 11 00:30:15.809118 kernel: Key type .fscrypt registered
Sep 11 00:30:15.809127 kernel: Key type fscrypt-provisioning registered
Sep 11 00:30:15.809135 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:30:15.809143 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:30:15.809151 kernel: ima: No architecture policies found
Sep 11 00:30:15.809159 kernel: clk: Disabling unused clocks
Sep 11 00:30:15.809175 kernel: Warning: unable to open an initial console.
Sep 11 00:30:15.809184 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:30:15.809192 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:30:15.809200 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:30:15.809210 kernel: Run /init as init process Sep 11 00:30:15.809218 kernel: with arguments: Sep 11 00:30:15.809226 kernel: /init Sep 11 00:30:15.809234 kernel: with environment: Sep 11 00:30:15.809241 kernel: HOME=/ Sep 11 00:30:15.809249 kernel: TERM=linux Sep 11 00:30:15.809257 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:30:15.809266 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:30:15.809279 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:30:15.809288 systemd[1]: Detected virtualization kvm. Sep 11 00:30:15.809296 systemd[1]: Detected architecture x86-64. Sep 11 00:30:15.809304 systemd[1]: Running in initrd. Sep 11 00:30:15.809312 systemd[1]: No hostname configured, using default hostname. Sep 11 00:30:15.809321 systemd[1]: Hostname set to . Sep 11 00:30:15.809329 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:30:15.809340 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:30:15.809348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:30:15.809357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:30:15.809366 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 11 00:30:15.809375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:30:15.809384 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:30:15.809393 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:30:15.809405 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:30:15.809414 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:30:15.809422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:30:15.809431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:30:15.809439 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:30:15.809449 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:30:15.809460 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:30:15.809470 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:30:15.809481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:30:15.809494 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:30:15.809505 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:30:15.809515 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:30:15.809526 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:30:15.809553 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:30:15.809564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:30:15.809574 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 11 00:30:15.809585 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:30:15.809598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:30:15.809609 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:30:15.809621 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:30:15.809632 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:30:15.809642 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:30:15.809653 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:30:15.809664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:30:15.809674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:30:15.809685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:30:15.809694 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:30:15.809703 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:30:15.809733 systemd-journald[220]: Collecting audit messages is disabled. Sep 11 00:30:15.809755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:15.809763 systemd-journald[220]: Journal started Sep 11 00:30:15.809782 systemd-journald[220]: Runtime Journal (/run/log/journal/c17651f6e85e490b875b255a7e2d063e) is 6M, max 48.2M, 42.2M free. Sep 11 00:30:15.809820 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:30:15.812567 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 11 00:30:15.813945 systemd-modules-load[222]: Inserted module 'overlay' Sep 11 00:30:15.814805 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:30:15.825118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:30:15.826264 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:30:15.839131 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:30:15.842048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:30:15.844617 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:30:15.846278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:30:15.849117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:30:15.851989 kernel: Bridge firewalling registered Sep 11 00:30:15.849499 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 11 00:30:15.850818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:30:15.853728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:30:15.855110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:30:15.887735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:30:15.889779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 11 00:30:15.897173 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:30:15.939282 systemd-resolved[269]: Positive Trust Anchors: Sep 11 00:30:15.939296 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:30:15.939327 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:30:15.941831 systemd-resolved[269]: Defaulting to hostname 'linux'. Sep 11 00:30:15.947399 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:30:15.947862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:30:16.002562 kernel: SCSI subsystem initialized Sep 11 00:30:16.011565 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:30:16.022567 kernel: iscsi: registered transport (tcp) Sep 11 00:30:16.044581 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:30:16.044638 kernel: QLogic iSCSI HBA Driver Sep 11 00:30:16.065120 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 11 00:30:16.089964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:30:16.091420 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:30:16.143530 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:30:16.145391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:30:16.212567 kernel: raid6: avx2x4 gen() 25631 MB/s Sep 11 00:30:16.229556 kernel: raid6: avx2x2 gen() 26132 MB/s Sep 11 00:30:16.246601 kernel: raid6: avx2x1 gen() 20326 MB/s Sep 11 00:30:16.246644 kernel: raid6: using algorithm avx2x2 gen() 26132 MB/s Sep 11 00:30:16.264596 kernel: raid6: .... xor() 19158 MB/s, rmw enabled Sep 11 00:30:16.264619 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:30:16.285567 kernel: xor: automatically using best checksumming function avx Sep 11 00:30:16.453581 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:30:16.461897 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:30:16.465831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:30:16.504061 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 11 00:30:16.509466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:30:16.510937 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:30:16.538322 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 11 00:30:16.567806 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:30:16.570154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:30:16.648156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:30:16.652579 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 11 00:30:16.684570 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 11 00:30:16.686937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 11 00:30:16.694284 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 11 00:30:16.694306 kernel: GPT:9289727 != 19775487 Sep 11 00:30:16.694316 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 11 00:30:16.694844 kernel: GPT:9289727 != 19775487 Sep 11 00:30:16.694862 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 11 00:30:16.695942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:30:16.710564 kernel: libata version 3.00 loaded. Sep 11 00:30:16.712551 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 11 00:30:16.719037 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:30:16.722856 kernel: ahci 0000:00:1f.2: version 3.0 Sep 11 00:30:16.723067 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 11 00:30:16.726789 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 11 00:30:16.726957 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 11 00:30:16.727093 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 11 00:30:16.729718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:30:16.735198 kernel: AES CTR mode by8 optimization enabled Sep 11 00:30:16.730029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:16.741205 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:30:16.748527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 11 00:30:16.760780 kernel: scsi host0: ahci Sep 11 00:30:16.760991 kernel: scsi host1: ahci Sep 11 00:30:16.768574 kernel: scsi host2: ahci Sep 11 00:30:16.780728 kernel: scsi host3: ahci Sep 11 00:30:16.780926 kernel: scsi host4: ahci Sep 11 00:30:16.782990 kernel: scsi host5: ahci Sep 11 00:30:16.783177 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1 Sep 11 00:30:16.783190 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1 Sep 11 00:30:16.784044 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 11 00:30:16.791619 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1 Sep 11 00:30:16.791636 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1 Sep 11 00:30:16.791650 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1 Sep 11 00:30:16.791663 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1 Sep 11 00:30:16.790275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:16.819924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 11 00:30:16.831288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:30:16.840192 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 11 00:30:16.841428 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 11 00:30:16.844831 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:30:16.877868 disk-uuid[635]: Primary Header is updated. Sep 11 00:30:16.877868 disk-uuid[635]: Secondary Entries is updated. 
Sep 11 00:30:16.877868 disk-uuid[635]: Secondary Header is updated. Sep 11 00:30:16.881561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:30:16.885564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:30:17.100745 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 11 00:30:17.100818 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 11 00:30:17.100830 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 11 00:30:17.102566 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 11 00:30:17.102584 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 11 00:30:17.103566 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 11 00:30:17.104579 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:30:17.104593 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 11 00:30:17.104871 kernel: ata3.00: applying bridge limits Sep 11 00:30:17.106038 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:30:17.106052 kernel: ata3.00: configured for UDMA/100 Sep 11 00:30:17.108563 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 11 00:30:17.154117 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 11 00:30:17.154334 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:30:17.174561 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:30:17.556240 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:30:17.557713 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:30:17.558955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:30:17.559303 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:30:17.560916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:30:17.587723 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 11 00:30:17.885728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:30:17.888007 disk-uuid[636]: The operation has completed successfully. Sep 11 00:30:17.926688 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:30:17.926829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:30:17.960623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:30:17.978865 sh[664]: Success Sep 11 00:30:17.999597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:30:17.999660 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:30:17.999676 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:30:18.011573 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:30:18.045792 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:30:18.050159 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:30:18.063624 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:30:18.069812 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (676) Sep 11 00:30:18.069845 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:30:18.069858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:30:18.075565 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:30:18.075586 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:30:18.076690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:30:18.079040 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 11 00:30:18.081409 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:30:18.084247 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:30:18.087342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:30:18.118967 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Sep 11 00:30:18.119042 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:30:18.119057 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:30:18.123569 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:30:18.123622 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:30:18.128570 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:30:18.130140 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:30:18.131722 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 11 00:30:18.218583 ignition[747]: Ignition 2.21.0 Sep 11 00:30:18.218597 ignition[747]: Stage: fetch-offline Sep 11 00:30:18.218626 ignition[747]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:30:18.218636 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:30:18.218723 ignition[747]: parsed url from cmdline: "" Sep 11 00:30:18.218727 ignition[747]: no config URL provided Sep 11 00:30:18.218731 ignition[747]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:30:18.218740 ignition[747]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:30:18.218762 ignition[747]: op(1): [started] loading QEMU firmware config module Sep 11 00:30:18.218768 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:30:18.231709 ignition[747]: op(1): [finished] loading QEMU firmware config module Sep 11 00:30:18.240409 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:30:18.244645 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:30:18.273820 ignition[747]: parsing config with SHA512: 29c71fd442cf3a6ec1f05b4fb0e309f89b2ffafc282028e2ded76425ce6185c22607472266b3e5728380c0442d74c764175f953ddc208ac4bb3df1b8e9b9688e Sep 11 00:30:18.279372 unknown[747]: fetched base config from "system" Sep 11 00:30:18.279382 unknown[747]: fetched user config from "qemu" Sep 11 00:30:18.279710 ignition[747]: fetch-offline: fetch-offline passed Sep 11 00:30:18.279760 ignition[747]: Ignition finished successfully Sep 11 00:30:18.283937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:30:18.293815 systemd-networkd[854]: lo: Link UP Sep 11 00:30:18.293824 systemd-networkd[854]: lo: Gained carrier Sep 11 00:30:18.295508 systemd-networkd[854]: Enumeration completed Sep 11 00:30:18.295648 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 11 00:30:18.295938 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:30:18.295944 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:30:18.296192 systemd[1]: Reached target network.target - Network. Sep 11 00:30:18.297274 systemd-networkd[854]: eth0: Link UP Sep 11 00:30:18.297478 systemd-networkd[854]: eth0: Gained carrier Sep 11 00:30:18.297489 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:30:18.299609 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:30:18.300408 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:30:18.314605 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:30:18.332912 ignition[858]: Ignition 2.21.0 Sep 11 00:30:18.332925 ignition[858]: Stage: kargs Sep 11 00:30:18.333055 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:30:18.333064 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:30:18.333731 ignition[858]: kargs: kargs passed Sep 11 00:30:18.333772 ignition[858]: Ignition finished successfully Sep 11 00:30:18.339793 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:30:18.341199 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 11 00:30:18.372310 ignition[868]: Ignition 2.21.0 Sep 11 00:30:18.372323 ignition[868]: Stage: disks Sep 11 00:30:18.372447 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:30:18.372456 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:30:18.373095 ignition[868]: disks: disks passed Sep 11 00:30:18.373146 ignition[868]: Ignition finished successfully Sep 11 00:30:18.376758 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:30:18.377356 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:30:18.379297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:30:18.379772 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:30:18.380093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:30:18.380414 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:30:18.381770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:30:18.408984 systemd-resolved[269]: Detected conflict on linux IN A 10.0.0.139 Sep 11 00:30:18.408997 systemd-resolved[269]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Sep 11 00:30:18.411023 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:30:18.480035 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:30:18.481456 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:30:18.590586 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:30:18.591617 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:30:18.592395 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:30:18.595633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 11 00:30:18.597581 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:30:18.599773 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:30:18.599834 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:30:18.599861 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:30:18.615452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:30:18.619146 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:30:18.624283 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Sep 11 00:30:18.624319 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:30:18.624334 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:30:18.626557 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:30:18.626578 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:30:18.628675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:30:18.663902 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:30:18.668676 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:30:18.673480 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:30:18.678270 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:30:18.769255 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:30:18.771493 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:30:18.773290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 11 00:30:18.793559 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:30:18.806247 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:30:18.814618 ignition[999]: INFO : Ignition 2.21.0 Sep 11 00:30:18.814618 ignition[999]: INFO : Stage: mount Sep 11 00:30:18.816213 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:30:18.816213 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:30:18.816213 ignition[999]: INFO : mount: mount passed Sep 11 00:30:18.816213 ignition[999]: INFO : Ignition finished successfully Sep 11 00:30:18.817861 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:30:18.820520 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:30:19.068869 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:30:19.070485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:30:19.096356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Sep 11 00:30:19.096400 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:30:19.096412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:30:19.100562 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:30:19.100584 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:30:19.102050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 00:30:19.137865 ignition[1028]: INFO : Ignition 2.21.0
Sep 11 00:30:19.137865 ignition[1028]: INFO : Stage: files
Sep 11 00:30:19.140112 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:19.140112 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:30:19.142491 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:30:19.142491 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:30:19.142491 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:30:19.146676 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:30:19.146676 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:30:19.146676 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:30:19.146676 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:30:19.146676 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 11 00:30:19.144406 unknown[1028]: wrote ssh authorized keys file for user: core
Sep 11 00:30:19.186057 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:30:20.133718 systemd-networkd[854]: eth0: Gained IPv6LL
Sep 11 00:30:20.206964 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:30:20.209407 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:30:20.223653 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 11 00:30:20.739186 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:30:21.080892 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:30:21.080892 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:30:21.084873 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:30:21.087873 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:30:21.087873 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:30:21.087873 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:30:21.092693 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:30:21.092693 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:30:21.092693 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:30:21.092693 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:30:21.107832 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:30:21.111588 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:30:21.113134 ignition[1028]: INFO : files: files passed
Sep 11 00:30:21.113134 ignition[1028]: INFO : Ignition finished successfully
Sep 11 00:30:21.120003 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:30:21.123456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:30:21.126314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:30:21.137232 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:30:21.137353 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:30:21.140778 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:30:21.144503 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:30:21.146187 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:30:21.146187 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:30:21.149485 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:30:21.152427 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:30:21.155212 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:30:21.215934 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:30:21.216066 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:30:21.216936 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:30:21.221005 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:30:21.221323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:30:21.222995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:30:21.258269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:30:21.259786 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:30:21.282480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:30:21.283006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:30:21.283347 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:30:21.283826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:30:21.283931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:30:21.290262 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:30:21.292251 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:30:21.293985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:30:21.295796 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:30:21.296342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:30:21.296827 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:30:21.301084 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:30:21.301393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:30:21.304832 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:30:21.307077 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:30:21.308898 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:30:21.310819 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:30:21.310923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:30:21.313626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:30:21.314152 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:30:21.314425 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:30:21.314575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:30:21.319036 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:30:21.319183 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:30:21.324288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:30:21.324410 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:30:21.325022 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:30:21.325260 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:30:21.330610 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:30:21.330947 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:30:21.333666 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:30:21.333987 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:30:21.334106 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:30:21.336980 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:30:21.337115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:30:21.338940 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:30:21.339064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:30:21.341075 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:30:21.341177 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:30:21.345989 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:30:21.346437 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:30:21.346554 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:30:21.351659 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:30:21.351931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:30:21.352038 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:30:21.352326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:30:21.352417 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:30:21.360686 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:30:21.375754 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:30:21.389020 ignition[1083]: INFO : Ignition 2.21.0
Sep 11 00:30:21.389020 ignition[1083]: INFO : Stage: umount
Sep 11 00:30:21.390809 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:21.390809 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:30:21.390809 ignition[1083]: INFO : umount: umount passed
Sep 11 00:30:21.390809 ignition[1083]: INFO : Ignition finished successfully
Sep 11 00:30:21.392087 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:30:21.392222 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:30:21.394209 systemd[1]: Stopped target network.target - Network.
Sep 11 00:30:21.395126 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:30:21.395176 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:30:21.395459 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:30:21.395498 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:30:21.395781 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:30:21.395828 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:30:21.396111 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:30:21.396155 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:30:21.396555 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:30:21.397003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:30:21.398324 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:30:21.404077 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:30:21.404203 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:30:21.408946 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:30:21.409503 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:30:21.409935 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:30:21.414428 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:30:21.414693 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:30:21.414800 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:30:21.418941 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:30:21.419449 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:30:21.421417 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:30:21.421458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:30:21.422765 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:30:21.425295 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:30:21.425345 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:30:21.427844 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:30:21.427889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:30:21.432382 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:30:21.432428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:30:21.433014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:30:21.437931 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:30:21.452803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:30:21.452994 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:30:21.453704 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:30:21.453750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:30:21.456372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:30:21.456407 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:30:21.456822 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:30:21.456878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:30:21.457491 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:30:21.457550 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:30:21.458279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:30:21.458322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:30:21.477254 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:30:21.478438 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:30:21.478506 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:30:21.481622 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:30:21.481667 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:30:21.523031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:30:21.523099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:30:21.526804 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:30:21.526916 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:30:21.527638 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:30:21.527728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:30:21.598208 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:30:21.598368 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:30:21.600579 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:30:21.600995 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:30:21.601068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:30:21.602200 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:30:21.624160 systemd[1]: Switching root.
Sep 11 00:30:21.672109 systemd-journald[220]: Journal stopped
Sep 11 00:30:22.914190 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:30:22.914257 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:30:22.914271 kernel: SELinux: policy capability open_perms=1
Sep 11 00:30:22.914283 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:30:22.914294 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:30:22.914305 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:30:22.914317 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:30:22.914333 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:30:22.914347 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:30:22.914358 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:30:22.914379 kernel: audit: type=1403 audit(1757550622.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:30:22.914391 systemd[1]: Successfully loaded SELinux policy in 46.622ms.
Sep 11 00:30:22.914411 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.613ms.
Sep 11 00:30:22.914424 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:30:22.914437 systemd[1]: Detected virtualization kvm.
Sep 11 00:30:22.914448 systemd[1]: Detected architecture x86-64.
Sep 11 00:30:22.914460 systemd[1]: Detected first boot.
Sep 11 00:30:22.914475 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:30:22.914509 zram_generator::config[1128]: No configuration found.
Sep 11 00:30:22.914550 kernel: Guest personality initialized and is inactive
Sep 11 00:30:22.914579 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:30:22.914591 kernel: Initialized host personality
Sep 11 00:30:22.914602 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:30:22.914614 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:30:22.914627 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:30:22.914640 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:30:22.914654 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:30:22.914667 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:30:22.914687 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:30:22.914706 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:30:22.914719 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:30:22.914731 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:30:22.914743 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:30:22.914755 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:30:22.914769 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:30:22.914781 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:30:22.914793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:30:22.914806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:30:22.914818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:30:22.914829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:30:22.914842 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:30:22.914856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:30:22.914869 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:30:22.914881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:30:22.914892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:30:22.914910 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:30:22.914922 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:30:22.914934 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:30:22.914945 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:30:22.914957 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:30:22.914974 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:30:22.914995 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:30:22.915008 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:30:22.915020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:30:22.915032 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:30:22.915049 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:30:22.915061 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:30:22.915073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:30:22.915085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:30:22.915097 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:30:22.915111 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:30:22.915124 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:30:22.915138 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:30:22.915150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:30:22.915162 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:30:22.915174 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:30:22.915186 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:30:22.915198 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:30:22.915213 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:30:22.915225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:30:22.915237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:30:22.915248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:30:22.915261 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:30:22.915273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:30:22.915285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:30:22.915297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:30:22.915309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:30:22.915323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:30:22.915335 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:30:22.915347 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:30:22.915359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:30:22.915372 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:30:22.915384 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:30:22.915397 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:30:22.915409 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:30:22.915423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:30:22.915435 kernel: fuse: init (API version 7.41)
Sep 11 00:30:22.915446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:30:22.915458 kernel: loop: module loaded
Sep 11 00:30:22.915470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:30:22.915482 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:30:22.915496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:30:22.915508 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:30:22.915520 systemd[1]: Stopped verity-setup.service.
Sep 11 00:30:22.915569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:30:22.915583 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:30:22.915595 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:30:22.915607 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:30:22.915619 kernel: ACPI: bus type drm_connector registered
Sep 11 00:30:22.915633 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:30:22.915644 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:30:22.915657 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:30:22.915669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:30:22.915684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:30:22.915696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:30:22.915730 systemd-journald[1203]: Collecting audit messages is disabled.
Sep 11 00:30:22.915753 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:30:22.915766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:30:22.915782 systemd-journald[1203]: Journal started
Sep 11 00:30:22.915803 systemd-journald[1203]: Runtime Journal (/run/log/journal/c17651f6e85e490b875b255a7e2d063e) is 6M, max 48.2M, 42.2M free.
Sep 11 00:30:22.683139 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:30:22.707303 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:30:22.707776 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:30:22.916585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:30:22.919573 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:30:22.920658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:30:22.920866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:30:22.922134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:30:22.922332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:30:22.923746 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:30:22.923949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:30:22.925235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:30:22.925433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:30:22.926898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:30:22.928236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:30:22.929783 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:30:22.931254 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:30:22.945256 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:30:22.947875 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:30:22.949947 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:30:22.951055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:30:22.951139 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:30:22.953109 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:30:22.958648 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:30:22.959783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:30:22.962106 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:30:22.966638 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:30:22.967910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:30:22.968837 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:30:22.970074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:30:22.970998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:30:22.973724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:30:22.976328 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:30:22.979037 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:30:22.980515 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:30:22.985266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:30:22.996659 kernel: loop0: detected capacity change from 0 to 229808 Sep 11 00:30:23.001262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:30:23.002724 systemd-journald[1203]: Time spent on flushing to /var/log/journal/c17651f6e85e490b875b255a7e2d063e is 31.191ms for 1043 entries. Sep 11 00:30:23.002724 systemd-journald[1203]: System Journal (/var/log/journal/c17651f6e85e490b875b255a7e2d063e) is 8M, max 195.6M, 187.6M free. Sep 11 00:30:23.045405 systemd-journald[1203]: Received client request to flush runtime journal. Sep 11 00:30:23.045462 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:30:23.003018 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:30:23.006100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:30:23.008797 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:30:23.029447 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:30:23.036246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:30:23.048767 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:30:23.050554 kernel: loop1: detected capacity change from 0 to 113872 Sep 11 00:30:23.054622 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:30:23.071066 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Sep 11 00:30:23.071085 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. 
Sep 11 00:30:23.077108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:30:23.082576 kernel: loop2: detected capacity change from 0 to 146240 Sep 11 00:30:23.112673 kernel: loop3: detected capacity change from 0 to 229808 Sep 11 00:30:23.121551 kernel: loop4: detected capacity change from 0 to 113872 Sep 11 00:30:23.129948 kernel: loop5: detected capacity change from 0 to 146240 Sep 11 00:30:23.138604 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 00:30:23.140100 (sd-merge)[1269]: Merged extensions into '/usr'. Sep 11 00:30:23.144433 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:30:23.144453 systemd[1]: Reloading... Sep 11 00:30:23.204565 zram_generator::config[1297]: No configuration found. Sep 11 00:30:23.306336 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:30:23.316703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:23.398669 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:30:23.398821 systemd[1]: Reloading finished in 253 ms. Sep 11 00:30:23.426109 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:30:23.427664 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:30:23.446849 systemd[1]: Starting ensure-sysext.service... Sep 11 00:30:23.473222 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:30:23.482332 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:30:23.482349 systemd[1]: Reloading... 
Sep 11 00:30:23.493862 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:30:23.493905 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:30:23.494214 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:30:23.494487 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:30:23.495391 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:30:23.495686 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Sep 11 00:30:23.495760 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Sep 11 00:30:23.529560 zram_generator::config[1363]: No configuration found. Sep 11 00:30:23.606113 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:30:23.606127 systemd-tmpfiles[1333]: Skipping /boot Sep 11 00:30:23.618332 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:30:23.618347 systemd-tmpfiles[1333]: Skipping /boot Sep 11 00:30:23.664728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:23.750296 systemd[1]: Reloading finished in 267 ms. Sep 11 00:30:23.797400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.797592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:30:23.798889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 11 00:30:23.812437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:30:23.814606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:30:23.815718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:23.815897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:23.816005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.818486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.818729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:30:23.818887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:23.818992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:23.819089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.821954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.822177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 11 00:30:23.830225 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:30:23.831469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:23.831631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:23.831784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:23.832926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:30:23.833158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:30:23.834418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:30:23.834868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:30:23.837245 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:30:23.837456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:30:23.839180 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:30:23.839402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:30:23.844236 systemd[1]: Finished ensure-sysext.service. Sep 11 00:30:23.845589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:30:23.852301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:30:23.854441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:30:23.856672 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Sep 11 00:30:23.857852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:30:23.857918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:30:23.866041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:30:23.871649 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:30:23.874607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:30:23.876582 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:30:23.883188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:30:23.888113 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:30:23.899688 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:30:23.901568 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:30:23.906303 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:30:23.926647 augenrules[1439]: No rules Sep 11 00:30:23.925323 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:30:23.927245 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:30:23.927507 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:30:23.928983 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:30:23.931704 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 11 00:30:23.947294 systemd-udevd[1418]: Using default interface naming scheme 'v255'. Sep 11 00:30:23.947776 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:30:23.967658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:30:23.971130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:30:24.082243 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:30:24.088656 systemd-networkd[1456]: lo: Link UP Sep 11 00:30:24.088667 systemd-networkd[1456]: lo: Gained carrier Sep 11 00:30:24.089421 systemd-networkd[1456]: Enumeration completed Sep 11 00:30:24.089506 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:30:24.092749 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:30:24.096897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:30:24.113856 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:30:24.113866 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:30:24.114302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:30:24.114410 systemd-networkd[1456]: eth0: Link UP Sep 11 00:30:24.114594 systemd-networkd[1456]: eth0: Gained carrier Sep 11 00:30:24.114607 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:30:24.120885 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:30:24.128133 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 11 00:30:24.129860 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:30:24.131633 systemd-networkd[1456]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:30:24.132342 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Sep 11 00:30:25.326637 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 00:30:25.326688 systemd-timesyncd[1411]: Initial clock synchronization to Thu 2025-09-11 00:30:25.326570 UTC. Sep 11 00:30:25.331431 systemd-resolved[1410]: Positive Trust Anchors: Sep 11 00:30:25.331459 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:30:25.331491 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:30:25.337103 systemd-resolved[1410]: Defaulting to hostname 'linux'. Sep 11 00:30:25.339065 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:30:25.340515 systemd[1]: Reached target network.target - Network. Sep 11 00:30:25.341444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:30:25.342687 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:30:25.343785 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 11 00:30:25.345006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:30:25.346292 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:30:25.347697 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:30:25.349061 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:30:25.350686 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:30:25.352014 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:30:25.352106 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:30:25.353127 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:30:25.354935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:30:25.357075 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:30:25.358554 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:30:25.361964 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:30:25.363637 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:30:25.364881 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:30:25.368074 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:30:25.368412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:30:25.369939 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:30:25.372143 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 11 00:30:25.373737 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:30:25.375235 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:30:25.377049 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:30:25.382805 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:30:25.383968 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:30:25.385192 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:30:25.385280 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:30:25.386406 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 11 00:30:25.386662 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 11 00:30:25.386823 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:30:25.387801 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:30:25.390791 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:30:25.394294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:30:25.402293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:30:25.406214 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:30:25.408109 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:30:25.411145 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:30:25.413217 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:30:25.420183 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 11 00:30:25.422238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:30:25.425808 jq[1511]: false Sep 11 00:30:25.424834 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:30:25.430844 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:30:25.433967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:30:25.434485 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:30:25.437265 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:30:25.441491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:30:25.447746 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Sep 11 00:30:25.447755 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Sep 11 00:30:25.454996 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:30:25.456722 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:30:25.456973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:30:25.457309 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:30:25.457571 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:30:25.462738 jq[1530]: true Sep 11 00:30:25.471839 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Sep 11 00:30:25.471839 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 11 00:30:25.471839 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Sep 11 00:30:25.471310 oslogin_cache_refresh[1519]: Failure getting users, quitting Sep 11 00:30:25.471331 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:30:25.471385 oslogin_cache_refresh[1519]: Refreshing group entry cache Sep 11 00:30:25.474239 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:30:25.477592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:30:25.477974 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:30:25.483099 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Sep 11 00:30:25.483099 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:30:25.482281 oslogin_cache_refresh[1519]: Failure getting groups, quitting Sep 11 00:30:25.482293 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:30:25.490429 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:30:25.490817 update_engine[1528]: I20250911 00:30:25.490455 1528 main.cc:92] Flatcar Update Engine starting Sep 11 00:30:25.492290 tar[1536]: linux-amd64/LICENSE Sep 11 00:30:25.492290 tar[1536]: linux-amd64/helm Sep 11 00:30:25.492766 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:30:25.501713 jq[1542]: true Sep 11 00:30:25.509591 dbus-daemon[1504]: [system] SELinux support is enabled Sep 11 00:30:25.510001 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 11 00:30:25.521246 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:30:25.521490 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:30:25.522848 update_engine[1528]: I20250911 00:30:25.521922 1528 update_check_scheduler.cc:74] Next update check in 3m1s Sep 11 00:30:25.522970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:30:25.523188 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:30:25.525801 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:30:25.539305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:30:25.543425 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:30:25.579673 extend-filesystems[1515]: Found /dev/vda6 Sep 11 00:30:25.587569 extend-filesystems[1515]: Found /dev/vda9 Sep 11 00:30:25.587569 extend-filesystems[1515]: Checking size of /dev/vda9 Sep 11 00:30:25.602091 extend-filesystems[1515]: Resized partition /dev/vda9 Sep 11 00:30:25.617099 extend-filesystems[1588]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:30:25.619423 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:30:25.672250 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:30:25.670653 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:30:25.671976 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 11 00:30:25.686087 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 00:30:25.692097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:25.712098 kernel: kvm_amd: TSC scaling supported Sep 11 00:30:25.712223 kernel: kvm_amd: Nested Virtualization enabled Sep 11 00:30:25.712252 kernel: kvm_amd: Nested Paging enabled Sep 11 00:30:25.712279 kernel: kvm_amd: LBR virtualization supported Sep 11 00:30:25.712303 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 11 00:30:25.712329 kernel: kvm_amd: Virtual GIF supported Sep 11 00:30:25.731956 containerd[1543]: time="2025-09-11T00:30:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:30:25.732628 containerd[1543]: time="2025-09-11T00:30:25.732577212Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:30:25.741553 containerd[1543]: time="2025-09-11T00:30:25.741514379Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.834µs" Sep 11 00:30:25.741553 containerd[1543]: time="2025-09-11T00:30:25.741545848Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:30:25.741608 containerd[1543]: time="2025-09-11T00:30:25.741561688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:30:25.741772 containerd[1543]: time="2025-09-11T00:30:25.741747426Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:30:25.741772 containerd[1543]: time="2025-09-11T00:30:25.741768996Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:30:25.763929 kernel: 
EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 00:30:25.763966 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:30:25.763979 containerd[1543]: time="2025-09-11T00:30:25.742520856Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:30:25.764132 containerd[1543]: time="2025-09-11T00:30:25.764108363Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764193162Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764529342Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764546063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764556863Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764566181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764658023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764879739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764909324Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764918411Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.764989074Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.765353227Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:30:25.765574 containerd[1543]: time="2025-09-11T00:30:25.765432605Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:30:25.765422 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:30:25.765968 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:30:25.765968 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 00:30:25.765968 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 00:30:25.770786 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Sep 11 00:30:25.767428 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 11 00:30:25.776139 containerd[1543]: time="2025-09-11T00:30:25.776097853Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:30:25.776176 containerd[1543]: time="2025-09-11T00:30:25.776167313Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:30:25.776208 containerd[1543]: time="2025-09-11T00:30:25.776183393Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:30:25.776208 containerd[1543]: time="2025-09-11T00:30:25.776197109Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:30:25.776291 containerd[1543]: time="2025-09-11T00:30:25.776271909Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:30:25.776291 containerd[1543]: time="2025-09-11T00:30:25.776289382Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:30:25.776329 containerd[1543]: time="2025-09-11T00:30:25.776303739Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:30:25.776329 containerd[1543]: time="2025-09-11T00:30:25.776315671Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:30:25.776329 containerd[1543]: time="2025-09-11T00:30:25.776326652Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:30:25.776387 containerd[1543]: time="2025-09-11T00:30:25.776337392Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:30:25.776387 containerd[1543]: time="2025-09-11T00:30:25.776347060Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 
00:30:25.776387 containerd[1543]: time="2025-09-11T00:30:25.776359734Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:30:25.776526 containerd[1543]: time="2025-09-11T00:30:25.776504585Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:30:25.776548 containerd[1543]: time="2025-09-11T00:30:25.776536686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:30:25.776568 containerd[1543]: time="2025-09-11T00:30:25.776551393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:30:25.776568 containerd[1543]: time="2025-09-11T00:30:25.776562163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776573044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776582922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776593172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776602840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776622597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776632555Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:30:25.776686 containerd[1543]: time="2025-09-11T00:30:25.776642213Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:30:25.778628 containerd[1543]: time="2025-09-11T00:30:25.776702987Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:30:25.778664 containerd[1543]: time="2025-09-11T00:30:25.778634279Z" level=info msg="Start snapshots syncer" Sep 11 00:30:25.778697 containerd[1543]: time="2025-09-11T00:30:25.778679293Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:30:25.781166 containerd[1543]: time="2025-09-11T00:30:25.779028307Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.782837219Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.782961923Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783149495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783171917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783182747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783192435Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783206051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783216340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783226489Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783254892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783265422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783275922Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783310987Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:30:25.783911 containerd[1543]: time="2025-09-11T00:30:25.783328150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783337347Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783346604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783353818Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783362744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783375819Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 
00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783408610Z" level=info msg="runtime interface created" Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783414461Z" level=info msg="created NRI interface" Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783422667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783438797Z" level=info msg="Connect containerd service" Sep 11 00:30:25.784201 containerd[1543]: time="2025-09-11T00:30:25.783471979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:30:25.784381 containerd[1543]: time="2025-09-11T00:30:25.784297738Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:30:25.814246 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:30:25.814276 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:30:25.815307 systemd-logind[1526]: New seat seat0. Sep 11 00:30:25.816473 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 11 00:30:25.877170 containerd[1543]: time="2025-09-11T00:30:25.877042534Z" level=info msg="Start subscribing containerd event" Sep 11 00:30:25.877170 containerd[1543]: time="2025-09-11T00:30:25.877110251Z" level=info msg="Start recovering state" Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877211431Z" level=info msg="Start event monitor" Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877228383Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877237960Z" level=info msg="Start streaming server" Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877248440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877256585Z" level=info msg="runtime interface starting up..." Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877263709Z" level=info msg="starting plugins..." Sep 11 00:30:25.877298 containerd[1543]: time="2025-09-11T00:30:25.877280490Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:30:25.877907 containerd[1543]: time="2025-09-11T00:30:25.877884062Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:30:25.878109 containerd[1543]: time="2025-09-11T00:30:25.878018414Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:30:25.878655 containerd[1543]: time="2025-09-11T00:30:25.878619010Z" level=info msg="containerd successfully booted in 0.149067s" Sep 11 00:30:25.878776 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:30:26.000968 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:30:26.023470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:30:26.026294 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 11 00:30:26.050944 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:30:26.051222 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:30:26.053871 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:30:26.077848 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:30:26.080713 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:30:26.082798 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:30:26.084140 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:30:26.126130 tar[1536]: linux-amd64/README.md Sep 11 00:30:26.149087 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:30:27.151233 systemd-networkd[1456]: eth0: Gained IPv6LL Sep 11 00:30:27.154466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:30:27.156343 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:30:27.158987 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:30:27.161359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:27.163538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:30:27.192368 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:30:27.194137 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:30:27.194382 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:30:27.196490 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:30:27.879613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:27.881822 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 11 00:30:27.883604 systemd[1]: Startup finished in 2.800s (kernel) + 6.570s (initrd) + 4.546s (userspace) = 13.917s. Sep 11 00:30:27.889586 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:30:28.313800 kubelet[1661]: E0911 00:30:28.313713 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:30:28.318276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:30:28.318534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:30:28.318892 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 266.6M memory peak. Sep 11 00:30:29.701195 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:30:29.702989 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:47938.service - OpenSSH per-connection server daemon (10.0.0.1:47938). Sep 11 00:30:29.753500 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 47938 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:29.755362 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:29.766823 systemd-logind[1526]: New session 1 of user core. Sep 11 00:30:29.768255 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:30:29.769381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:30:29.804725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:30:29.807439 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 11 00:30:29.823686 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:30:29.826069 systemd-logind[1526]: New session c1 of user core. Sep 11 00:30:29.964563 systemd[1678]: Queued start job for default target default.target. Sep 11 00:30:29.985171 systemd[1678]: Created slice app.slice - User Application Slice. Sep 11 00:30:29.985194 systemd[1678]: Reached target paths.target - Paths. Sep 11 00:30:29.985229 systemd[1678]: Reached target timers.target - Timers. Sep 11 00:30:29.986583 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:30:29.996473 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:30:29.996532 systemd[1678]: Reached target sockets.target - Sockets. Sep 11 00:30:29.996565 systemd[1678]: Reached target basic.target - Basic System. Sep 11 00:30:29.996601 systemd[1678]: Reached target default.target - Main User Target. Sep 11 00:30:29.996633 systemd[1678]: Startup finished in 164ms. Sep 11 00:30:29.997072 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:30:29.998686 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:30:30.065997 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:38890.service - OpenSSH per-connection server daemon (10.0.0.1:38890). Sep 11 00:30:30.118458 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 38890 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:30.119706 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:30.123926 systemd-logind[1526]: New session 2 of user core. Sep 11 00:30:30.134159 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 11 00:30:30.185592 sshd[1691]: Connection closed by 10.0.0.1 port 38890 Sep 11 00:30:30.185940 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:30.194401 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:38890.service: Deactivated successfully. Sep 11 00:30:30.195864 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:30:30.196572 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:30:30.198963 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:38896.service - OpenSSH per-connection server daemon (10.0.0.1:38896). Sep 11 00:30:30.199676 systemd-logind[1526]: Removed session 2. Sep 11 00:30:30.242124 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 38896 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:30.243347 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:30.247429 systemd-logind[1526]: New session 3 of user core. Sep 11 00:30:30.257156 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:30:30.305248 sshd[1699]: Connection closed by 10.0.0.1 port 38896 Sep 11 00:30:30.305488 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:30.323326 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:38896.service: Deactivated successfully. Sep 11 00:30:30.324831 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:30:30.325514 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:30:30.327876 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:38904.service - OpenSSH per-connection server daemon (10.0.0.1:38904). Sep 11 00:30:30.328448 systemd-logind[1526]: Removed session 3. 
Sep 11 00:30:30.378462 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 38904 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:30.379893 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:30.384574 systemd-logind[1526]: New session 4 of user core. Sep 11 00:30:30.397195 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:30:30.450470 sshd[1707]: Connection closed by 10.0.0.1 port 38904 Sep 11 00:30:30.450768 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:30.463620 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:38904.service: Deactivated successfully. Sep 11 00:30:30.465479 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:30:30.466147 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:30:30.469020 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:38910.service - OpenSSH per-connection server daemon (10.0.0.1:38910). Sep 11 00:30:30.469558 systemd-logind[1526]: Removed session 4. Sep 11 00:30:30.513534 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 38910 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:30.515010 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:30.519281 systemd-logind[1526]: New session 5 of user core. Sep 11 00:30:30.529160 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 00:30:30.586934 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:30:30.587253 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:30:30.602966 sudo[1717]: pam_unix(sudo:session): session closed for user root Sep 11 00:30:30.604576 sshd[1716]: Connection closed by 10.0.0.1 port 38910 Sep 11 00:30:30.604954 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:30.618345 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:38910.service: Deactivated successfully. Sep 11 00:30:30.620609 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:30:30.621429 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:30:30.624775 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:38924.service - OpenSSH per-connection server daemon (10.0.0.1:38924). Sep 11 00:30:30.625347 systemd-logind[1526]: Removed session 5. Sep 11 00:30:30.670682 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:30.671992 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:30.676094 systemd-logind[1526]: New session 6 of user core. Sep 11 00:30:30.686151 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 00:30:30.738527 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:30:30.738808 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:30:30.974824 sudo[1727]: pam_unix(sudo:session): session closed for user root Sep 11 00:30:30.982752 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:30:30.983151 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:30:30.993064 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:30:31.045245 augenrules[1749]: No rules Sep 11 00:30:31.047570 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:30:31.047936 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:30:31.049512 sudo[1726]: pam_unix(sudo:session): session closed for user root Sep 11 00:30:31.051233 sshd[1725]: Connection closed by 10.0.0.1 port 38924 Sep 11 00:30:31.051642 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:31.059848 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:38924.service: Deactivated successfully. Sep 11 00:30:31.061754 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:30:31.062670 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:30:31.065670 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:38926.service - OpenSSH per-connection server daemon (10.0.0.1:38926). Sep 11 00:30:31.066278 systemd-logind[1526]: Removed session 6. Sep 11 00:30:31.120139 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 38926 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:30:31.121842 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:31.126213 systemd-logind[1526]: New session 7 of user core. 
Sep 11 00:30:31.137189 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:30:31.191266 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:30:31.191582 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:30:31.488610 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:30:31.502386 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:30:31.718349 dockerd[1781]: time="2025-09-11T00:30:31.718266437Z" level=info msg="Starting up" Sep 11 00:30:31.719202 dockerd[1781]: time="2025-09-11T00:30:31.719178598Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:30:32.523985 dockerd[1781]: time="2025-09-11T00:30:32.523922813Z" level=info msg="Loading containers: start." Sep 11 00:30:32.535069 kernel: Initializing XFRM netlink socket Sep 11 00:30:32.982693 systemd-networkd[1456]: docker0: Link UP Sep 11 00:30:32.988677 dockerd[1781]: time="2025-09-11T00:30:32.988632073Z" level=info msg="Loading containers: done." Sep 11 00:30:33.001916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2322275367-merged.mount: Deactivated successfully. 
Sep 11 00:30:33.003640 dockerd[1781]: time="2025-09-11T00:30:33.003595489Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:30:33.003697 dockerd[1781]: time="2025-09-11T00:30:33.003677302Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:30:33.003795 dockerd[1781]: time="2025-09-11T00:30:33.003772130Z" level=info msg="Initializing buildkit" Sep 11 00:30:33.035649 dockerd[1781]: time="2025-09-11T00:30:33.035614865Z" level=info msg="Completed buildkit initialization" Sep 11 00:30:33.041113 dockerd[1781]: time="2025-09-11T00:30:33.041082177Z" level=info msg="Daemon has completed initialization" Sep 11 00:30:33.041161 dockerd[1781]: time="2025-09-11T00:30:33.041125868Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:30:33.041366 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:30:33.812931 containerd[1543]: time="2025-09-11T00:30:33.812879506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 00:30:34.601401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175035241.mount: Deactivated successfully. 
Sep 11 00:30:36.945098 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1432447028 wd_nsec: 1432446938 Sep 11 00:30:37.914825 containerd[1543]: time="2025-09-11T00:30:37.914756084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:37.915817 containerd[1543]: time="2025-09-11T00:30:37.915757201Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 11 00:30:37.917135 containerd[1543]: time="2025-09-11T00:30:37.917102173Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:37.921120 containerd[1543]: time="2025-09-11T00:30:37.921078368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:37.922403 containerd[1543]: time="2025-09-11T00:30:37.922360081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 4.10943013s" Sep 11 00:30:37.922403 containerd[1543]: time="2025-09-11T00:30:37.922398093Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 11 00:30:37.922926 containerd[1543]: time="2025-09-11T00:30:37.922902027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 11 00:30:38.452702 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Sep 11 00:30:38.454474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:38.817020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:38.824277 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:30:39.261176 kubelet[2057]: E0911 00:30:39.261095 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:30:39.269012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:30:39.269240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:30:39.269597 systemd[1]: kubelet.service: Consumed 447ms CPU time, 109.8M memory peak. 
Sep 11 00:30:40.073753 containerd[1543]: time="2025-09-11T00:30:40.073657153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:40.074361 containerd[1543]: time="2025-09-11T00:30:40.074332499Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 11 00:30:40.075656 containerd[1543]: time="2025-09-11T00:30:40.075605746Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:40.078143 containerd[1543]: time="2025-09-11T00:30:40.078077230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:40.079190 containerd[1543]: time="2025-09-11T00:30:40.079160271Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.156229811s" Sep 11 00:30:40.079238 containerd[1543]: time="2025-09-11T00:30:40.079193063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 11 00:30:40.079745 containerd[1543]: time="2025-09-11T00:30:40.079713849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 11 00:30:41.820491 containerd[1543]: time="2025-09-11T00:30:41.820423199Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:41.821379 containerd[1543]: time="2025-09-11T00:30:41.821319339Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 11 00:30:41.822600 containerd[1543]: time="2025-09-11T00:30:41.822543564Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:41.824781 containerd[1543]: time="2025-09-11T00:30:41.824745553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:41.825849 containerd[1543]: time="2025-09-11T00:30:41.825808226Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.745802669s" Sep 11 00:30:41.827053 containerd[1543]: time="2025-09-11T00:30:41.825980048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 11 00:30:41.828078 containerd[1543]: time="2025-09-11T00:30:41.828052884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 11 00:30:42.929523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950793603.mount: Deactivated successfully. 
Sep 11 00:30:44.538634 containerd[1543]: time="2025-09-11T00:30:44.538540297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:44.636854 containerd[1543]: time="2025-09-11T00:30:44.636797069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 11 00:30:44.687993 containerd[1543]: time="2025-09-11T00:30:44.687920585Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:44.729257 containerd[1543]: time="2025-09-11T00:30:44.729162254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:44.729955 containerd[1543]: time="2025-09-11T00:30:44.729885660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.901799724s" Sep 11 00:30:44.729955 containerd[1543]: time="2025-09-11T00:30:44.729946474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 11 00:30:44.730513 containerd[1543]: time="2025-09-11T00:30:44.730449307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 11 00:30:45.333164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332748662.mount: Deactivated successfully. 
Sep 11 00:30:45.999805 containerd[1543]: time="2025-09-11T00:30:45.999737842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:46.000517 containerd[1543]: time="2025-09-11T00:30:46.000449687Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 11 00:30:46.001651 containerd[1543]: time="2025-09-11T00:30:46.001610674Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:46.004382 containerd[1543]: time="2025-09-11T00:30:46.004329452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:46.005483 containerd[1543]: time="2025-09-11T00:30:46.005431728Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.27494446s" Sep 11 00:30:46.005483 containerd[1543]: time="2025-09-11T00:30:46.005476402Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 11 00:30:46.005933 containerd[1543]: time="2025-09-11T00:30:46.005900337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:30:46.574109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704442287.mount: Deactivated successfully. 
Sep 11 00:30:46.581474 containerd[1543]: time="2025-09-11T00:30:46.581417702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:30:46.582133 containerd[1543]: time="2025-09-11T00:30:46.582110511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:30:46.583373 containerd[1543]: time="2025-09-11T00:30:46.583350045Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:30:46.585674 containerd[1543]: time="2025-09-11T00:30:46.585628718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:30:46.586358 containerd[1543]: time="2025-09-11T00:30:46.586328159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.400591ms" Sep 11 00:30:46.586399 containerd[1543]: time="2025-09-11T00:30:46.586361702Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:30:46.587000 containerd[1543]: time="2025-09-11T00:30:46.586811415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 11 00:30:47.003084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13747597.mount: Deactivated 
successfully. Sep 11 00:30:49.452700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:30:49.454677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:49.460811 containerd[1543]: time="2025-09-11T00:30:49.460761651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:49.461895 containerd[1543]: time="2025-09-11T00:30:49.461858748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 11 00:30:49.462994 containerd[1543]: time="2025-09-11T00:30:49.462928143Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:49.465597 containerd[1543]: time="2025-09-11T00:30:49.465550470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:49.466647 containerd[1543]: time="2025-09-11T00:30:49.466588266Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.879748187s" Sep 11 00:30:49.466647 containerd[1543]: time="2025-09-11T00:30:49.466631778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 11 00:30:49.669915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:30:49.680330 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:30:49.807207 kubelet[2208]: E0911 00:30:49.807067 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:30:49.811481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:30:49.811685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:30:49.812053 systemd[1]: kubelet.service: Consumed 268ms CPU time, 110.7M memory peak. Sep 11 00:30:52.770980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:52.771157 systemd[1]: kubelet.service: Consumed 268ms CPU time, 110.7M memory peak. Sep 11 00:30:52.773342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:52.798987 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Sep 11 00:30:52.798998 systemd[1]: Reloading... Sep 11 00:30:52.880861 zram_generator::config[2287]: No configuration found. Sep 11 00:30:53.116509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:53.240066 systemd[1]: Reloading finished in 440 ms. Sep 11 00:30:53.315703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:30:53.315806 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:30:53.316103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:30:53.316142 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak. Sep 11 00:30:53.317706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:53.484542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:53.488331 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:30:53.521586 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:30:53.521586 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:30:53.521586 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:30:53.521968 kubelet[2332]: I0911 00:30:53.521627 2332 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:30:53.705557 kubelet[2332]: I0911 00:30:53.705520 2332 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:30:53.705557 kubelet[2332]: I0911 00:30:53.705544 2332 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:30:53.705756 kubelet[2332]: I0911 00:30:53.705726 2332 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:30:53.734519 kubelet[2332]: I0911 00:30:53.734491 2332 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:30:53.734930 kubelet[2332]: E0911 00:30:53.734653 2332 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 11 00:30:53.741422 kubelet[2332]: I0911 00:30:53.741402 2332 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:30:53.747474 kubelet[2332]: I0911 00:30:53.747448 2332 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:30:53.747703 kubelet[2332]: I0911 00:30:53.747677 2332 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:30:53.747843 kubelet[2332]: I0911 00:30:53.747698 2332 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:30:53.747946 kubelet[2332]: I0911 00:30:53.747847 2332 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:30:53.747946 
kubelet[2332]: I0911 00:30:53.747856 2332 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:30:53.747994 kubelet[2332]: I0911 00:30:53.747973 2332 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:30:53.750117 kubelet[2332]: I0911 00:30:53.750095 2332 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:30:53.750117 kubelet[2332]: I0911 00:30:53.750112 2332 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:30:53.750172 kubelet[2332]: I0911 00:30:53.750138 2332 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:30:53.750172 kubelet[2332]: I0911 00:30:53.750156 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:30:53.756610 kubelet[2332]: I0911 00:30:53.756129 2332 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:30:53.756610 kubelet[2332]: E0911 00:30:53.756447 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 00:30:53.756610 kubelet[2332]: I0911 00:30:53.756504 2332 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:30:53.756817 kubelet[2332]: E0911 00:30:53.756791 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 00:30:53.757201 kubelet[2332]: W0911 
00:30:53.757183 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:30:53.759584 kubelet[2332]: I0911 00:30:53.759560 2332 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:30:53.759647 kubelet[2332]: I0911 00:30:53.759603 2332 server.go:1289] "Started kubelet" Sep 11 00:30:53.761065 kubelet[2332]: I0911 00:30:53.761008 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:30:53.762662 kubelet[2332]: I0911 00:30:53.762487 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:30:53.762662 kubelet[2332]: I0911 00:30:53.762508 2332 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:30:53.762662 kubelet[2332]: I0911 00:30:53.762486 2332 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:30:53.763696 kubelet[2332]: I0911 00:30:53.763680 2332 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:30:53.766026 kubelet[2332]: I0911 00:30:53.765071 2332 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:30:53.766026 kubelet[2332]: E0911 00:30:53.765494 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:53.766026 kubelet[2332]: I0911 00:30:53.765543 2332 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:30:53.766026 kubelet[2332]: I0911 00:30:53.765721 2332 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:30:53.766026 kubelet[2332]: I0911 00:30:53.765776 2332 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:30:53.766390 kubelet[2332]: E0911 00:30:53.764537 2332 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641301f8185a16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:30:53.759576598 +0000 UTC m=+0.266779913,LastTimestamp:2025-09-11 00:30:53.759576598 +0000 UTC m=+0.266779913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:30:53.766790 kubelet[2332]: E0911 00:30:53.766764 2332 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:30:53.767874 kubelet[2332]: E0911 00:30:53.767098 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 11 00:30:53.768444 kubelet[2332]: I0911 00:30:53.768419 2332 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:30:53.768444 kubelet[2332]: I0911 00:30:53.768439 2332 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:30:53.768546 kubelet[2332]: I0911 00:30:53.768509 2332 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:30:53.769690 kubelet[2332]: E0911 00:30:53.768675 2332 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Sep 11 00:30:53.777896 kubelet[2332]: I0911 00:30:53.777874 2332 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:30:53.778024 kubelet[2332]: I0911 00:30:53.778007 2332 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:30:53.778095 kubelet[2332]: I0911 00:30:53.778084 2332 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:30:53.865631 kubelet[2332]: E0911 00:30:53.865571 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:53.966184 kubelet[2332]: E0911 00:30:53.966129 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:53.969947 kubelet[2332]: E0911 00:30:53.969920 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Sep 11 00:30:54.025947 kubelet[2332]: I0911 00:30:54.025522 2332 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:30:54.027047 kubelet[2332]: I0911 00:30:54.027011 2332 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 00:30:54.027091 kubelet[2332]: I0911 00:30:54.027052 2332 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:30:54.027091 kubelet[2332]: I0911 00:30:54.027080 2332 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 00:30:54.027091 kubelet[2332]: I0911 00:30:54.027089 2332 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:30:54.027152 kubelet[2332]: E0911 00:30:54.027134 2332 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:30:54.027772 kubelet[2332]: E0911 00:30:54.027584 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 11 00:30:54.066969 kubelet[2332]: E0911 00:30:54.066932 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:54.128284 kubelet[2332]: E0911 00:30:54.128232 2332 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:30:54.161322 kubelet[2332]: I0911 00:30:54.161288 2332 policy_none.go:49] "None policy: Start" Sep 11 00:30:54.161322 kubelet[2332]: I0911 00:30:54.161311 2332 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:30:54.161416 kubelet[2332]: I0911 00:30:54.161332 2332 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:30:54.167254 kubelet[2332]: E0911 00:30:54.167228 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:54.168124 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:30:54.192160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:30:54.195069 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 11 00:30:54.212931 kubelet[2332]: E0911 00:30:54.212904 2332 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:30:54.213260 kubelet[2332]: I0911 00:30:54.213157 2332 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:30:54.213260 kubelet[2332]: I0911 00:30:54.213170 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:30:54.213402 kubelet[2332]: I0911 00:30:54.213382 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:30:54.214115 kubelet[2332]: E0911 00:30:54.214027 2332 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:30:54.214179 kubelet[2332]: E0911 00:30:54.214130 2332 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:30:54.315114 kubelet[2332]: I0911 00:30:54.315009 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:30:54.315675 kubelet[2332]: E0911 00:30:54.315642 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 11 00:30:54.338632 systemd[1]: Created slice kubepods-burstable-pod65e7b52d780fafe9d038541bf5a0ea33.slice - libcontainer container kubepods-burstable-pod65e7b52d780fafe9d038541bf5a0ea33.slice. 
Sep 11 00:30:54.352408 kubelet[2332]: E0911 00:30:54.352368 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:54.355169 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 11 00:30:54.365160 kubelet[2332]: E0911 00:30:54.365125 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:54.367569 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 11 00:30:54.369153 kubelet[2332]: I0911 00:30:54.369137 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:54.369213 kubelet[2332]: I0911 00:30:54.369159 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:54.369213 kubelet[2332]: E0911 00:30:54.369163 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:54.369213 kubelet[2332]: I0911 00:30:54.369193 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:54.369213 kubelet[2332]: I0911 00:30:54.369210 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:54.369302 kubelet[2332]: I0911 00:30:54.369225 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:54.369302 kubelet[2332]: I0911 00:30:54.369254 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:30:54.369302 kubelet[2332]: I0911 00:30:54.369284 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:54.369370 kubelet[2332]: I0911 00:30:54.369309 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:54.369370 kubelet[2332]: I0911 00:30:54.369336 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:54.370463 kubelet[2332]: E0911 00:30:54.370439 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Sep 11 00:30:54.517027 kubelet[2332]: I0911 00:30:54.516998 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:30:54.517306 kubelet[2332]: E0911 00:30:54.517271 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 11 00:30:54.654005 containerd[1543]: time="2025-09-11T00:30:54.653877793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65e7b52d780fafe9d038541bf5a0ea33,Namespace:kube-system,Attempt:0,}" Sep 11 00:30:54.666567 containerd[1543]: time="2025-09-11T00:30:54.666522371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 11 00:30:54.668663 kubelet[2332]: E0911 00:30:54.668627 2332 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 00:30:54.670302 containerd[1543]: time="2025-09-11T00:30:54.670255230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 11 00:30:54.675117 containerd[1543]: time="2025-09-11T00:30:54.675070819Z" level=info msg="connecting to shim 843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5" address="unix:///run/containerd/s/76bb1bff61e07e5834dbb180e26ffdd78ef6b434b07a9d2a074ff93c3411a270" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:54.707105 containerd[1543]: time="2025-09-11T00:30:54.707039661Z" level=info msg="connecting to shim 1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e" address="unix:///run/containerd/s/a44e0520fa08c1f7c73100f5668574bb4b522e16b12d3a12859a990142557ab1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:54.709313 systemd[1]: Started cri-containerd-843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5.scope - libcontainer container 843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5. Sep 11 00:30:54.714237 containerd[1543]: time="2025-09-11T00:30:54.714179298Z" level=info msg="connecting to shim 3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2" address="unix:///run/containerd/s/7d2accc1e72577ba2540cedabd9c17a2df1ea40f43d8eb8a4f27eb4c2ad59c30" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:54.738233 systemd[1]: Started cri-containerd-1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e.scope - libcontainer container 1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e. 
Sep 11 00:30:54.742437 systemd[1]: Started cri-containerd-3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2.scope - libcontainer container 3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2. Sep 11 00:30:54.763821 containerd[1543]: time="2025-09-11T00:30:54.763763338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65e7b52d780fafe9d038541bf5a0ea33,Namespace:kube-system,Attempt:0,} returns sandbox id \"843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5\"" Sep 11 00:30:54.771065 containerd[1543]: time="2025-09-11T00:30:54.770284996Z" level=info msg="CreateContainer within sandbox \"843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:30:54.778498 containerd[1543]: time="2025-09-11T00:30:54.778470093Z" level=info msg="Container 7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:54.789196 containerd[1543]: time="2025-09-11T00:30:54.789139318Z" level=info msg="CreateContainer within sandbox \"843dc401819f4d32d7ae09c9d31a77c9d4b17ecff65f38248af3a5c7681f00b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23\"" Sep 11 00:30:54.790238 containerd[1543]: time="2025-09-11T00:30:54.790205687Z" level=info msg="StartContainer for \"7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23\"" Sep 11 00:30:54.791581 containerd[1543]: time="2025-09-11T00:30:54.791557372Z" level=info msg="connecting to shim 7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23" address="unix:///run/containerd/s/76bb1bff61e07e5834dbb180e26ffdd78ef6b434b07a9d2a074ff93c3411a270" protocol=ttrpc version=3 Sep 11 00:30:54.797055 containerd[1543]: time="2025-09-11T00:30:54.797006860Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2\"" Sep 11 00:30:54.805132 containerd[1543]: time="2025-09-11T00:30:54.804770967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e\"" Sep 11 00:30:54.805416 containerd[1543]: time="2025-09-11T00:30:54.804980650Z" level=info msg="CreateContainer within sandbox \"3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:30:54.811554 containerd[1543]: time="2025-09-11T00:30:54.811507328Z" level=info msg="CreateContainer within sandbox \"1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:30:54.814966 containerd[1543]: time="2025-09-11T00:30:54.814911120Z" level=info msg="Container 9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:54.815277 systemd[1]: Started cri-containerd-7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23.scope - libcontainer container 7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23. 
Sep 11 00:30:54.828309 containerd[1543]: time="2025-09-11T00:30:54.828245833Z" level=info msg="CreateContainer within sandbox \"3a51a7acb5884e5d96690a92ed61b5dbd1f192822ecd003cb357037184a4b9c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c\"" Sep 11 00:30:54.829675 containerd[1543]: time="2025-09-11T00:30:54.829637312Z" level=info msg="StartContainer for \"9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c\"" Sep 11 00:30:54.830855 containerd[1543]: time="2025-09-11T00:30:54.830219964Z" level=info msg="Container 7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:54.831762 containerd[1543]: time="2025-09-11T00:30:54.831614379Z" level=info msg="connecting to shim 9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c" address="unix:///run/containerd/s/7d2accc1e72577ba2540cedabd9c17a2df1ea40f43d8eb8a4f27eb4c2ad59c30" protocol=ttrpc version=3 Sep 11 00:30:54.840130 containerd[1543]: time="2025-09-11T00:30:54.840064943Z" level=info msg="CreateContainer within sandbox \"1b5a7af6892541e7e4f5e8b4a4b2b2b0cfebbb1897a675a597a9971aa3fcaa0e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d\"" Sep 11 00:30:54.841061 containerd[1543]: time="2025-09-11T00:30:54.841023391Z" level=info msg="StartContainer for \"7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d\"" Sep 11 00:30:54.842190 containerd[1543]: time="2025-09-11T00:30:54.842159851Z" level=info msg="connecting to shim 7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d" address="unix:///run/containerd/s/a44e0520fa08c1f7c73100f5668574bb4b522e16b12d3a12859a990142557ab1" protocol=ttrpc version=3 Sep 11 00:30:54.861479 systemd[1]: Started 
cri-containerd-9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c.scope - libcontainer container 9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c. Sep 11 00:30:54.879233 systemd[1]: Started cri-containerd-7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d.scope - libcontainer container 7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d. Sep 11 00:30:54.890094 containerd[1543]: time="2025-09-11T00:30:54.887141181Z" level=info msg="StartContainer for \"7c99a2db8e6957da931d41f281510cb6028b5158d7b55b5e9c1703792948ba23\" returns successfully" Sep 11 00:30:54.920302 kubelet[2332]: I0911 00:30:54.920193 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:30:54.921725 kubelet[2332]: E0911 00:30:54.921689 2332 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 11 00:30:54.946306 containerd[1543]: time="2025-09-11T00:30:54.946239501Z" level=info msg="StartContainer for \"9f55786f806aa419490c94a7bb211a8ee699dad5841e4b9385c3b2109b584a0c\" returns successfully" Sep 11 00:30:55.035429 kubelet[2332]: E0911 00:30:55.035312 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:55.042099 kubelet[2332]: E0911 00:30:55.042067 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:55.328066 containerd[1543]: time="2025-09-11T00:30:55.327405417Z" level=info msg="StartContainer for \"7503aa5b74bfbc43783e8dc64380dc28cc8d532a1033c9a1c84cee7e9ff7680d\" returns successfully" Sep 11 00:30:55.723437 kubelet[2332]: I0911 00:30:55.723382 2332 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 
00:30:56.049545 kubelet[2332]: E0911 00:30:56.049427 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:56.051588 kubelet[2332]: E0911 00:30:56.051549 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:56.051932 kubelet[2332]: E0911 00:30:56.051910 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:56.065403 kubelet[2332]: E0911 00:30:56.065318 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:30:56.136194 kubelet[2332]: I0911 00:30:56.136146 2332 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:30:56.136194 kubelet[2332]: E0911 00:30:56.136185 2332 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:30:56.152604 kubelet[2332]: E0911 00:30:56.152544 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.253731 kubelet[2332]: E0911 00:30:56.253651 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.354246 kubelet[2332]: E0911 00:30:56.354097 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.455196 kubelet[2332]: E0911 00:30:56.455114 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.556355 kubelet[2332]: E0911 00:30:56.556276 2332 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.657421 kubelet[2332]: E0911 00:30:56.657201 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.757661 kubelet[2332]: E0911 00:30:56.757592 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.857850 kubelet[2332]: E0911 00:30:56.857787 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:56.958982 kubelet[2332]: E0911 00:30:56.958921 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:57.050521 kubelet[2332]: E0911 00:30:57.050484 2332 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:30:57.059655 kubelet[2332]: E0911 00:30:57.059625 2332 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:30:57.167690 kubelet[2332]: I0911 00:30:57.167643 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:57.177170 kubelet[2332]: I0911 00:30:57.177126 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:57.183314 kubelet[2332]: I0911 00:30:57.183275 2332 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:30:57.759831 kubelet[2332]: I0911 00:30:57.759778 2332 apiserver.go:52] "Watching apiserver" Sep 11 00:30:57.766145 kubelet[2332]: I0911 00:30:57.766124 2332 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:30:58.439248 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit 
session-7.scope)... Sep 11 00:30:58.439267 systemd[1]: Reloading... Sep 11 00:30:58.579064 zram_generator::config[2658]: No configuration found. Sep 11 00:30:58.680425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:58.843468 systemd[1]: Reloading finished in 403 ms. Sep 11 00:30:58.878872 kubelet[2332]: I0911 00:30:58.878794 2332 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:30:58.878917 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:58.901923 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:30:58.902337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:58.902403 systemd[1]: kubelet.service: Consumed 735ms CPU time, 131.4M memory peak. Sep 11 00:30:58.905599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:59.222160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:59.227236 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:30:59.263350 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:30:59.264079 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:30:59.264079 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:30:59.264079 kubelet[2700]: I0911 00:30:59.263808 2700 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:30:59.271013 kubelet[2700]: I0911 00:30:59.270947 2700 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:30:59.271013 kubelet[2700]: I0911 00:30:59.270992 2700 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:30:59.272046 kubelet[2700]: I0911 00:30:59.271912 2700 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:30:59.273471 kubelet[2700]: I0911 00:30:59.273445 2700 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 11 00:30:59.275858 kubelet[2700]: I0911 00:30:59.275478 2700 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:30:59.278858 kubelet[2700]: I0911 00:30:59.278837 2700 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:30:59.284152 kubelet[2700]: I0911 00:30:59.284127 2700 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:30:59.284398 kubelet[2700]: I0911 00:30:59.284365 2700 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:30:59.284531 kubelet[2700]: I0911 00:30:59.284389 2700 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:30:59.284617 kubelet[2700]: I0911 00:30:59.284540 2700 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:30:59.284617 
kubelet[2700]: I0911 00:30:59.284549 2700 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:30:59.284617 kubelet[2700]: I0911 00:30:59.284599 2700 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:30:59.284799 kubelet[2700]: I0911 00:30:59.284784 2700 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:30:59.284799 kubelet[2700]: I0911 00:30:59.284796 2700 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:30:59.284852 kubelet[2700]: I0911 00:30:59.284816 2700 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:30:59.284852 kubelet[2700]: I0911 00:30:59.284831 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:30:59.286344 kubelet[2700]: I0911 00:30:59.286325 2700 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:30:59.287800 kubelet[2700]: I0911 00:30:59.287779 2700 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:30:59.297272 kubelet[2700]: I0911 00:30:59.297240 2700 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:30:59.297392 kubelet[2700]: I0911 00:30:59.297286 2700 server.go:1289] "Started kubelet" Sep 11 00:30:59.297905 kubelet[2700]: I0911 00:30:59.297870 2700 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:30:59.298106 kubelet[2700]: I0911 00:30:59.298066 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:30:59.301952 kubelet[2700]: I0911 00:30:59.301935 2700 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:30:59.303051 kubelet[2700]: I0911 00:30:59.302481 2700 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:30:59.305123 
kubelet[2700]: E0911 00:30:59.305059 2700 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:30:59.306283 kubelet[2700]: I0911 00:30:59.306106 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:30:59.306283 kubelet[2700]: I0911 00:30:59.306124 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:30:59.306483 kubelet[2700]: I0911 00:30:59.306472 2700 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:30:59.308262 kubelet[2700]: I0911 00:30:59.308238 2700 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:30:59.308390 kubelet[2700]: I0911 00:30:59.308342 2700 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:30:59.309146 kubelet[2700]: I0911 00:30:59.309122 2700 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:30:59.309423 kubelet[2700]: I0911 00:30:59.309401 2700 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:30:59.310675 kubelet[2700]: I0911 00:30:59.310642 2700 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:30:59.321292 kubelet[2700]: I0911 00:30:59.321263 2700 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:30:59.323391 kubelet[2700]: I0911 00:30:59.323374 2700 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:30:59.323702 kubelet[2700]: I0911 00:30:59.323455 2700 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:30:59.323702 kubelet[2700]: I0911 00:30:59.323474 2700 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:30:59.323702 kubelet[2700]: I0911 00:30:59.323480 2700 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:30:59.323702 kubelet[2700]: E0911 00:30:59.323514 2700 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:30:59.343984 kubelet[2700]: I0911 00:30:59.343955 2700 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:30:59.343984 kubelet[2700]: I0911 00:30:59.343974 2700 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:30:59.344133 kubelet[2700]: I0911 00:30:59.344001 2700 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:30:59.344157 kubelet[2700]: I0911 00:30:59.344147 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:30:59.344177 kubelet[2700]: I0911 00:30:59.344156 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:30:59.344177 kubelet[2700]: I0911 00:30:59.344170 2700 policy_none.go:49] "None policy: Start" Sep 11 00:30:59.344234 kubelet[2700]: I0911 00:30:59.344178 2700 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:30:59.344234 kubelet[2700]: I0911 00:30:59.344187 2700 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:30:59.344275 kubelet[2700]: I0911 00:30:59.344259 2700 state_mem.go:75] "Updated machine memory state" Sep 11 00:30:59.348189 kubelet[2700]: E0911 00:30:59.348168 2700 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:30:59.348376 kubelet[2700]: I0911 
00:30:59.348359 2700 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:30:59.348425 kubelet[2700]: I0911 00:30:59.348373 2700 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:30:59.348736 kubelet[2700]: I0911 00:30:59.348705 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:30:59.350000 kubelet[2700]: E0911 00:30:59.349967 2700 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:30:59.425255 kubelet[2700]: I0911 00:30:59.425219 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:30:59.425255 kubelet[2700]: I0911 00:30:59.425252 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:59.425533 kubelet[2700]: I0911 00:30:59.425219 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.451472 kubelet[2700]: I0911 00:30:59.451441 2700 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:30:59.510841 kubelet[2700]: I0911 00:30:59.510715 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.510841 kubelet[2700]: I0911 00:30:59.510753 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:30:59.510841 kubelet[2700]: I0911 00:30:59.510774 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:59.510841 kubelet[2700]: I0911 00:30:59.510797 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.510841 kubelet[2700]: I0911 00:30:59.510814 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.511084 kubelet[2700]: I0911 00:30:59.510830 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:59.511084 kubelet[2700]: I0911 00:30:59.510848 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e7b52d780fafe9d038541bf5a0ea33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"65e7b52d780fafe9d038541bf5a0ea33\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:59.511084 kubelet[2700]: I0911 00:30:59.510865 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.511084 kubelet[2700]: I0911 00:30:59.510888 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.771133 kubelet[2700]: E0911 00:30:59.770985 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:30:59.771133 kubelet[2700]: E0911 00:30:59.771023 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:30:59.771133 kubelet[2700]: E0911 00:30:59.770985 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:30:59.778293 kubelet[2700]: I0911 00:30:59.777715 2700 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 00:30:59.778478 kubelet[2700]: I0911 00:30:59.778452 2700 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:31:00.286744 kubelet[2700]: I0911 00:31:00.286701 2700 apiserver.go:52] "Watching apiserver" Sep 11 00:31:00.309724 kubelet[2700]: I0911 00:31:00.309677 2700 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:31:00.335552 kubelet[2700]: I0911 00:31:00.335322 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:31:00.336542 kubelet[2700]: I0911 00:31:00.336522 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:31:00.336610 kubelet[2700]: I0911 00:31:00.335862 2700 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:31:00.344026 kubelet[2700]: E0911 00:31:00.343833 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:31:00.347805 kubelet[2700]: E0911 00:31:00.347779 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:31:00.347925 kubelet[2700]: E0911 00:31:00.347810 2700 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:31:00.360818 kubelet[2700]: I0911 00:31:00.360749 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.36072911 podStartE2EDuration="3.36072911s" podCreationTimestamp="2025-09-11 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:00.354338726 +0000 UTC m=+1.122569090" watchObservedRunningTime="2025-09-11 00:31:00.36072911 +0000 UTC m=+1.128959463" Sep 11 00:31:00.369161 kubelet[2700]: I0911 00:31:00.369090 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=3.369053581 podStartE2EDuration="3.369053581s" podCreationTimestamp="2025-09-11 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:00.368768364 +0000 UTC m=+1.136998707" watchObservedRunningTime="2025-09-11 00:31:00.369053581 +0000 UTC m=+1.137283934" Sep 11 00:31:00.369364 kubelet[2700]: I0911 00:31:00.369190 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.369185413 podStartE2EDuration="3.369185413s" podCreationTimestamp="2025-09-11 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:00.361015088 +0000 UTC m=+1.129245451" watchObservedRunningTime="2025-09-11 00:31:00.369185413 +0000 UTC m=+1.137415766" Sep 11 00:31:04.637251 kubelet[2700]: I0911 00:31:04.637216 2700 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:31:04.637693 kubelet[2700]: I0911 00:31:04.637656 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:31:04.637720 containerd[1543]: time="2025-09-11T00:31:04.637517003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:31:05.011546 systemd[1]: Created slice kubepods-besteffort-pod1aa2b1f6_31c5_4eee_b5d2_1436bc7671f3.slice - libcontainer container kubepods-besteffort-pod1aa2b1f6_31c5_4eee_b5d2_1436bc7671f3.slice. 
Sep 11 00:31:05.046539 kubelet[2700]: I0911 00:31:05.046474 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-kube-proxy\") pod \"kube-proxy-8trlg\" (UID: \"1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3\") " pod="kube-system/kube-proxy-8trlg"
Sep 11 00:31:05.046539 kubelet[2700]: I0911 00:31:05.046528 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-xtables-lock\") pod \"kube-proxy-8trlg\" (UID: \"1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3\") " pod="kube-system/kube-proxy-8trlg"
Sep 11 00:31:05.046539 kubelet[2700]: I0911 00:31:05.046545 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-kube-api-access-wgclc\") pod \"kube-proxy-8trlg\" (UID: \"1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3\") " pod="kube-system/kube-proxy-8trlg"
Sep 11 00:31:05.046778 kubelet[2700]: I0911 00:31:05.046569 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-lib-modules\") pod \"kube-proxy-8trlg\" (UID: \"1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3\") " pod="kube-system/kube-proxy-8trlg"
Sep 11 00:31:05.317559 kubelet[2700]: E0911 00:31:05.317371 2700 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 11 00:31:05.317676 kubelet[2700]: E0911 00:31:05.317456 2700 projected.go:194] Error preparing data for projected volume kube-api-access-wgclc for pod kube-system/kube-proxy-8trlg: configmap "kube-root-ca.crt" not found
Sep 11 00:31:05.317702 kubelet[2700]: E0911 00:31:05.317689 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-kube-api-access-wgclc podName:1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3 nodeName:}" failed. No retries permitted until 2025-09-11 00:31:05.817665459 +0000 UTC m=+6.585895812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wgclc" (UniqueName: "kubernetes.io/projected/1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3-kube-api-access-wgclc") pod "kube-proxy-8trlg" (UID: "1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3") : configmap "kube-root-ca.crt" not found
Sep 11 00:31:05.930913 containerd[1543]: time="2025-09-11T00:31:05.930841461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8trlg,Uid:1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3,Namespace:kube-system,Attempt:0,}"
Sep 11 00:31:06.930056 systemd[1]: Created slice kubepods-besteffort-pod356e0fd0_f9ee_45d9_b671_39edcc329871.slice - libcontainer container kubepods-besteffort-pod356e0fd0_f9ee_45d9_b671_39edcc329871.slice.
Sep 11 00:31:06.947999 containerd[1543]: time="2025-09-11T00:31:06.947935486Z" level=info msg="connecting to shim ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76" address="unix:///run/containerd/s/a5b30b108052eb3f7373013b06e8e7e2656ec8c0c9d760f304b4347d77d75450" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:31:06.961000 kubelet[2700]: I0911 00:31:06.960950 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/356e0fd0-f9ee-45d9-b671-39edcc329871-var-lib-calico\") pod \"tigera-operator-755d956888-8r67z\" (UID: \"356e0fd0-f9ee-45d9-b671-39edcc329871\") " pod="tigera-operator/tigera-operator-755d956888-8r67z"
Sep 11 00:31:06.961000 kubelet[2700]: I0911 00:31:06.960996 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd89h\" (UniqueName: \"kubernetes.io/projected/356e0fd0-f9ee-45d9-b671-39edcc329871-kube-api-access-cd89h\") pod \"tigera-operator-755d956888-8r67z\" (UID: \"356e0fd0-f9ee-45d9-b671-39edcc329871\") " pod="tigera-operator/tigera-operator-755d956888-8r67z"
Sep 11 00:31:06.974340 systemd[1]: Started cri-containerd-ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76.scope - libcontainer container ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76.
Sep 11 00:31:07.021239 containerd[1543]: time="2025-09-11T00:31:07.021197980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8trlg,Uid:1aa2b1f6-31c5-4eee-b5d2-1436bc7671f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76\""
Sep 11 00:31:07.068623 containerd[1543]: time="2025-09-11T00:31:07.068572164Z" level=info msg="CreateContainer within sandbox \"ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:31:07.132286 containerd[1543]: time="2025-09-11T00:31:07.132206080Z" level=info msg="Container 498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:31:07.136294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770540661.mount: Deactivated successfully.
Sep 11 00:31:07.146681 containerd[1543]: time="2025-09-11T00:31:07.146612088Z" level=info msg="CreateContainer within sandbox \"ec37e300049130a55fb711983a5b842f48e24220057018a181d8f8541c9d2d76\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1\""
Sep 11 00:31:07.147300 containerd[1543]: time="2025-09-11T00:31:07.147255211Z" level=info msg="StartContainer for \"498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1\""
Sep 11 00:31:07.148761 containerd[1543]: time="2025-09-11T00:31:07.148721648Z" level=info msg="connecting to shim 498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1" address="unix:///run/containerd/s/a5b30b108052eb3f7373013b06e8e7e2656ec8c0c9d760f304b4347d77d75450" protocol=ttrpc version=3
Sep 11 00:31:07.176317 systemd[1]: Started cri-containerd-498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1.scope - libcontainer container 498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1.
Sep 11 00:31:07.229754 containerd[1543]: time="2025-09-11T00:31:07.229695761Z" level=info msg="StartContainer for \"498b9154a5f2e884fa42c8ac02f857aff236741ed04a4d1f710926bbd3cc06f1\" returns successfully"
Sep 11 00:31:07.241822 containerd[1543]: time="2025-09-11T00:31:07.241788310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-8r67z,Uid:356e0fd0-f9ee-45d9-b671-39edcc329871,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:31:07.375660 kubelet[2700]: I0911 00:31:07.375198 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8trlg" podStartSLOduration=3.3751783 podStartE2EDuration="3.3751783s" podCreationTimestamp="2025-09-11 00:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:07.373339835 +0000 UTC m=+8.141570198" watchObservedRunningTime="2025-09-11 00:31:07.3751783 +0000 UTC m=+8.143408653"
Sep 11 00:31:07.393476 containerd[1543]: time="2025-09-11T00:31:07.393427014Z" level=info msg="connecting to shim 0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0" address="unix:///run/containerd/s/acd61a5f749d4fa800925acbe11723cb6585d1bf738ddaaa421721405a80e211" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:31:07.421234 systemd[1]: Started cri-containerd-0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0.scope - libcontainer container 0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0.
Sep 11 00:31:07.484955 containerd[1543]: time="2025-09-11T00:31:07.484787631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-8r67z,Uid:356e0fd0-f9ee-45d9-b671-39edcc329871,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0\""
Sep 11 00:31:07.486901 containerd[1543]: time="2025-09-11T00:31:07.486696761Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 00:31:09.255884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248975289.mount: Deactivated successfully.
Sep 11 00:31:10.224214 containerd[1543]: time="2025-09-11T00:31:10.224155472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:10.224889 containerd[1543]: time="2025-09-11T00:31:10.224857253Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 11 00:31:10.225975 containerd[1543]: time="2025-09-11T00:31:10.225926891Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:10.227821 containerd[1543]: time="2025-09-11T00:31:10.227783703Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:10.228318 containerd[1543]: time="2025-09-11T00:31:10.228286316Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.741558916s"
Sep 11 00:31:10.228318 containerd[1543]: time="2025-09-11T00:31:10.228314840Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 11 00:31:10.232541 containerd[1543]: time="2025-09-11T00:31:10.232514976Z" level=info msg="CreateContainer within sandbox \"0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 00:31:10.241643 containerd[1543]: time="2025-09-11T00:31:10.241602951Z" level=info msg="Container 48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:31:10.247731 containerd[1543]: time="2025-09-11T00:31:10.247684977Z" level=info msg="CreateContainer within sandbox \"0754eb6538108ad22e84ba14009809d0b30283993f79c2426551ea59477cfdb0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090\""
Sep 11 00:31:10.248179 containerd[1543]: time="2025-09-11T00:31:10.248147955Z" level=info msg="StartContainer for \"48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090\""
Sep 11 00:31:10.248924 containerd[1543]: time="2025-09-11T00:31:10.248896845Z" level=info msg="connecting to shim 48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090" address="unix:///run/containerd/s/acd61a5f749d4fa800925acbe11723cb6585d1bf738ddaaa421721405a80e211" protocol=ttrpc version=3
Sep 11 00:31:10.299156 systemd[1]: Started cri-containerd-48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090.scope - libcontainer container 48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090.
Sep 11 00:31:10.329012 containerd[1543]: time="2025-09-11T00:31:10.328958013Z" level=info msg="StartContainer for \"48ffcfe2a998c51ee512c99e3ec86fe818b796f89908c7d47c1c6b740c61d090\" returns successfully"
Sep 11 00:31:10.966835 update_engine[1528]: I20250911 00:31:10.966745  1528 update_attempter.cc:509] Updating boot flags...
Sep 11 00:31:12.918342 kubelet[2700]: I0911 00:31:12.918092 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-8r67z" podStartSLOduration=5.175406032 podStartE2EDuration="7.918075587s" podCreationTimestamp="2025-09-11 00:31:05 +0000 UTC" firstStartedPulling="2025-09-11 00:31:07.486281382 +0000 UTC m=+8.254511735" lastFinishedPulling="2025-09-11 00:31:10.228950937 +0000 UTC m=+10.997181290" observedRunningTime="2025-09-11 00:31:10.361802343 +0000 UTC m=+11.130032696" watchObservedRunningTime="2025-09-11 00:31:12.918075587 +0000 UTC m=+13.686305930"
Sep 11 00:31:16.088098 sudo[1761]: pam_unix(sudo:session): session closed for user root
Sep 11 00:31:16.089752 sshd[1760]: Connection closed by 10.0.0.1 port 38926
Sep 11 00:31:16.090596 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:16.095552 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:38926.service: Deactivated successfully.
Sep 11 00:31:16.098502 systemd[1]: session-7.scope: Deactivated successfully.
Sep 11 00:31:16.098724 systemd[1]: session-7.scope: Consumed 5.509s CPU time, 222.5M memory peak.
Sep 11 00:31:16.101361 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit.
Sep 11 00:31:16.103849 systemd-logind[1526]: Removed session 7.
Sep 11 00:31:19.011143 systemd[1]: Created slice kubepods-besteffort-podb153a9d6_2717_4bf4_867d_46c4f6aeeebf.slice - libcontainer container kubepods-besteffort-podb153a9d6_2717_4bf4_867d_46c4f6aeeebf.slice.
Sep 11 00:31:19.044617 kubelet[2700]: I0911 00:31:19.044253 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkzb\" (UniqueName: \"kubernetes.io/projected/b153a9d6-2717-4bf4-867d-46c4f6aeeebf-kube-api-access-vrkzb\") pod \"calico-typha-5575c77dbc-jvqsz\" (UID: \"b153a9d6-2717-4bf4-867d-46c4f6aeeebf\") " pod="calico-system/calico-typha-5575c77dbc-jvqsz"
Sep 11 00:31:19.044617 kubelet[2700]: I0911 00:31:19.044317 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b153a9d6-2717-4bf4-867d-46c4f6aeeebf-tigera-ca-bundle\") pod \"calico-typha-5575c77dbc-jvqsz\" (UID: \"b153a9d6-2717-4bf4-867d-46c4f6aeeebf\") " pod="calico-system/calico-typha-5575c77dbc-jvqsz"
Sep 11 00:31:19.044617 kubelet[2700]: I0911 00:31:19.044338 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b153a9d6-2717-4bf4-867d-46c4f6aeeebf-typha-certs\") pod \"calico-typha-5575c77dbc-jvqsz\" (UID: \"b153a9d6-2717-4bf4-867d-46c4f6aeeebf\") " pod="calico-system/calico-typha-5575c77dbc-jvqsz"
Sep 11 00:31:19.112191 systemd[1]: Created slice kubepods-besteffort-pod77372f63_6a88_4857_aab8_5ce68c8c7d6d.slice - libcontainer container kubepods-besteffort-pod77372f63_6a88_4857_aab8_5ce68c8c7d6d.slice.
Sep 11 00:31:19.144820 kubelet[2700]: I0911 00:31:19.144775 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77372f63-6a88-4857-aab8-5ce68c8c7d6d-node-certs\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.144820 kubelet[2700]: I0911 00:31:19.144814 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77372f63-6a88-4857-aab8-5ce68c8c7d6d-tigera-ca-bundle\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.144820 kubelet[2700]: I0911 00:31:19.144828 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-var-run-calico\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.144820 kubelet[2700]: I0911 00:31:19.144844 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-cni-log-dir\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145126 kubelet[2700]: I0911 00:31:19.144857 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-cni-net-dir\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145126 kubelet[2700]: I0911 00:31:19.144886 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-lib-modules\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145126 kubelet[2700]: I0911 00:31:19.144900 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-cni-bin-dir\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145126 kubelet[2700]: I0911 00:31:19.144914 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-var-lib-calico\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145126 kubelet[2700]: I0911 00:31:19.144931 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvk8\" (UniqueName: \"kubernetes.io/projected/77372f63-6a88-4857-aab8-5ce68c8c7d6d-kube-api-access-hsvk8\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145278 kubelet[2700]: I0911 00:31:19.144946 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-policysync\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145278 kubelet[2700]: I0911 00:31:19.144960 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-flexvol-driver-host\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.145278 kubelet[2700]: I0911 00:31:19.144988 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77372f63-6a88-4857-aab8-5ce68c8c7d6d-xtables-lock\") pod \"calico-node-fx68c\" (UID: \"77372f63-6a88-4857-aab8-5ce68c8c7d6d\") " pod="calico-system/calico-node-fx68c"
Sep 11 00:31:19.246659 kubelet[2700]: E0911 00:31:19.246591 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.246659 kubelet[2700]: W0911 00:31:19.246617 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.247888 kubelet[2700]: E0911 00:31:19.247858 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.248247 kubelet[2700]: E0911 00:31:19.248228 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.248247 kubelet[2700]: W0911 00:31:19.248245 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.248348 kubelet[2700]: E0911 00:31:19.248263 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.251140 kubelet[2700]: E0911 00:31:19.251086 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.251140 kubelet[2700]: W0911 00:31:19.251101 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.251140 kubelet[2700]: E0911 00:31:19.251113 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.251725 kubelet[2700]: E0911 00:31:19.251707 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.251725 kubelet[2700]: W0911 00:31:19.251721 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.251808 kubelet[2700]: E0911 00:31:19.251732 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.252279 kubelet[2700]: E0911 00:31:19.252216 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.252279 kubelet[2700]: W0911 00:31:19.252234 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.252279 kubelet[2700]: E0911 00:31:19.252244 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.252519 kubelet[2700]: E0911 00:31:19.252422 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.252519 kubelet[2700]: W0911 00:31:19.252437 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.252519 kubelet[2700]: E0911 00:31:19.252446 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.254513 kubelet[2700]: E0911 00:31:19.254202 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.254513 kubelet[2700]: W0911 00:31:19.254217 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.254513 kubelet[2700]: E0911 00:31:19.254227 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.256536 kubelet[2700]: E0911 00:31:19.256480 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.256536 kubelet[2700]: W0911 00:31:19.256529 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.256605 kubelet[2700]: E0911 00:31:19.256539 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.257310 kubelet[2700]: E0911 00:31:19.257285 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.257361 kubelet[2700]: W0911 00:31:19.257303 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.260087 kubelet[2700]: E0911 00:31:19.260058 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.261006 kubelet[2700]: E0911 00:31:19.260980 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.261006 kubelet[2700]: W0911 00:31:19.260997 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.261006 kubelet[2700]: E0911 00:31:19.261007 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.262522 kubelet[2700]: E0911 00:31:19.262442 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.262522 kubelet[2700]: W0911 00:31:19.262459 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.262522 kubelet[2700]: E0911 00:31:19.262468 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262615 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264088 kubelet[2700]: W0911 00:31:19.262622 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262629 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262777 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264088 kubelet[2700]: W0911 00:31:19.262785 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262793 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262925 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264088 kubelet[2700]: W0911 00:31:19.262931 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.262938 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264088 kubelet[2700]: E0911 00:31:19.263095 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264438 kubelet[2700]: W0911 00:31:19.263102 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263109 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263268 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264438 kubelet[2700]: W0911 00:31:19.263276 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263283 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263440 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264438 kubelet[2700]: W0911 00:31:19.263446 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263453 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.264438 kubelet[2700]: E0911 00:31:19.263699 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.264438 kubelet[2700]: W0911 00:31:19.263727 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.264749 kubelet[2700]: E0911 00:31:19.263756 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.265773 kubelet[2700]: E0911 00:31:19.265709 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.265773 kubelet[2700]: W0911 00:31:19.265731 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.265773 kubelet[2700]: E0911 00:31:19.265744 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.323500 kubelet[2700]: E0911 00:31:19.323441 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f"
Sep 11 00:31:19.338073 containerd[1543]: time="2025-09-11T00:31:19.337920749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5575c77dbc-jvqsz,Uid:b153a9d6-2717-4bf4-867d-46c4f6aeeebf,Namespace:calico-system,Attempt:0,}"
Sep 11 00:31:19.343553 kubelet[2700]: E0911 00:31:19.343519 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.343553 kubelet[2700]: W0911 00:31:19.343539 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.343553 kubelet[2700]: E0911 00:31:19.343561 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.344346 kubelet[2700]: E0911 00:31:19.343774 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.344346 kubelet[2700]: W0911 00:31:19.343782 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.344346 kubelet[2700]: E0911 00:31:19.343798 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.344346 kubelet[2700]: E0911 00:31:19.343996 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.344346 kubelet[2700]: W0911 00:31:19.344004 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.344346 kubelet[2700]: E0911 00:31:19.344012 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:31:19.344346 kubelet[2700]: E0911 00:31:19.344310 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:31:19.344346 kubelet[2700]: W0911 00:31:19.344332 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:31:19.344538 kubelet[2700]: E0911 00:31:19.344358 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 11 00:31:19.344610 kubelet[2700]: E0911 00:31:19.344592 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.344610 kubelet[2700]: W0911 00:31:19.344608 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.344701 kubelet[2700]: E0911 00:31:19.344618 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.344851 kubelet[2700]: E0911 00:31:19.344829 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.344851 kubelet[2700]: W0911 00:31:19.344843 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.344918 kubelet[2700]: E0911 00:31:19.344854 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.345109 kubelet[2700]: E0911 00:31:19.345090 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.345109 kubelet[2700]: W0911 00:31:19.345103 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.345109 kubelet[2700]: E0911 00:31:19.345113 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.345319 kubelet[2700]: E0911 00:31:19.345301 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.345319 kubelet[2700]: W0911 00:31:19.345313 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.345383 kubelet[2700]: E0911 00:31:19.345322 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.345519 kubelet[2700]: E0911 00:31:19.345502 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.345519 kubelet[2700]: W0911 00:31:19.345513 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.345586 kubelet[2700]: E0911 00:31:19.345522 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.345767 kubelet[2700]: E0911 00:31:19.345748 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.345767 kubelet[2700]: W0911 00:31:19.345761 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.345854 kubelet[2700]: E0911 00:31:19.345770 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.346007 kubelet[2700]: E0911 00:31:19.345970 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.346007 kubelet[2700]: W0911 00:31:19.345984 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.346007 kubelet[2700]: E0911 00:31:19.345994 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.346265 kubelet[2700]: E0911 00:31:19.346242 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.346265 kubelet[2700]: W0911 00:31:19.346252 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.346265 kubelet[2700]: E0911 00:31:19.346263 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.346486 kubelet[2700]: E0911 00:31:19.346469 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.346486 kubelet[2700]: W0911 00:31:19.346482 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.346541 kubelet[2700]: E0911 00:31:19.346494 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.346748 kubelet[2700]: E0911 00:31:19.346720 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.346748 kubelet[2700]: W0911 00:31:19.346736 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.346748 kubelet[2700]: E0911 00:31:19.346747 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.346931 kubelet[2700]: E0911 00:31:19.346915 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.346931 kubelet[2700]: W0911 00:31:19.346928 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.346986 kubelet[2700]: E0911 00:31:19.346938 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.347161 kubelet[2700]: E0911 00:31:19.347143 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.347161 kubelet[2700]: W0911 00:31:19.347156 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.347211 kubelet[2700]: E0911 00:31:19.347166 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.347378 kubelet[2700]: E0911 00:31:19.347362 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.347378 kubelet[2700]: W0911 00:31:19.347374 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.347431 kubelet[2700]: E0911 00:31:19.347384 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.347563 kubelet[2700]: E0911 00:31:19.347546 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.347563 kubelet[2700]: W0911 00:31:19.347559 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.347608 kubelet[2700]: E0911 00:31:19.347570 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.347759 kubelet[2700]: E0911 00:31:19.347742 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.347759 kubelet[2700]: W0911 00:31:19.347755 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.347809 kubelet[2700]: E0911 00:31:19.347764 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.347945 kubelet[2700]: E0911 00:31:19.347929 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.347945 kubelet[2700]: W0911 00:31:19.347941 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.347998 kubelet[2700]: E0911 00:31:19.347951 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.348251 kubelet[2700]: E0911 00:31:19.348232 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.348251 kubelet[2700]: W0911 00:31:19.348245 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.348326 kubelet[2700]: E0911 00:31:19.348254 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.348326 kubelet[2700]: I0911 00:31:19.348282 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6874fb2-8b46-4762-a169-00ccac62f67f-registration-dir\") pod \"csi-node-driver-mn9tv\" (UID: \"d6874fb2-8b46-4762-a169-00ccac62f67f\") " pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:19.348495 kubelet[2700]: E0911 00:31:19.348474 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.348495 kubelet[2700]: W0911 00:31:19.348488 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.348575 kubelet[2700]: E0911 00:31:19.348498 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.348575 kubelet[2700]: I0911 00:31:19.348522 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d6874fb2-8b46-4762-a169-00ccac62f67f-varrun\") pod \"csi-node-driver-mn9tv\" (UID: \"d6874fb2-8b46-4762-a169-00ccac62f67f\") " pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:19.348923 kubelet[2700]: E0911 00:31:19.348879 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.348923 kubelet[2700]: W0911 00:31:19.348914 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.348975 kubelet[2700]: E0911 00:31:19.348940 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.349202 kubelet[2700]: E0911 00:31:19.349185 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.349202 kubelet[2700]: W0911 00:31:19.349199 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.349262 kubelet[2700]: E0911 00:31:19.349210 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.349486 kubelet[2700]: E0911 00:31:19.349458 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.349486 kubelet[2700]: W0911 00:31:19.349473 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.349486 kubelet[2700]: E0911 00:31:19.349484 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.349765 kubelet[2700]: I0911 00:31:19.349521 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6874fb2-8b46-4762-a169-00ccac62f67f-kubelet-dir\") pod \"csi-node-driver-mn9tv\" (UID: \"d6874fb2-8b46-4762-a169-00ccac62f67f\") " pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:19.349812 kubelet[2700]: E0911 00:31:19.349788 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.349812 kubelet[2700]: W0911 00:31:19.349801 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.349882 kubelet[2700]: E0911 00:31:19.349816 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.350098 kubelet[2700]: E0911 00:31:19.350073 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.350098 kubelet[2700]: W0911 00:31:19.350088 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.350098 kubelet[2700]: E0911 00:31:19.350099 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.350343 kubelet[2700]: E0911 00:31:19.350322 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.350343 kubelet[2700]: W0911 00:31:19.350335 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.350343 kubelet[2700]: E0911 00:31:19.350344 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.350449 kubelet[2700]: I0911 00:31:19.350369 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6874fb2-8b46-4762-a169-00ccac62f67f-socket-dir\") pod \"csi-node-driver-mn9tv\" (UID: \"d6874fb2-8b46-4762-a169-00ccac62f67f\") " pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:19.351000 kubelet[2700]: E0911 00:31:19.350960 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.351000 kubelet[2700]: W0911 00:31:19.350991 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.351813 kubelet[2700]: E0911 00:31:19.351004 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.351813 kubelet[2700]: I0911 00:31:19.351088 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wjc\" (UniqueName: \"kubernetes.io/projected/d6874fb2-8b46-4762-a169-00ccac62f67f-kube-api-access-86wjc\") pod \"csi-node-driver-mn9tv\" (UID: \"d6874fb2-8b46-4762-a169-00ccac62f67f\") " pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:19.351813 kubelet[2700]: E0911 00:31:19.351422 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.351813 kubelet[2700]: W0911 00:31:19.351434 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.351813 kubelet[2700]: E0911 00:31:19.351447 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.351813 kubelet[2700]: E0911 00:31:19.351621 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.351813 kubelet[2700]: W0911 00:31:19.351629 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.351813 kubelet[2700]: E0911 00:31:19.351638 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.351985 kubelet[2700]: E0911 00:31:19.351882 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.351985 kubelet[2700]: W0911 00:31:19.351890 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.351985 kubelet[2700]: E0911 00:31:19.351899 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.352104 kubelet[2700]: E0911 00:31:19.352065 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.352104 kubelet[2700]: W0911 00:31:19.352072 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.352104 kubelet[2700]: E0911 00:31:19.352090 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.352280 kubelet[2700]: E0911 00:31:19.352257 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.352280 kubelet[2700]: W0911 00:31:19.352268 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.352280 kubelet[2700]: E0911 00:31:19.352276 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.352472 kubelet[2700]: E0911 00:31:19.352461 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.352472 kubelet[2700]: W0911 00:31:19.352470 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.352518 kubelet[2700]: E0911 00:31:19.352479 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.363944 containerd[1543]: time="2025-09-11T00:31:19.363876668Z" level=info msg="connecting to shim 0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad" address="unix:///run/containerd/s/eab6d4cd909097bec95ea80c77f9500a474d549e608817bad0a61b746da7bba1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:19.393290 systemd[1]: Started cri-containerd-0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad.scope - libcontainer container 0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad. 
Sep 11 00:31:19.416216 containerd[1543]: time="2025-09-11T00:31:19.416163636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fx68c,Uid:77372f63-6a88-4857-aab8-5ce68c8c7d6d,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:19.441010 containerd[1543]: time="2025-09-11T00:31:19.440961881Z" level=info msg="connecting to shim 4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e" address="unix:///run/containerd/s/ede337edd6f8d1ccbc747f65bf85a62222ea212ccc2c0f2f45831de8e3374d78" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:19.451452 containerd[1543]: time="2025-09-11T00:31:19.450821297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5575c77dbc-jvqsz,Uid:b153a9d6-2717-4bf4-867d-46c4f6aeeebf,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad\"" Sep 11 00:31:19.452336 kubelet[2700]: E0911 00:31:19.452299 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.452336 kubelet[2700]: W0911 00:31:19.452322 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.452437 kubelet[2700]: E0911 00:31:19.452342 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.452807 kubelet[2700]: E0911 00:31:19.452743 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.452807 kubelet[2700]: W0911 00:31:19.452803 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.452864 kubelet[2700]: E0911 00:31:19.452815 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.454191 kubelet[2700]: E0911 00:31:19.454168 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.454191 kubelet[2700]: W0911 00:31:19.454185 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.454191 kubelet[2700]: E0911 00:31:19.454196 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.454738 kubelet[2700]: E0911 00:31:19.454722 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.454738 kubelet[2700]: W0911 00:31:19.454733 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.454797 kubelet[2700]: E0911 00:31:19.454743 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.455200 kubelet[2700]: E0911 00:31:19.455183 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.455200 kubelet[2700]: W0911 00:31:19.455195 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.455258 kubelet[2700]: E0911 00:31:19.455206 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.455786 kubelet[2700]: E0911 00:31:19.455752 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.455786 kubelet[2700]: W0911 00:31:19.455771 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.455786 kubelet[2700]: E0911 00:31:19.455782 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.456433 kubelet[2700]: E0911 00:31:19.456399 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.456433 kubelet[2700]: W0911 00:31:19.456415 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.456433 kubelet[2700]: E0911 00:31:19.456427 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.457264 kubelet[2700]: E0911 00:31:19.457240 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.457304 kubelet[2700]: W0911 00:31:19.457296 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.457327 kubelet[2700]: E0911 00:31:19.457310 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.457892 kubelet[2700]: E0911 00:31:19.457874 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.457892 kubelet[2700]: W0911 00:31:19.457886 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.457942 kubelet[2700]: E0911 00:31:19.457895 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.458291 kubelet[2700]: E0911 00:31:19.458263 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.458400 kubelet[2700]: W0911 00:31:19.458279 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.458424 kubelet[2700]: E0911 00:31:19.458402 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.458842 kubelet[2700]: E0911 00:31:19.458812 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.458842 kubelet[2700]: W0911 00:31:19.458826 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.458901 kubelet[2700]: E0911 00:31:19.458835 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.461086 containerd[1543]: time="2025-09-11T00:31:19.460745825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:31:19.461160 kubelet[2700]: E0911 00:31:19.461012 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.461160 kubelet[2700]: W0911 00:31:19.461021 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.461160 kubelet[2700]: E0911 00:31:19.461068 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.461382 kubelet[2700]: E0911 00:31:19.461356 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.461382 kubelet[2700]: W0911 00:31:19.461371 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.461382 kubelet[2700]: E0911 00:31:19.461379 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.461767 kubelet[2700]: E0911 00:31:19.461746 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.461767 kubelet[2700]: W0911 00:31:19.461761 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.461767 kubelet[2700]: E0911 00:31:19.461769 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.462092 kubelet[2700]: E0911 00:31:19.462072 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.462092 kubelet[2700]: W0911 00:31:19.462087 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.462159 kubelet[2700]: E0911 00:31:19.462099 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.462461 kubelet[2700]: E0911 00:31:19.462442 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.462461 kubelet[2700]: W0911 00:31:19.462455 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.462515 kubelet[2700]: E0911 00:31:19.462466 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.462805 kubelet[2700]: E0911 00:31:19.462786 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.462805 kubelet[2700]: W0911 00:31:19.462800 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.462864 kubelet[2700]: E0911 00:31:19.462811 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.463169 kubelet[2700]: E0911 00:31:19.463150 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.463169 kubelet[2700]: W0911 00:31:19.463164 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.463218 kubelet[2700]: E0911 00:31:19.463175 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.463517 kubelet[2700]: E0911 00:31:19.463497 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.463517 kubelet[2700]: W0911 00:31:19.463511 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.463581 kubelet[2700]: E0911 00:31:19.463523 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.463924 kubelet[2700]: E0911 00:31:19.463856 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.463924 kubelet[2700]: W0911 00:31:19.463871 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.463924 kubelet[2700]: E0911 00:31:19.463883 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.464454 kubelet[2700]: E0911 00:31:19.464434 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.464454 kubelet[2700]: W0911 00:31:19.464450 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.464538 kubelet[2700]: E0911 00:31:19.464462 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.465058 kubelet[2700]: E0911 00:31:19.464868 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.465058 kubelet[2700]: W0911 00:31:19.464897 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.465058 kubelet[2700]: E0911 00:31:19.464933 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.465557 kubelet[2700]: E0911 00:31:19.465480 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.465811 kubelet[2700]: W0911 00:31:19.465740 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.465991 kubelet[2700]: E0911 00:31:19.465954 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:19.467083 kubelet[2700]: E0911 00:31:19.466902 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.467083 kubelet[2700]: W0911 00:31:19.466917 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.467083 kubelet[2700]: E0911 00:31:19.466929 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.468791 kubelet[2700]: E0911 00:31:19.468764 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.468791 kubelet[2700]: W0911 00:31:19.468781 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.468791 kubelet[2700]: E0911 00:31:19.468795 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.486110 systemd[1]: Started cri-containerd-4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e.scope - libcontainer container 4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e. 
Sep 11 00:31:19.489574 kubelet[2700]: E0911 00:31:19.489382 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:19.489574 kubelet[2700]: W0911 00:31:19.489409 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:19.489574 kubelet[2700]: E0911 00:31:19.489434 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:19.524175 containerd[1543]: time="2025-09-11T00:31:19.523931158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fx68c,Uid:77372f63-6a88-4857-aab8-5ce68c8c7d6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\"" Sep 11 00:31:20.872398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222248930.mount: Deactivated successfully. 
Sep 11 00:31:21.324503 kubelet[2700]: E0911 00:31:21.324458 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:22.445091 containerd[1543]: time="2025-09-11T00:31:22.444865032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:22.447386 containerd[1543]: time="2025-09-11T00:31:22.447288580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:31:22.448863 containerd[1543]: time="2025-09-11T00:31:22.448814958Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:22.451605 containerd[1543]: time="2025-09-11T00:31:22.451574279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:22.452437 containerd[1543]: time="2025-09-11T00:31:22.452392551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.991608074s" Sep 11 00:31:22.452492 containerd[1543]: time="2025-09-11T00:31:22.452438368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:31:22.453684 containerd[1543]: time="2025-09-11T00:31:22.453648328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:31:22.466286 containerd[1543]: time="2025-09-11T00:31:22.466238780Z" level=info msg="CreateContainer within sandbox \"0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:31:22.483635 containerd[1543]: time="2025-09-11T00:31:22.483553657Z" level=info msg="Container 66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:22.573488 containerd[1543]: time="2025-09-11T00:31:22.573419788Z" level=info msg="CreateContainer within sandbox \"0d22ffceec905135908d3aa549430b650d1af01b16bf7b93f90419aec3c7d3ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08\"" Sep 11 00:31:22.574259 containerd[1543]: time="2025-09-11T00:31:22.574001194Z" level=info msg="StartContainer for \"66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08\"" Sep 11 00:31:22.575350 containerd[1543]: time="2025-09-11T00:31:22.575316673Z" level=info msg="connecting to shim 66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08" address="unix:///run/containerd/s/eab6d4cd909097bec95ea80c77f9500a474d549e608817bad0a61b746da7bba1" protocol=ttrpc version=3 Sep 11 00:31:22.598257 systemd[1]: Started cri-containerd-66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08.scope - libcontainer container 66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08. 
Sep 11 00:31:22.660144 containerd[1543]: time="2025-09-11T00:31:22.660015404Z" level=info msg="StartContainer for \"66e64fa7f490e57cfd5fc6e8eb0087948613d8ebd55bf5d91479d209ec9bcf08\" returns successfully" Sep 11 00:31:23.324485 kubelet[2700]: E0911 00:31:23.324409 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:23.473390 kubelet[2700]: E0911 00:31:23.473334 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.473390 kubelet[2700]: W0911 00:31:23.473358 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.473390 kubelet[2700]: E0911 00:31:23.473382 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.473642 kubelet[2700]: E0911 00:31:23.473591 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.473642 kubelet[2700]: W0911 00:31:23.473612 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.473642 kubelet[2700]: E0911 00:31:23.473623 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.473857 kubelet[2700]: E0911 00:31:23.473829 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.473857 kubelet[2700]: W0911 00:31:23.473842 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.473857 kubelet[2700]: E0911 00:31:23.473852 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.474107 kubelet[2700]: E0911 00:31:23.474076 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.474107 kubelet[2700]: W0911 00:31:23.474088 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.474107 kubelet[2700]: E0911 00:31:23.474097 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.474288 kubelet[2700]: E0911 00:31:23.474271 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.474288 kubelet[2700]: W0911 00:31:23.474283 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.474365 kubelet[2700]: E0911 00:31:23.474293 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.474515 kubelet[2700]: E0911 00:31:23.474497 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.474515 kubelet[2700]: W0911 00:31:23.474508 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.474591 kubelet[2700]: E0911 00:31:23.474518 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.474748 kubelet[2700]: E0911 00:31:23.474718 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.474748 kubelet[2700]: W0911 00:31:23.474729 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.474748 kubelet[2700]: E0911 00:31:23.474739 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.474951 kubelet[2700]: E0911 00:31:23.474929 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.474951 kubelet[2700]: W0911 00:31:23.474940 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.474951 kubelet[2700]: E0911 00:31:23.474950 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.475195 kubelet[2700]: E0911 00:31:23.475165 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.475195 kubelet[2700]: W0911 00:31:23.475177 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.475195 kubelet[2700]: E0911 00:31:23.475189 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.475405 kubelet[2700]: E0911 00:31:23.475385 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.475405 kubelet[2700]: W0911 00:31:23.475396 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.475405 kubelet[2700]: E0911 00:31:23.475405 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.475625 kubelet[2700]: E0911 00:31:23.475593 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.475625 kubelet[2700]: W0911 00:31:23.475612 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.475625 kubelet[2700]: E0911 00:31:23.475622 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.475833 kubelet[2700]: E0911 00:31:23.475810 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.475833 kubelet[2700]: W0911 00:31:23.475820 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.475833 kubelet[2700]: E0911 00:31:23.475830 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.476039 kubelet[2700]: E0911 00:31:23.476015 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.476088 kubelet[2700]: W0911 00:31:23.476025 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.476088 kubelet[2700]: E0911 00:31:23.476052 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.476255 kubelet[2700]: E0911 00:31:23.476232 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.476255 kubelet[2700]: W0911 00:31:23.476244 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.476255 kubelet[2700]: E0911 00:31:23.476253 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.476447 kubelet[2700]: E0911 00:31:23.476427 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.476447 kubelet[2700]: W0911 00:31:23.476437 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.476447 kubelet[2700]: E0911 00:31:23.476446 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.488754 kubelet[2700]: E0911 00:31:23.488732 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.488754 kubelet[2700]: W0911 00:31:23.488745 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.488754 kubelet[2700]: E0911 00:31:23.488754 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.488958 kubelet[2700]: E0911 00:31:23.488937 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.488958 kubelet[2700]: W0911 00:31:23.488947 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.488958 kubelet[2700]: E0911 00:31:23.488954 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.489177 kubelet[2700]: E0911 00:31:23.489153 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.489177 kubelet[2700]: W0911 00:31:23.489167 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.489177 kubelet[2700]: E0911 00:31:23.489176 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.489356 kubelet[2700]: E0911 00:31:23.489333 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.489356 kubelet[2700]: W0911 00:31:23.489344 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.489356 kubelet[2700]: E0911 00:31:23.489353 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.489540 kubelet[2700]: E0911 00:31:23.489526 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.489540 kubelet[2700]: W0911 00:31:23.489535 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.489617 kubelet[2700]: E0911 00:31:23.489543 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.489731 kubelet[2700]: E0911 00:31:23.489719 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.489731 kubelet[2700]: W0911 00:31:23.489727 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.489802 kubelet[2700]: E0911 00:31:23.489734 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.490145 kubelet[2700]: E0911 00:31:23.490128 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.490145 kubelet[2700]: W0911 00:31:23.490142 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.490230 kubelet[2700]: E0911 00:31:23.490153 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.490390 kubelet[2700]: E0911 00:31:23.490365 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.490390 kubelet[2700]: W0911 00:31:23.490378 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.490390 kubelet[2700]: E0911 00:31:23.490387 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.490590 kubelet[2700]: E0911 00:31:23.490572 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.490590 kubelet[2700]: W0911 00:31:23.490585 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.490668 kubelet[2700]: E0911 00:31:23.490597 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.490796 kubelet[2700]: E0911 00:31:23.490783 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.490796 kubelet[2700]: W0911 00:31:23.490794 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.490836 kubelet[2700]: E0911 00:31:23.490803 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.491043 kubelet[2700]: E0911 00:31:23.491018 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.491043 kubelet[2700]: W0911 00:31:23.491042 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.491105 kubelet[2700]: E0911 00:31:23.491052 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.491511 kubelet[2700]: E0911 00:31:23.491497 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492046 kubelet[2700]: W0911 00:31:23.491568 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492046 kubelet[2700]: E0911 00:31:23.491581 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.492046 kubelet[2700]: E0911 00:31:23.491785 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492046 kubelet[2700]: W0911 00:31:23.491795 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492046 kubelet[2700]: E0911 00:31:23.491807 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.492046 kubelet[2700]: E0911 00:31:23.491993 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492046 kubelet[2700]: W0911 00:31:23.492002 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492046 kubelet[2700]: E0911 00:31:23.492012 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.492236 kubelet[2700]: E0911 00:31:23.492197 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492236 kubelet[2700]: W0911 00:31:23.492206 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492236 kubelet[2700]: E0911 00:31:23.492221 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.492405 kubelet[2700]: E0911 00:31:23.492364 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492405 kubelet[2700]: W0911 00:31:23.492377 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492405 kubelet[2700]: E0911 00:31:23.492392 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.492578 kubelet[2700]: E0911 00:31:23.492563 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492620 kubelet[2700]: W0911 00:31:23.492576 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.492620 kubelet[2700]: E0911 00:31:23.492591 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:31:23.492948 kubelet[2700]: E0911 00:31:23.492931 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:31:23.492948 kubelet[2700]: W0911 00:31:23.492946 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:31:23.493009 kubelet[2700]: E0911 00:31:23.492956 2700 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:31:23.981389 containerd[1543]: time="2025-09-11T00:31:23.981334691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:23.982295 containerd[1543]: time="2025-09-11T00:31:23.982265946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:31:23.983716 containerd[1543]: time="2025-09-11T00:31:23.983663459Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:23.985769 containerd[1543]: time="2025-09-11T00:31:23.985728059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:23.986478 containerd[1543]: time="2025-09-11T00:31:23.986443827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.532763188s" Sep 11 00:31:23.986521 containerd[1543]: time="2025-09-11T00:31:23.986482571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:31:23.991684 containerd[1543]: time="2025-09-11T00:31:23.991644147Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:31:24.006305 containerd[1543]: time="2025-09-11T00:31:24.006246529Z" level=info msg="Container 2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:24.016110 containerd[1543]: time="2025-09-11T00:31:24.016060215Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\"" Sep 11 00:31:24.016733 containerd[1543]: time="2025-09-11T00:31:24.016680554Z" level=info msg="StartContainer for \"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\"" Sep 11 00:31:24.018087 containerd[1543]: time="2025-09-11T00:31:24.018057377Z" level=info msg="connecting to shim 2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856" address="unix:///run/containerd/s/ede337edd6f8d1ccbc747f65bf85a62222ea212ccc2c0f2f45831de8e3374d78" protocol=ttrpc version=3 Sep 11 00:31:24.037234 systemd[1]: Started cri-containerd-2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856.scope - libcontainer container 
2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856. Sep 11 00:31:24.086080 containerd[1543]: time="2025-09-11T00:31:24.086006344Z" level=info msg="StartContainer for \"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\" returns successfully" Sep 11 00:31:24.091720 systemd[1]: cri-containerd-2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856.scope: Deactivated successfully. Sep 11 00:31:24.095207 containerd[1543]: time="2025-09-11T00:31:24.095163434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\" id:\"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\" pid:3423 exited_at:{seconds:1757550684 nanos:94709438}" Sep 11 00:31:24.095207 containerd[1543]: time="2025-09-11T00:31:24.095201775Z" level=info msg="received exit event container_id:\"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\" id:\"2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856\" pid:3423 exited_at:{seconds:1757550684 nanos:94709438}" Sep 11 00:31:24.120343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b566023eac82671d74c163369d4d5ef03d76fa896bb157df0cef900e1c53856-rootfs.mount: Deactivated successfully. 
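The repeated FlexVolume failures above all share one root cause: with no executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the driver call produces empty output, and unmarshalling an empty string is what yields "unexpected end of JSON input". A minimal sketch of that failure path (a Python stand-in for the kubelet's Go-side parse; the success payload shape follows the FlexVolume driver convention and is illustrative, not the kubelet's actual source):

```python
import json

def parse_driver_output(output: str) -> dict:
    """Roughly what the kubelet does with a FlexVolume driver's stdout."""
    if not output.strip():
        # Mirrors Go's encoding/json error on empty input, as seen in the log.
        raise ValueError("unexpected end of JSON input")
    return json.loads(output)

# Missing executable -> empty stdout -> parse failure, over and over.
try:
    parse_driver_output("")
except ValueError as e:
    print(f"error: {e}")

# A present, working driver would answer "init" with something like:
healthy = '{"status": "Success", "capabilities": {"attach": false}}'
print(parse_driver_output(healthy)["status"])
```

Because the kubelet probes every directory under the plugin path on each sync, the same triplet (unmarshal error, driver-call warning, probe error) recurs until the binary appears or the directory is removed.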
Sep 11 00:31:24.386423 kubelet[2700]: I0911 00:31:24.386311 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:31:24.400781 kubelet[2700]: I0911 00:31:24.400711 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5575c77dbc-jvqsz" podStartSLOduration=3.407712796 podStartE2EDuration="6.40069506s" podCreationTimestamp="2025-09-11 00:31:18 +0000 UTC" firstStartedPulling="2025-09-11 00:31:19.460299984 +0000 UTC m=+20.228530337" lastFinishedPulling="2025-09-11 00:31:22.453282248 +0000 UTC m=+23.221512601" observedRunningTime="2025-09-11 00:31:23.395830904 +0000 UTC m=+24.164061267" watchObservedRunningTime="2025-09-11 00:31:24.40069506 +0000 UTC m=+25.168925423" Sep 11 00:31:25.324744 kubelet[2700]: E0911 00:31:25.324682 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:25.390697 containerd[1543]: time="2025-09-11T00:31:25.390662630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:31:27.324053 kubelet[2700]: E0911 00:31:27.323805 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:28.203089 containerd[1543]: time="2025-09-11T00:31:28.203007108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:28.203761 containerd[1543]: time="2025-09-11T00:31:28.203735969Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:31:28.204872 containerd[1543]: time="2025-09-11T00:31:28.204817294Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:28.206918 containerd[1543]: time="2025-09-11T00:31:28.206862032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:28.207367 containerd[1543]: time="2025-09-11T00:31:28.207332687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.816215671s" Sep 11 00:31:28.207367 containerd[1543]: time="2025-09-11T00:31:28.207360570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:31:28.212114 containerd[1543]: time="2025-09-11T00:31:28.212069611Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:31:28.221113 containerd[1543]: time="2025-09-11T00:31:28.221085056Z" level=info msg="Container c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:28.233006 containerd[1543]: time="2025-09-11T00:31:28.232964761Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\"" Sep 11 00:31:28.235062 containerd[1543]: time="2025-09-11T00:31:28.233542106Z" level=info msg="StartContainer for \"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\"" Sep 11 00:31:28.235062 containerd[1543]: time="2025-09-11T00:31:28.234812598Z" level=info msg="connecting to shim c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb" address="unix:///run/containerd/s/ede337edd6f8d1ccbc747f65bf85a62222ea212ccc2c0f2f45831de8e3374d78" protocol=ttrpc version=3 Sep 11 00:31:28.257163 systemd[1]: Started cri-containerd-c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb.scope - libcontainer container c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb. Sep 11 00:31:28.302395 containerd[1543]: time="2025-09-11T00:31:28.302344535Z" level=info msg="StartContainer for \"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\" returns successfully" Sep 11 00:31:29.326481 kubelet[2700]: E0911 00:31:29.326422 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:29.527821 systemd[1]: cri-containerd-c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb.scope: Deactivated successfully. Sep 11 00:31:29.528639 systemd[1]: cri-containerd-c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb.scope: Consumed 639ms CPU time, 172.8M memory peak, 3.5M read from disk, 171.3M written to disk. 
Sep 11 00:31:29.529747 containerd[1543]: time="2025-09-11T00:31:29.529714590Z" level=info msg="received exit event container_id:\"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\" id:\"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\" pid:3485 exited_at:{seconds:1757550689 nanos:529437668}" Sep 11 00:31:29.530236 containerd[1543]: time="2025-09-11T00:31:29.529932369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\" id:\"c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb\" pid:3485 exited_at:{seconds:1757550689 nanos:529437668}" Sep 11 00:31:29.592910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c82cba7cf902e834513480069649fe2c13a61f36f4a1b32ad07268c745e6d9fb-rootfs.mount: Deactivated successfully. Sep 11 00:31:29.627102 kubelet[2700]: I0911 00:31:29.627059 2700 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:31:29.901510 systemd[1]: Created slice kubepods-besteffort-pod612cc950_7987_4b06_8f21_d5f454e903b3.slice - libcontainer container kubepods-besteffort-pod612cc950_7987_4b06_8f21_d5f454e903b3.slice. Sep 11 00:31:29.912078 systemd[1]: Created slice kubepods-burstable-podbcf304be_820c_49a9_9083_afdf36dc2dcc.slice - libcontainer container kubepods-burstable-podbcf304be_820c_49a9_9083_afdf36dc2dcc.slice. Sep 11 00:31:29.921466 systemd[1]: Created slice kubepods-burstable-pod67ee7562_8c17_492b_8855_1e9f5785122c.slice - libcontainer container kubepods-burstable-pod67ee7562_8c17_492b_8855_1e9f5785122c.slice. Sep 11 00:31:29.928927 systemd[1]: Created slice kubepods-besteffort-pod31613f2f_53c8_48c8_a407_1e86cbe5b31a.slice - libcontainer container kubepods-besteffort-pod31613f2f_53c8_48c8_a407_1e86cbe5b31a.slice. 
Sep 11 00:31:29.933434 systemd[1]: Created slice kubepods-besteffort-poda893716b_fcc5_4f98_86ec_6a44b51140ab.slice - libcontainer container kubepods-besteffort-poda893716b_fcc5_4f98_86ec_6a44b51140ab.slice. Sep 11 00:31:29.940131 kubelet[2700]: I0911 00:31:29.940097 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knsm\" (UniqueName: \"kubernetes.io/projected/bcf304be-820c-49a9-9083-afdf36dc2dcc-kube-api-access-4knsm\") pod \"coredns-674b8bbfcf-gph9x\" (UID: \"bcf304be-820c-49a9-9083-afdf36dc2dcc\") " pod="kube-system/coredns-674b8bbfcf-gph9x" Sep 11 00:31:29.940131 kubelet[2700]: I0911 00:31:29.940135 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a893716b-fcc5-4f98-86ec-6a44b51140ab-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-j46cl\" (UID: \"a893716b-fcc5-4f98-86ec-6a44b51140ab\") " pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:29.940311 kubelet[2700]: I0911 00:31:29.940150 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a893716b-fcc5-4f98-86ec-6a44b51140ab-goldmane-key-pair\") pod \"goldmane-54d579b49d-j46cl\" (UID: \"a893716b-fcc5-4f98-86ec-6a44b51140ab\") " pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:29.940311 kubelet[2700]: I0911 00:31:29.940165 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5njzd\" (UniqueName: \"kubernetes.io/projected/a893716b-fcc5-4f98-86ec-6a44b51140ab-kube-api-access-5njzd\") pod \"goldmane-54d579b49d-j46cl\" (UID: \"a893716b-fcc5-4f98-86ec-6a44b51140ab\") " pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:29.940311 kubelet[2700]: I0911 00:31:29.940179 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zskg9\" (UniqueName: \"kubernetes.io/projected/67ee7562-8c17-492b-8855-1e9f5785122c-kube-api-access-zskg9\") pod \"coredns-674b8bbfcf-zrbxh\" (UID: \"67ee7562-8c17-492b-8855-1e9f5785122c\") " pod="kube-system/coredns-674b8bbfcf-zrbxh" Sep 11 00:31:29.940311 kubelet[2700]: I0911 00:31:29.940196 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmlt8\" (UniqueName: \"kubernetes.io/projected/67bb8c61-2f2e-46c2-a879-b0f78278865d-kube-api-access-fmlt8\") pod \"calico-apiserver-54bf5776bb-p5zpf\" (UID: \"67bb8c61-2f2e-46c2-a879-b0f78278865d\") " pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" Sep 11 00:31:29.940311 kubelet[2700]: I0911 00:31:29.940214 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf304be-820c-49a9-9083-afdf36dc2dcc-config-volume\") pod \"coredns-674b8bbfcf-gph9x\" (UID: \"bcf304be-820c-49a9-9083-afdf36dc2dcc\") " pod="kube-system/coredns-674b8bbfcf-gph9x" Sep 11 00:31:29.940431 kubelet[2700]: I0911 00:31:29.940229 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67bb8c61-2f2e-46c2-a879-b0f78278865d-calico-apiserver-certs\") pod \"calico-apiserver-54bf5776bb-p5zpf\" (UID: \"67bb8c61-2f2e-46c2-a879-b0f78278865d\") " pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" Sep 11 00:31:29.940431 kubelet[2700]: I0911 00:31:29.940244 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfms\" (UniqueName: \"kubernetes.io/projected/31613f2f-53c8-48c8-a407-1e86cbe5b31a-kube-api-access-hjfms\") pod \"calico-kube-controllers-66cc5b99d4-bnsq4\" (UID: \"31613f2f-53c8-48c8-a407-1e86cbe5b31a\") " pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" Sep 11 
00:31:29.940431 kubelet[2700]: I0911 00:31:29.940263 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee7562-8c17-492b-8855-1e9f5785122c-config-volume\") pod \"coredns-674b8bbfcf-zrbxh\" (UID: \"67ee7562-8c17-492b-8855-1e9f5785122c\") " pod="kube-system/coredns-674b8bbfcf-zrbxh" Sep 11 00:31:29.940431 kubelet[2700]: I0911 00:31:29.940278 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d238a88-a8a4-4120-b65a-06053649067f-calico-apiserver-certs\") pod \"calico-apiserver-54bf5776bb-tp65d\" (UID: \"4d238a88-a8a4-4120-b65a-06053649067f\") " pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" Sep 11 00:31:29.940431 kubelet[2700]: I0911 00:31:29.940291 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-ca-bundle\") pod \"whisker-8b67889cf-87x7f\" (UID: \"612cc950-7987-4b06-8f21-d5f454e903b3\") " pod="calico-system/whisker-8b67889cf-87x7f" Sep 11 00:31:29.940564 kubelet[2700]: I0911 00:31:29.940320 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr9c\" (UniqueName: \"kubernetes.io/projected/4d238a88-a8a4-4120-b65a-06053649067f-kube-api-access-dgr9c\") pod \"calico-apiserver-54bf5776bb-tp65d\" (UID: \"4d238a88-a8a4-4120-b65a-06053649067f\") " pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" Sep 11 00:31:29.940564 kubelet[2700]: I0911 00:31:29.940335 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpk4w\" (UniqueName: \"kubernetes.io/projected/612cc950-7987-4b06-8f21-d5f454e903b3-kube-api-access-fpk4w\") pod \"whisker-8b67889cf-87x7f\" (UID: 
\"612cc950-7987-4b06-8f21-d5f454e903b3\") " pod="calico-system/whisker-8b67889cf-87x7f" Sep 11 00:31:29.940564 kubelet[2700]: I0911 00:31:29.940353 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a893716b-fcc5-4f98-86ec-6a44b51140ab-config\") pod \"goldmane-54d579b49d-j46cl\" (UID: \"a893716b-fcc5-4f98-86ec-6a44b51140ab\") " pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:29.940564 kubelet[2700]: I0911 00:31:29.940366 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-backend-key-pair\") pod \"whisker-8b67889cf-87x7f\" (UID: \"612cc950-7987-4b06-8f21-d5f454e903b3\") " pod="calico-system/whisker-8b67889cf-87x7f" Sep 11 00:31:29.940564 kubelet[2700]: I0911 00:31:29.940379 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31613f2f-53c8-48c8-a407-1e86cbe5b31a-tigera-ca-bundle\") pod \"calico-kube-controllers-66cc5b99d4-bnsq4\" (UID: \"31613f2f-53c8-48c8-a407-1e86cbe5b31a\") " pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" Sep 11 00:31:29.941454 systemd[1]: Created slice kubepods-besteffort-pod67bb8c61_2f2e_46c2_a879_b0f78278865d.slice - libcontainer container kubepods-besteffort-pod67bb8c61_2f2e_46c2_a879_b0f78278865d.slice. Sep 11 00:31:29.947962 systemd[1]: Created slice kubepods-besteffort-pod4d238a88_a8a4_4120_b65a_06053649067f.slice - libcontainer container kubepods-besteffort-pod4d238a88_a8a4_4120_b65a_06053649067f.slice. 
Sep 11 00:31:30.206856 containerd[1543]: time="2025-09-11T00:31:30.206815588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8b67889cf-87x7f,Uid:612cc950-7987-4b06-8f21-d5f454e903b3,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:30.218713 containerd[1543]: time="2025-09-11T00:31:30.218669926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gph9x,Uid:bcf304be-820c-49a9-9083-afdf36dc2dcc,Namespace:kube-system,Attempt:0,}" Sep 11 00:31:30.225592 containerd[1543]: time="2025-09-11T00:31:30.225330213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zrbxh,Uid:67ee7562-8c17-492b-8855-1e9f5785122c,Namespace:kube-system,Attempt:0,}" Sep 11 00:31:30.233612 containerd[1543]: time="2025-09-11T00:31:30.233569510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cc5b99d4-bnsq4,Uid:31613f2f-53c8-48c8-a407-1e86cbe5b31a,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:30.238623 containerd[1543]: time="2025-09-11T00:31:30.238437256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j46cl,Uid:a893716b-fcc5-4f98-86ec-6a44b51140ab,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:30.266143 containerd[1543]: time="2025-09-11T00:31:30.266021460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-tp65d,Uid:4d238a88-a8a4-4120-b65a-06053649067f,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:31:30.266287 containerd[1543]: time="2025-09-11T00:31:30.266222979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-p5zpf,Uid:67bb8c61-2f2e-46c2-a879-b0f78278865d,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:31:30.374599 containerd[1543]: time="2025-09-11T00:31:30.374531829Z" level=error msg="Failed to destroy network for sandbox \"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.376376 containerd[1543]: time="2025-09-11T00:31:30.376199035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cc5b99d4-bnsq4,Uid:31613f2f-53c8-48c8-a407-1e86cbe5b31a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.376490 kubelet[2700]: E0911 00:31:30.376439 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.376794 kubelet[2700]: E0911 00:31:30.376528 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" Sep 11 00:31:30.376794 kubelet[2700]: E0911 00:31:30.376560 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" Sep 11 00:31:30.376794 kubelet[2700]: E0911 00:31:30.376616 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66cc5b99d4-bnsq4_calico-system(31613f2f-53c8-48c8-a407-1e86cbe5b31a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66cc5b99d4-bnsq4_calico-system(31613f2f-53c8-48c8-a407-1e86cbe5b31a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c4baf788d34709e75ad167d5e2654359e37861d66bb18c1a32411f3d06102c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" podUID="31613f2f-53c8-48c8-a407-1e86cbe5b31a" Sep 11 00:31:30.399283 containerd[1543]: time="2025-09-11T00:31:30.399220246Z" level=error msg="Failed to destroy network for sandbox \"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.399690 containerd[1543]: time="2025-09-11T00:31:30.399224424Z" level=error msg="Failed to destroy network for sandbox \"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.401561 containerd[1543]: time="2025-09-11T00:31:30.401517116Z" level=error msg="Failed to destroy network for sandbox 
\"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.402672 containerd[1543]: time="2025-09-11T00:31:30.402607958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-p5zpf,Uid:67bb8c61-2f2e-46c2-a879-b0f78278865d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.402843 kubelet[2700]: E0911 00:31:30.402800 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.402902 kubelet[2700]: E0911 00:31:30.402859 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" Sep 11 00:31:30.402902 kubelet[2700]: E0911 00:31:30.402880 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" Sep 11 00:31:30.402956 kubelet[2700]: E0911 00:31:30.402933 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bf5776bb-p5zpf_calico-apiserver(67bb8c61-2f2e-46c2-a879-b0f78278865d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bf5776bb-p5zpf_calico-apiserver(67bb8c61-2f2e-46c2-a879-b0f78278865d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f86f029310620162ad06226f0627c29a4c827532f0723fbc2a18908fe9d0561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" podUID="67bb8c61-2f2e-46c2-a879-b0f78278865d" Sep 11 00:31:30.404640 containerd[1543]: time="2025-09-11T00:31:30.404543609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8b67889cf-87x7f,Uid:612cc950-7987-4b06-8f21-d5f454e903b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.404713 kubelet[2700]: E0911 00:31:30.404681 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.404713 kubelet[2700]: E0911 00:31:30.404707 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8b67889cf-87x7f" Sep 11 00:31:30.404768 kubelet[2700]: E0911 00:31:30.404751 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8b67889cf-87x7f" Sep 11 00:31:30.404893 kubelet[2700]: E0911 00:31:30.404861 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8b67889cf-87x7f_calico-system(612cc950-7987-4b06-8f21-d5f454e903b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8b67889cf-87x7f_calico-system(612cc950-7987-4b06-8f21-d5f454e903b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45cc967cf06d8c20262129cd6d34261a50d505e5bc51bf01223a38475159cef1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8b67889cf-87x7f" podUID="612cc950-7987-4b06-8f21-d5f454e903b3" Sep 11 00:31:30.406528 containerd[1543]: 
time="2025-09-11T00:31:30.406489058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zrbxh,Uid:67ee7562-8c17-492b-8855-1e9f5785122c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.408426 kubelet[2700]: E0911 00:31:30.407550 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.408426 kubelet[2700]: E0911 00:31:30.407604 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zrbxh" Sep 11 00:31:30.408426 kubelet[2700]: E0911 00:31:30.407618 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zrbxh" Sep 11 00:31:30.408631 kubelet[2700]: 
E0911 00:31:30.407649 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zrbxh_kube-system(67ee7562-8c17-492b-8855-1e9f5785122c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zrbxh_kube-system(67ee7562-8c17-492b-8855-1e9f5785122c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4a14945194c4793d1968f849bb67b0eb3f3e14b547ba36d6464edf1266a93b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zrbxh" podUID="67ee7562-8c17-492b-8855-1e9f5785122c" Sep 11 00:31:30.409536 containerd[1543]: time="2025-09-11T00:31:30.409516433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:31:30.424461 containerd[1543]: time="2025-09-11T00:31:30.424388425Z" level=error msg="Failed to destroy network for sandbox \"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.427656 containerd[1543]: time="2025-09-11T00:31:30.427616677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gph9x,Uid:bcf304be-820c-49a9-9083-afdf36dc2dcc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.429003 kubelet[2700]: E0911 00:31:30.427802 2700 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.429003 kubelet[2700]: E0911 00:31:30.427848 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gph9x" Sep 11 00:31:30.429003 kubelet[2700]: E0911 00:31:30.427869 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gph9x" Sep 11 00:31:30.429190 kubelet[2700]: E0911 00:31:30.427913 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gph9x_kube-system(bcf304be-820c-49a9-9083-afdf36dc2dcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gph9x_kube-system(bcf304be-820c-49a9-9083-afdf36dc2dcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e817c3fc039983397be79556eda26070f16b46b40e0bb6f20be53209f8260193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gph9x" podUID="bcf304be-820c-49a9-9083-afdf36dc2dcc" Sep 11 00:31:30.429327 containerd[1543]: time="2025-09-11T00:31:30.429270679Z" level=error msg="Failed to destroy network for sandbox \"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.430748 containerd[1543]: time="2025-09-11T00:31:30.430617792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-tp65d,Uid:4d238a88-a8a4-4120-b65a-06053649067f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.432677 kubelet[2700]: E0911 00:31:30.432638 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.432677 kubelet[2700]: E0911 00:31:30.432677 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" Sep 11 00:31:30.432875 kubelet[2700]: E0911 00:31:30.432694 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" Sep 11 00:31:30.432875 kubelet[2700]: E0911 00:31:30.432741 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bf5776bb-tp65d_calico-apiserver(4d238a88-a8a4-4120-b65a-06053649067f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bf5776bb-tp65d_calico-apiserver(4d238a88-a8a4-4120-b65a-06053649067f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb34da4f2892bfe42a22ca0ae1cfcc46438cb04cd56f159567c49cdce1f05100\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" podUID="4d238a88-a8a4-4120-b65a-06053649067f" Sep 11 00:31:30.449908 containerd[1543]: time="2025-09-11T00:31:30.449856960Z" level=error msg="Failed to destroy network for sandbox \"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.451161 containerd[1543]: time="2025-09-11T00:31:30.451118894Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-j46cl,Uid:a893716b-fcc5-4f98-86ec-6a44b51140ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.451390 kubelet[2700]: E0911 00:31:30.451348 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:30.451445 kubelet[2700]: E0911 00:31:30.451412 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:30.451445 kubelet[2700]: E0911 00:31:30.451434 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-j46cl" Sep 11 00:31:30.451524 kubelet[2700]: E0911 00:31:30.451497 2700 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-j46cl_calico-system(a893716b-fcc5-4f98-86ec-6a44b51140ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-j46cl_calico-system(a893716b-fcc5-4f98-86ec-6a44b51140ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aa59a39adc2b9fa2d97682135c53dc47d9cf44469a6aa0c5b83cc2179952f3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-j46cl" podUID="a893716b-fcc5-4f98-86ec-6a44b51140ab" Sep 11 00:31:31.331092 systemd[1]: Created slice kubepods-besteffort-podd6874fb2_8b46_4762_a169_00ccac62f67f.slice - libcontainer container kubepods-besteffort-podd6874fb2_8b46_4762_a169_00ccac62f67f.slice. Sep 11 00:31:31.333481 containerd[1543]: time="2025-09-11T00:31:31.333432383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mn9tv,Uid:d6874fb2-8b46-4762-a169-00ccac62f67f,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:31.643213 containerd[1543]: time="2025-09-11T00:31:31.643049853Z" level=error msg="Failed to destroy network for sandbox \"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:31.645308 systemd[1]: run-netns-cni\x2dcccade52\x2db631\x2da5cb\x2d06d7\x2d721ddb60acfe.mount: Deactivated successfully. 
Sep 11 00:31:32.164542 containerd[1543]: time="2025-09-11T00:31:32.164461809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mn9tv,Uid:d6874fb2-8b46-4762-a169-00ccac62f67f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:32.164758 kubelet[2700]: E0911 00:31:32.164708 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:31:32.165140 kubelet[2700]: E0911 00:31:32.164777 2700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mn9tv" Sep 11 00:31:32.165140 kubelet[2700]: E0911 00:31:32.164802 2700 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mn9tv" 
Sep 11 00:31:32.165140 kubelet[2700]: E0911 00:31:32.164869 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mn9tv_calico-system(d6874fb2-8b46-4762-a169-00ccac62f67f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mn9tv_calico-system(d6874fb2-8b46-4762-a169-00ccac62f67f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe8de5fc70bf5ad00628ae57fe8f5f51486029bdec1572a4200322b1e134011e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mn9tv" podUID="d6874fb2-8b46-4762-a169-00ccac62f67f" Sep 11 00:31:38.352767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722063713.mount: Deactivated successfully. Sep 11 00:31:39.237175 containerd[1543]: time="2025-09-11T00:31:39.237099814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:39.237852 containerd[1543]: time="2025-09-11T00:31:39.237793146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:31:39.239024 containerd[1543]: time="2025-09-11T00:31:39.238968564Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:39.240827 containerd[1543]: time="2025-09-11T00:31:39.240781759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:39.241257 containerd[1543]: time="2025-09-11T00:31:39.241224952Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.831623801s" Sep 11 00:31:39.241257 containerd[1543]: time="2025-09-11T00:31:39.241253436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:31:39.262493 containerd[1543]: time="2025-09-11T00:31:39.262420704Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:31:39.279189 containerd[1543]: time="2025-09-11T00:31:39.279137835Z" level=info msg="Container 06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:39.293024 containerd[1543]: time="2025-09-11T00:31:39.292961820Z" level=info msg="CreateContainer within sandbox \"4fc75c7c72d7d9a175b1adefd3828f880ba1d39e66936643485eb7f5d8fcd18e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\"" Sep 11 00:31:39.293632 containerd[1543]: time="2025-09-11T00:31:39.293580562Z" level=info msg="StartContainer for \"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\"" Sep 11 00:31:39.295707 containerd[1543]: time="2025-09-11T00:31:39.295666059Z" level=info msg="connecting to shim 06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf" address="unix:///run/containerd/s/ede337edd6f8d1ccbc747f65bf85a62222ea212ccc2c0f2f45831de8e3374d78" protocol=ttrpc version=3 Sep 11 00:31:39.314282 systemd[1]: Started 
cri-containerd-06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf.scope - libcontainer container 06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf. Sep 11 00:31:39.366875 containerd[1543]: time="2025-09-11T00:31:39.366825470Z" level=info msg="StartContainer for \"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\" returns successfully" Sep 11 00:31:39.446541 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:31:39.447908 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:31:39.558142 kubelet[2700]: I0911 00:31:39.557767 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fx68c" podStartSLOduration=0.842027412 podStartE2EDuration="20.55774764s" podCreationTimestamp="2025-09-11 00:31:19 +0000 UTC" firstStartedPulling="2025-09-11 00:31:19.526233933 +0000 UTC m=+20.294464287" lastFinishedPulling="2025-09-11 00:31:39.241954162 +0000 UTC m=+40.010184515" observedRunningTime="2025-09-11 00:31:39.456229057 +0000 UTC m=+40.224459410" watchObservedRunningTime="2025-09-11 00:31:39.55774764 +0000 UTC m=+40.325977993" Sep 11 00:31:39.588412 containerd[1543]: time="2025-09-11T00:31:39.588356354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\" id:\"f6259f99d549dfa2fd1a14ba0ae9dbad80c2bd1a4ae95b03f04a3eea185acb09\" pid:3856 exit_status:1 exited_at:{seconds:1757550699 nanos:587855022}" Sep 11 00:31:39.600942 kubelet[2700]: I0911 00:31:39.600107 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-ca-bundle\") pod \"612cc950-7987-4b06-8f21-d5f454e903b3\" (UID: \"612cc950-7987-4b06-8f21-d5f454e903b3\") " Sep 11 00:31:39.600942 kubelet[2700]: I0911 00:31:39.600161 2700 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpk4w\" (UniqueName: \"kubernetes.io/projected/612cc950-7987-4b06-8f21-d5f454e903b3-kube-api-access-fpk4w\") pod \"612cc950-7987-4b06-8f21-d5f454e903b3\" (UID: \"612cc950-7987-4b06-8f21-d5f454e903b3\") " Sep 11 00:31:39.600942 kubelet[2700]: I0911 00:31:39.600197 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-backend-key-pair\") pod \"612cc950-7987-4b06-8f21-d5f454e903b3\" (UID: \"612cc950-7987-4b06-8f21-d5f454e903b3\") " Sep 11 00:31:39.601173 kubelet[2700]: I0911 00:31:39.601098 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "612cc950-7987-4b06-8f21-d5f454e903b3" (UID: "612cc950-7987-4b06-8f21-d5f454e903b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:31:39.605532 systemd[1]: var-lib-kubelet-pods-612cc950\x2d7987\x2d4b06\x2d8f21\x2dd5f454e903b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:31:39.606100 kubelet[2700]: I0911 00:31:39.605885 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612cc950-7987-4b06-8f21-d5f454e903b3-kube-api-access-fpk4w" (OuterVolumeSpecName: "kube-api-access-fpk4w") pod "612cc950-7987-4b06-8f21-d5f454e903b3" (UID: "612cc950-7987-4b06-8f21-d5f454e903b3"). InnerVolumeSpecName "kube-api-access-fpk4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:31:39.608957 systemd[1]: var-lib-kubelet-pods-612cc950\x2d7987\x2d4b06\x2d8f21\x2dd5f454e903b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpk4w.mount: Deactivated successfully. 
Sep 11 00:31:39.609235 kubelet[2700]: I0911 00:31:39.609207 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "612cc950-7987-4b06-8f21-d5f454e903b3" (UID: "612cc950-7987-4b06-8f21-d5f454e903b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:31:39.701105 kubelet[2700]: I0911 00:31:39.701066 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:31:39.701105 kubelet[2700]: I0911 00:31:39.701095 2700 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fpk4w\" (UniqueName: \"kubernetes.io/projected/612cc950-7987-4b06-8f21-d5f454e903b3-kube-api-access-fpk4w\") on node \"localhost\" DevicePath \"\"" Sep 11 00:31:39.701105 kubelet[2700]: I0911 00:31:39.701104 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/612cc950-7987-4b06-8f21-d5f454e903b3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:31:40.446763 systemd[1]: Removed slice kubepods-besteffort-pod612cc950_7987_4b06_8f21_d5f454e903b3.slice - libcontainer container kubepods-besteffort-pod612cc950_7987_4b06_8f21_d5f454e903b3.slice. Sep 11 00:31:40.513488 systemd[1]: Created slice kubepods-besteffort-pod091812d0_1a5c_4df8_a568_57e1c182b7f0.slice - libcontainer container kubepods-besteffort-pod091812d0_1a5c_4df8_a568_57e1c182b7f0.slice. 
Sep 11 00:31:40.539137 containerd[1543]: time="2025-09-11T00:31:40.539076190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\" id:\"f7d4fad43da0fc8de3a6c3df3f45ae86324d76d9826b0769fb92c296ee13c116\" pid:3903 exit_status:1 exited_at:{seconds:1757550700 nanos:538686559}" Sep 11 00:31:40.607138 kubelet[2700]: I0911 00:31:40.607103 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/091812d0-1a5c-4df8-a568-57e1c182b7f0-whisker-backend-key-pair\") pod \"whisker-7849657554-dxjmd\" (UID: \"091812d0-1a5c-4df8-a568-57e1c182b7f0\") " pod="calico-system/whisker-7849657554-dxjmd" Sep 11 00:31:40.607138 kubelet[2700]: I0911 00:31:40.607136 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwtm\" (UniqueName: \"kubernetes.io/projected/091812d0-1a5c-4df8-a568-57e1c182b7f0-kube-api-access-mcwtm\") pod \"whisker-7849657554-dxjmd\" (UID: \"091812d0-1a5c-4df8-a568-57e1c182b7f0\") " pod="calico-system/whisker-7849657554-dxjmd" Sep 11 00:31:40.607138 kubelet[2700]: I0911 00:31:40.607152 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/091812d0-1a5c-4df8-a568-57e1c182b7f0-whisker-ca-bundle\") pod \"whisker-7849657554-dxjmd\" (UID: \"091812d0-1a5c-4df8-a568-57e1c182b7f0\") " pod="calico-system/whisker-7849657554-dxjmd" Sep 11 00:31:40.819600 containerd[1543]: time="2025-09-11T00:31:40.819530033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7849657554-dxjmd,Uid:091812d0-1a5c-4df8-a568-57e1c182b7f0,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:41.226984 systemd-networkd[1456]: cali51590b8f679: Link UP Sep 11 00:31:41.227633 systemd-networkd[1456]: cali51590b8f679: Gained carrier Sep 11 
00:31:41.243060 containerd[1543]: 2025-09-11 00:31:41.079 [INFO][4022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:41.243060 containerd[1543]: 2025-09-11 00:31:41.100 [INFO][4022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7849657554--dxjmd-eth0 whisker-7849657554- calico-system 091812d0-1a5c-4df8-a568-57e1c182b7f0 921 0 2025-09-11 00:31:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7849657554 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7849657554-dxjmd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali51590b8f679 [] [] }} ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-" Sep 11 00:31:41.243060 containerd[1543]: 2025-09-11 00:31:41.100 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243060 containerd[1543]: 2025-09-11 00:31:41.177 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" HandleID="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Workload="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.177 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" 
HandleID="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Workload="localhost-k8s-whisker--7849657554--dxjmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7849657554-dxjmd", "timestamp":"2025-09-11 00:31:41.177155356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.177 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.178 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.178 [INFO][4037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.186 [INFO][4037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" host="localhost" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.193 [INFO][4037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.198 [INFO][4037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.200 [INFO][4037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.202 [INFO][4037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:41.243306 containerd[1543]: 2025-09-11 00:31:41.202 
[INFO][4037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" host="localhost" Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.204 [INFO][4037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4 Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.209 [INFO][4037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" host="localhost" Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.215 [INFO][4037] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" host="localhost" Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.215 [INFO][4037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" host="localhost" Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.215 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:31:41.243546 containerd[1543]: 2025-09-11 00:31:41.215 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" HandleID="k8s-pod-network.a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Workload="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243675 containerd[1543]: 2025-09-11 00:31:41.218 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7849657554--dxjmd-eth0", GenerateName:"whisker-7849657554-", Namespace:"calico-system", SelfLink:"", UID:"091812d0-1a5c-4df8-a568-57e1c182b7f0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7849657554", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7849657554-dxjmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali51590b8f679", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:41.243675 containerd[1543]: 2025-09-11 00:31:41.218 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243751 containerd[1543]: 2025-09-11 00:31:41.218 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51590b8f679 ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243751 containerd[1543]: 2025-09-11 00:31:41.227 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.243797 containerd[1543]: 2025-09-11 00:31:41.227 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7849657554--dxjmd-eth0", GenerateName:"whisker-7849657554-", Namespace:"calico-system", SelfLink:"", UID:"091812d0-1a5c-4df8-a568-57e1c182b7f0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7849657554", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4", Pod:"whisker-7849657554-dxjmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali51590b8f679", MAC:"8e:da:af:8a:f4:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:41.243845 containerd[1543]: 2025-09-11 00:31:41.239 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" Namespace="calico-system" Pod="whisker-7849657554-dxjmd" WorkloadEndpoint="localhost-k8s-whisker--7849657554--dxjmd-eth0" Sep 11 00:31:41.325225 containerd[1543]: time="2025-09-11T00:31:41.325161529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j46cl,Uid:a893716b-fcc5-4f98-86ec-6a44b51140ab,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:41.327055 kubelet[2700]: I0911 00:31:41.326990 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612cc950-7987-4b06-8f21-d5f454e903b3" path="/var/lib/kubelet/pods/612cc950-7987-4b06-8f21-d5f454e903b3/volumes" Sep 11 00:31:41.550134 containerd[1543]: time="2025-09-11T00:31:41.549985538Z" level=info msg="connecting to shim 
a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4" address="unix:///run/containerd/s/e25005c68f21bdc57a94c816d4c70e58dac8546843f8a2654b5d8b8a1f699506" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:41.577319 systemd[1]: Started cri-containerd-a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4.scope - libcontainer container a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4. Sep 11 00:31:41.594588 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:41.609305 systemd-networkd[1456]: cali8934b66aa30: Link UP Sep 11 00:31:41.610259 systemd-networkd[1456]: cali8934b66aa30: Gained carrier Sep 11 00:31:41.625912 containerd[1543]: 2025-09-11 00:31:41.529 [INFO][4053] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:41.625912 containerd[1543]: 2025-09-11 00:31:41.540 [INFO][4053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--j46cl-eth0 goldmane-54d579b49d- calico-system a893716b-fcc5-4f98-86ec-6a44b51140ab 851 0 2025-09-11 00:31:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-j46cl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8934b66aa30 [] [] }} ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-" Sep 11 00:31:41.625912 containerd[1543]: 2025-09-11 00:31:41.540 [INFO][4053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" 
Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.625912 containerd[1543]: 2025-09-11 00:31:41.570 [INFO][4072] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" HandleID="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Workload="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.571 [INFO][4072] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" HandleID="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Workload="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-j46cl", "timestamp":"2025-09-11 00:31:41.570796349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.571 [INFO][4072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.571 [INFO][4072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.571 [INFO][4072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.578 [INFO][4072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" host="localhost" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.584 [INFO][4072] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.589 [INFO][4072] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.590 [INFO][4072] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.592 [INFO][4072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:41.626231 containerd[1543]: 2025-09-11 00:31:41.592 [INFO][4072] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" host="localhost" Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.594 [INFO][4072] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.597 [INFO][4072] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" host="localhost" Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.604 [INFO][4072] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" host="localhost" Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.604 [INFO][4072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" host="localhost" Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.604 [INFO][4072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:31:41.626457 containerd[1543]: 2025-09-11 00:31:41.604 [INFO][4072] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" HandleID="k8s-pod-network.7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Workload="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.626614 containerd[1543]: 2025-09-11 00:31:41.607 [INFO][4053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--j46cl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a893716b-fcc5-4f98-86ec-6a44b51140ab", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-j46cl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8934b66aa30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:41.626614 containerd[1543]: 2025-09-11 00:31:41.607 [INFO][4053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.626693 containerd[1543]: 2025-09-11 00:31:41.607 [INFO][4053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8934b66aa30 ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.626693 containerd[1543]: 2025-09-11 00:31:41.610 [INFO][4053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.626751 containerd[1543]: 2025-09-11 00:31:41.611 [INFO][4053] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--j46cl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a893716b-fcc5-4f98-86ec-6a44b51140ab", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c", Pod:"goldmane-54d579b49d-j46cl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8934b66aa30", MAC:"c6:85:25:3d:7a:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:41.626804 containerd[1543]: 2025-09-11 00:31:41.620 [INFO][4053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" Namespace="calico-system" Pod="goldmane-54d579b49d-j46cl" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--j46cl-eth0" Sep 11 00:31:41.631730 containerd[1543]: time="2025-09-11T00:31:41.631683239Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7849657554-dxjmd,Uid:091812d0-1a5c-4df8-a568-57e1c182b7f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4\"" Sep 11 00:31:41.632939 containerd[1543]: time="2025-09-11T00:31:41.632914852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:31:41.650875 containerd[1543]: time="2025-09-11T00:31:41.650817061Z" level=info msg="connecting to shim 7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c" address="unix:///run/containerd/s/e5cd58cba8d25c56a197ccfffb25f2bd65870db5bc88f42e4946c2ab0ac0cb0c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:41.681256 systemd[1]: Started cri-containerd-7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c.scope - libcontainer container 7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c. Sep 11 00:31:41.697439 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:41.734433 containerd[1543]: time="2025-09-11T00:31:41.734389332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j46cl,Uid:a893716b-fcc5-4f98-86ec-6a44b51140ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c\"" Sep 11 00:31:42.479253 systemd-networkd[1456]: cali51590b8f679: Gained IPv6LL Sep 11 00:31:43.376296 systemd-networkd[1456]: cali8934b66aa30: Gained IPv6LL Sep 11 00:31:43.432831 containerd[1543]: time="2025-09-11T00:31:43.432779494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:43.433598 containerd[1543]: time="2025-09-11T00:31:43.433569447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:31:43.434631 containerd[1543]: 
time="2025-09-11T00:31:43.434597898Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:43.436726 containerd[1543]: time="2025-09-11T00:31:43.436691699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:43.437406 containerd[1543]: time="2025-09-11T00:31:43.437374642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.80442843s" Sep 11 00:31:43.437451 containerd[1543]: time="2025-09-11T00:31:43.437406331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:31:43.449938 containerd[1543]: time="2025-09-11T00:31:43.449908855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:31:43.454252 containerd[1543]: time="2025-09-11T00:31:43.454218417Z" level=info msg="CreateContainer within sandbox \"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:31:43.462095 containerd[1543]: time="2025-09-11T00:31:43.462067434Z" level=info msg="Container fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:43.468873 containerd[1543]: time="2025-09-11T00:31:43.468834839Z" level=info msg="CreateContainer within sandbox 
\"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658\"" Sep 11 00:31:43.469308 containerd[1543]: time="2025-09-11T00:31:43.469275617Z" level=info msg="StartContainer for \"fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658\"" Sep 11 00:31:43.470183 containerd[1543]: time="2025-09-11T00:31:43.470153316Z" level=info msg="connecting to shim fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658" address="unix:///run/containerd/s/e25005c68f21bdc57a94c816d4c70e58dac8546843f8a2654b5d8b8a1f699506" protocol=ttrpc version=3 Sep 11 00:31:43.498200 systemd[1]: Started cri-containerd-fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658.scope - libcontainer container fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658. Sep 11 00:31:43.547050 containerd[1543]: time="2025-09-11T00:31:43.546386898Z" level=info msg="StartContainer for \"fe8758be94bffe7ce3144cad242b7b5be84cf1a49af7d0b5a2eca6d9d45b7658\" returns successfully" Sep 11 00:31:44.324871 containerd[1543]: time="2025-09-11T00:31:44.324819710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-p5zpf,Uid:67bb8c61-2f2e-46c2-a879-b0f78278865d,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:31:44.325023 containerd[1543]: time="2025-09-11T00:31:44.324828827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zrbxh,Uid:67ee7562-8c17-492b-8855-1e9f5785122c,Namespace:kube-system,Attempt:0,}" Sep 11 00:31:44.325311 containerd[1543]: time="2025-09-11T00:31:44.325253434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gph9x,Uid:bcf304be-820c-49a9-9083-afdf36dc2dcc,Namespace:kube-system,Attempt:0,}" Sep 11 00:31:44.453018 systemd-networkd[1456]: cali133257f01f8: Link UP Sep 11 00:31:44.453803 systemd-networkd[1456]: cali133257f01f8: Gained 
carrier Sep 11 00:31:44.467870 containerd[1543]: 2025-09-11 00:31:44.362 [INFO][4308] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:44.467870 containerd[1543]: 2025-09-11 00:31:44.377 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gph9x-eth0 coredns-674b8bbfcf- kube-system bcf304be-820c-49a9-9083-afdf36dc2dcc 848 0 2025-09-11 00:31:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gph9x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali133257f01f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-" Sep 11 00:31:44.467870 containerd[1543]: 2025-09-11 00:31:44.377 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.467870 containerd[1543]: 2025-09-11 00:31:44.411 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" HandleID="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Workload="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.411 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" 
HandleID="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Workload="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gph9x", "timestamp":"2025-09-11 00:31:44.411244233 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.411 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.411 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.411 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.418 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" host="localhost" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.424 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.428 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.430 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.432 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.468451 containerd[1543]: 2025-09-11 00:31:44.432 
[INFO][4332] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" host="localhost" Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.433 [INFO][4332] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.437 [INFO][4332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" host="localhost" Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.443 [INFO][4332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" host="localhost" Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.443 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" host="localhost" Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.443 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:31:44.468763 containerd[1543]: 2025-09-11 00:31:44.444 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" HandleID="k8s-pod-network.19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Workload="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.468930 containerd[1543]: 2025-09-11 00:31:44.448 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gph9x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bcf304be-820c-49a9-9083-afdf36dc2dcc", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gph9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali133257f01f8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.469172 containerd[1543]: 2025-09-11 00:31:44.450 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.469172 containerd[1543]: 2025-09-11 00:31:44.450 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali133257f01f8 ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.469172 containerd[1543]: 2025-09-11 00:31:44.454 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.469292 containerd[1543]: 2025-09-11 00:31:44.455 [INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gph9x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bcf304be-820c-49a9-9083-afdf36dc2dcc", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a", Pod:"coredns-674b8bbfcf-gph9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali133257f01f8", MAC:"a2:69:df:73:50:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.469292 containerd[1543]: 2025-09-11 00:31:44.464 [INFO][4308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gph9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gph9x-eth0" Sep 11 00:31:44.492475 containerd[1543]: time="2025-09-11T00:31:44.492428228Z" level=info msg="connecting to shim 19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a" address="unix:///run/containerd/s/0ab8533936ea8895ece5d4ee205cc5f45e8b385801cce4b0736cdad33f06e5c4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:44.517211 systemd[1]: Started cri-containerd-19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a.scope - libcontainer container 19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a. Sep 11 00:31:44.532432 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:44.558107 systemd-networkd[1456]: calib6b3a1f9dda: Link UP Sep 11 00:31:44.558451 systemd-networkd[1456]: calib6b3a1f9dda: Gained carrier Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.360 [INFO][4283] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.374 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0 calico-apiserver-54bf5776bb- calico-apiserver 67bb8c61-2f2e-46c2-a879-b0f78278865d 852 0 2025-09-11 00:31:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bf5776bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bf5776bb-p5zpf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6b3a1f9dda [] [] }} 
ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.374 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" HandleID="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Workload="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" HandleID="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Workload="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bf5776bb-p5zpf", "timestamp":"2025-09-11 00:31:44.416240814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.443 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.444 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.519 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.526 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.530 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.532 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.534 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.534 [INFO][4334] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.536 [INFO][4334] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.541 [INFO][4334] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.545 [INFO][4334] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.545 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" host="localhost" Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.546 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:31:44.576146 containerd[1543]: 2025-09-11 00:31:44.546 [INFO][4334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" HandleID="k8s-pod-network.ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Workload="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576745 containerd[1543]: 2025-09-11 00:31:44.552 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0", GenerateName:"calico-apiserver-54bf5776bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"67bb8c61-2f2e-46c2-a879-b0f78278865d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"54bf5776bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bf5776bb-p5zpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6b3a1f9dda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.576745 containerd[1543]: 2025-09-11 00:31:44.552 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576745 containerd[1543]: 2025-09-11 00:31:44.552 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6b3a1f9dda ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576745 containerd[1543]: 2025-09-11 00:31:44.559 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576745 
containerd[1543]: 2025-09-11 00:31:44.559 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0", GenerateName:"calico-apiserver-54bf5776bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"67bb8c61-2f2e-46c2-a879-b0f78278865d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bf5776bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f", Pod:"calico-apiserver-54bf5776bb-p5zpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6b3a1f9dda", MAC:"c2:08:2d:42:d5:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.576745 containerd[1543]: 2025-09-11 
00:31:44.573 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-p5zpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--p5zpf-eth0" Sep 11 00:31:44.576745 containerd[1543]: time="2025-09-11T00:31:44.576270222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gph9x,Uid:bcf304be-820c-49a9-9083-afdf36dc2dcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a\"" Sep 11 00:31:44.582511 containerd[1543]: time="2025-09-11T00:31:44.582475142Z" level=info msg="CreateContainer within sandbox \"19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:31:44.612869 containerd[1543]: time="2025-09-11T00:31:44.612820843Z" level=info msg="Container 17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:44.618156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127189034.mount: Deactivated successfully. 
Sep 11 00:31:44.623480 containerd[1543]: time="2025-09-11T00:31:44.623439669Z" level=info msg="connecting to shim ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f" address="unix:///run/containerd/s/257dd9c70fd872ea358d5de491503690dc3cc54734d5c4c4f74cb3887768f269" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:44.628213 containerd[1543]: time="2025-09-11T00:31:44.628078097Z" level=info msg="CreateContainer within sandbox \"19fb330d03cd2785e727fdc484c91b433cc6925f56f847036d04106b76a7f01a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6\"" Sep 11 00:31:44.628905 containerd[1543]: time="2025-09-11T00:31:44.628876556Z" level=info msg="StartContainer for \"17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6\"" Sep 11 00:31:44.629978 containerd[1543]: time="2025-09-11T00:31:44.629937668Z" level=info msg="connecting to shim 17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6" address="unix:///run/containerd/s/0ab8533936ea8895ece5d4ee205cc5f45e8b385801cce4b0736cdad33f06e5c4" protocol=ttrpc version=3 Sep 11 00:31:44.660304 systemd-networkd[1456]: cali63e58e8d5bb: Link UP Sep 11 00:31:44.661067 systemd-networkd[1456]: cali63e58e8d5bb: Gained carrier Sep 11 00:31:44.661184 systemd[1]: Started cri-containerd-ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f.scope - libcontainer container ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f. Sep 11 00:31:44.664781 systemd[1]: Started cri-containerd-17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6.scope - libcontainer container 17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6. 
Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.359 [INFO][4295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.374 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0 coredns-674b8bbfcf- kube-system 67ee7562-8c17-492b-8855-1e9f5785122c 849 0 2025-09-11 00:31:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zrbxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63e58e8d5bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.374 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" HandleID="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Workload="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" 
HandleID="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Workload="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059ab30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zrbxh", "timestamp":"2025-09-11 00:31:44.416726706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.416 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.546 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.546 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.619 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.626 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.633 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.634 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.636 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.636 
[INFO][4331] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.640 [INFO][4331] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768 Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.646 [INFO][4331] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.654 [INFO][4331] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.654 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" host="localhost" Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.654 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:31:44.679538 containerd[1543]: 2025-09-11 00:31:44.654 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" HandleID="k8s-pod-network.7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Workload="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.679757 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.658 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67ee7562-8c17-492b-8855-1e9f5785122c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zrbxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63e58e8d5bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.658 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.658 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63e58e8d5bb ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.662 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.666 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67ee7562-8c17-492b-8855-1e9f5785122c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768", Pod:"coredns-674b8bbfcf-zrbxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63e58e8d5bb", MAC:"52:1a:e1:50:ec:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:44.680169 containerd[1543]: 2025-09-11 00:31:44.676 [INFO][4295] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" Namespace="kube-system" Pod="coredns-674b8bbfcf-zrbxh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zrbxh-eth0" Sep 11 00:31:44.705345 containerd[1543]: time="2025-09-11T00:31:44.705291356Z" level=info msg="connecting to shim 7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768" address="unix:///run/containerd/s/29af852960badbd2b09caaa81699ab108a4cbd0427a36217609f38b07e7fbe9a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:44.710520 containerd[1543]: time="2025-09-11T00:31:44.710456664Z" level=info msg="StartContainer for \"17a12a433d6a6a8989dbb8386858e9398cba83650e35661075da7ed4179dfcc6\" returns successfully" Sep 11 00:31:44.732289 systemd[1]: Started cri-containerd-7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768.scope - libcontainer container 7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768. 
Sep 11 00:31:44.736199 containerd[1543]: time="2025-09-11T00:31:44.736159639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-p5zpf,Uid:67bb8c61-2f2e-46c2-a879-b0f78278865d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f\"" Sep 11 00:31:44.749150 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:44.783479 containerd[1543]: time="2025-09-11T00:31:44.783415617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zrbxh,Uid:67ee7562-8c17-492b-8855-1e9f5785122c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768\"" Sep 11 00:31:44.789782 containerd[1543]: time="2025-09-11T00:31:44.789731905Z" level=info msg="CreateContainer within sandbox \"7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:31:44.798792 containerd[1543]: time="2025-09-11T00:31:44.798746169Z" level=info msg="Container 19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:44.806646 containerd[1543]: time="2025-09-11T00:31:44.806505025Z" level=info msg="CreateContainer within sandbox \"7892626e5b0c3fe04de955b58839ee582ecd50c01e5edd1290f7fe89328cf768\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86\"" Sep 11 00:31:44.809969 containerd[1543]: time="2025-09-11T00:31:44.809915638Z" level=info msg="StartContainer for \"19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86\"" Sep 11 00:31:44.811094 containerd[1543]: time="2025-09-11T00:31:44.811028097Z" level=info msg="connecting to shim 19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86" 
address="unix:///run/containerd/s/29af852960badbd2b09caaa81699ab108a4cbd0427a36217609f38b07e7fbe9a" protocol=ttrpc version=3 Sep 11 00:31:44.839205 systemd[1]: Started cri-containerd-19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86.scope - libcontainer container 19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86. Sep 11 00:31:44.879570 containerd[1543]: time="2025-09-11T00:31:44.879511117Z" level=info msg="StartContainer for \"19d14bedff35dd8750421928bdf615623270e5822791a846f4b10beab1c91d86\" returns successfully" Sep 11 00:31:45.327287 containerd[1543]: time="2025-09-11T00:31:45.327241877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-tp65d,Uid:4d238a88-a8a4-4120-b65a-06053649067f,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:31:45.327530 containerd[1543]: time="2025-09-11T00:31:45.327482218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cc5b99d4-bnsq4,Uid:31613f2f-53c8-48c8-a407-1e86cbe5b31a,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:45.679216 systemd-networkd[1456]: cali133257f01f8: Gained IPv6LL Sep 11 00:31:45.770339 kubelet[2700]: I0911 00:31:45.770303 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:31:45.833934 kubelet[2700]: I0911 00:31:45.831879 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zrbxh" podStartSLOduration=40.831860922 podStartE2EDuration="40.831860922s" podCreationTimestamp="2025-09-11 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:45.825767463 +0000 UTC m=+46.593997836" watchObservedRunningTime="2025-09-11 00:31:45.831860922 +0000 UTC m=+46.600091275" Sep 11 00:31:45.861758 kubelet[2700]: I0911 00:31:45.860896 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-gph9x" podStartSLOduration=40.860878255 podStartE2EDuration="40.860878255s" podCreationTimestamp="2025-09-11 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:31:45.849798075 +0000 UTC m=+46.618028438" watchObservedRunningTime="2025-09-11 00:31:45.860878255 +0000 UTC m=+46.629108618" Sep 11 00:31:46.028650 systemd-networkd[1456]: caliae7fd836105: Link UP Sep 11 00:31:46.030180 systemd-networkd[1456]: caliae7fd836105: Gained carrier Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.890 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.908 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0 calico-apiserver-54bf5776bb- calico-apiserver 4d238a88-a8a4-4120-b65a-06053649067f 853 0 2025-09-11 00:31:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bf5776bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bf5776bb-tp65d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae7fd836105 [] [] }} ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.908 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" 
Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.945 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" HandleID="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Workload="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.945 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" HandleID="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Workload="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bf5776bb-tp65d", "timestamp":"2025-09-11 00:31:45.945512234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.945 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.945 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.946 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.953 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.960 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.966 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.972 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.979 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.979 [INFO][4649] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:45.988 [INFO][4649] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2 Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:46.000 [INFO][4649] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:46.015 [INFO][4649] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:46.017 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" host="localhost" Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:46.017 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:31:46.047199 containerd[1543]: 2025-09-11 00:31:46.017 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" HandleID="k8s-pod-network.eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Workload="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.023 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0", GenerateName:"calico-apiserver-54bf5776bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d238a88-a8a4-4120-b65a-06053649067f", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bf5776bb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bf5776bb-tp65d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae7fd836105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.025 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.025 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae7fd836105 ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.030 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.031 [INFO][4607] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0", GenerateName:"calico-apiserver-54bf5776bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d238a88-a8a4-4120-b65a-06053649067f", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bf5776bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2", Pod:"calico-apiserver-54bf5776bb-tp65d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae7fd836105", MAC:"1e:d8:c7:51:81:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:46.047944 containerd[1543]: 2025-09-11 00:31:46.041 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" Namespace="calico-apiserver" Pod="calico-apiserver-54bf5776bb-tp65d" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bf5776bb--tp65d-eth0" Sep 11 00:31:46.063245 systemd-networkd[1456]: calib6b3a1f9dda: Gained IPv6LL Sep 11 00:31:46.077384 containerd[1543]: time="2025-09-11T00:31:46.077328575Z" level=info msg="connecting to shim eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2" address="unix:///run/containerd/s/b504728f12415821b8c2fed17054f5d3110547c50e46f2e57ffa65d2434c51b0" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:46.100356 systemd-networkd[1456]: cali01e1fd168db: Link UP Sep 11 00:31:46.101243 systemd-networkd[1456]: cali01e1fd168db: Gained carrier Sep 11 00:31:46.122363 systemd[1]: Started cri-containerd-eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2.scope - libcontainer container eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2. Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.902 [INFO][4631] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.915 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0 calico-kube-controllers-66cc5b99d4- calico-system 31613f2f-53c8-48c8-a407-1e86cbe5b31a 850 0 2025-09-11 00:31:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66cc5b99d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66cc5b99d4-bnsq4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali01e1fd168db [] [] }} 
ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.916 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.945 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" HandleID="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Workload="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.946 [INFO][4655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" HandleID="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Workload="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66cc5b99d4-bnsq4", "timestamp":"2025-09-11 00:31:45.945134174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:45.946 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.017 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.019 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.055 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.060 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.073 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.075 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.078 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.078 [INFO][4655] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.080 [INFO][4655] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6 Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.087 [INFO][4655] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.093 [INFO][4655] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.093 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" host="localhost" Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.093 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:31:46.123309 containerd[1543]: 2025-09-11 00:31:46.093 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" HandleID="k8s-pod-network.6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Workload="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.097 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0", GenerateName:"calico-kube-controllers-66cc5b99d4-", Namespace:"calico-system", SelfLink:"", UID:"31613f2f-53c8-48c8-a407-1e86cbe5b31a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"66cc5b99d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66cc5b99d4-bnsq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1fd168db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.097 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.097 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01e1fd168db ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.100 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.101 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0", GenerateName:"calico-kube-controllers-66cc5b99d4-", Namespace:"calico-system", SelfLink:"", UID:"31613f2f-53c8-48c8-a407-1e86cbe5b31a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cc5b99d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6", Pod:"calico-kube-controllers-66cc5b99d4-bnsq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01e1fd168db", MAC:"7a:0b:a6:ad:94:97", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:46.123803 containerd[1543]: 2025-09-11 00:31:46.114 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" Namespace="calico-system" Pod="calico-kube-controllers-66cc5b99d4-bnsq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cc5b99d4--bnsq4-eth0" Sep 11 00:31:46.140590 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:46.162683 containerd[1543]: time="2025-09-11T00:31:46.162602484Z" level=info msg="connecting to shim 6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6" address="unix:///run/containerd/s/72feb3a770dc65aff54f2558c2822f6d9993f01e8ac43efbde1e3f466f4e55b1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:46.196349 systemd[1]: Started cri-containerd-6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6.scope - libcontainer container 6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6. 
Sep 11 00:31:46.219472 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:46.357012 containerd[1543]: time="2025-09-11T00:31:46.356900242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bf5776bb-tp65d,Uid:4d238a88-a8a4-4120-b65a-06053649067f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2\"" Sep 11 00:31:46.359818 containerd[1543]: time="2025-09-11T00:31:46.359751654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cc5b99d4-bnsq4,Uid:31613f2f-53c8-48c8-a407-1e86cbe5b31a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6\"" Sep 11 00:31:46.383781 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:36902.service - OpenSSH per-connection server daemon (10.0.0.1:36902). Sep 11 00:31:46.443988 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:46.446442 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:46.452731 systemd-logind[1526]: New session 8 of user core. Sep 11 00:31:46.458163 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:31:46.537838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656304497.mount: Deactivated successfully. 
Sep 11 00:31:46.576668 systemd-networkd[1456]: cali63e58e8d5bb: Gained IPv6LL Sep 11 00:31:46.662548 systemd-networkd[1456]: vxlan.calico: Link UP Sep 11 00:31:46.662724 systemd-networkd[1456]: vxlan.calico: Gained carrier Sep 11 00:31:46.833494 sshd[4802]: Connection closed by 10.0.0.1 port 36902 Sep 11 00:31:46.833793 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:46.838494 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:36902.service: Deactivated successfully. Sep 11 00:31:46.841202 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:31:46.842943 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:31:46.844433 systemd-logind[1526]: Removed session 8. Sep 11 00:31:47.279226 systemd-networkd[1456]: cali01e1fd168db: Gained IPv6LL Sep 11 00:31:47.324709 containerd[1543]: time="2025-09-11T00:31:47.324657023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mn9tv,Uid:d6874fb2-8b46-4762-a169-00ccac62f67f,Namespace:calico-system,Attempt:0,}" Sep 11 00:31:47.353645 containerd[1543]: time="2025-09-11T00:31:47.353598916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:47.354641 containerd[1543]: time="2025-09-11T00:31:47.354613801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:31:47.355937 containerd[1543]: time="2025-09-11T00:31:47.355894445Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:47.358615 containerd[1543]: time="2025-09-11T00:31:47.358582411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 11 00:31:47.359885 containerd[1543]: time="2025-09-11T00:31:47.359861300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.909919764s" Sep 11 00:31:47.359938 containerd[1543]: time="2025-09-11T00:31:47.359888301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:31:47.361140 containerd[1543]: time="2025-09-11T00:31:47.361115575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:31:47.364785 containerd[1543]: time="2025-09-11T00:31:47.364759234Z" level=info msg="CreateContainer within sandbox \"7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:31:47.374192 containerd[1543]: time="2025-09-11T00:31:47.374127698Z" level=info msg="Container 5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:47.390887 containerd[1543]: time="2025-09-11T00:31:47.390838654Z" level=info msg="CreateContainer within sandbox \"7a25c01ea949cd3c002f12921307cbd48d136604195ce795a58c8c3fb099fe3c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\"" Sep 11 00:31:47.392176 containerd[1543]: time="2025-09-11T00:31:47.392112726Z" level=info msg="StartContainer for \"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\"" Sep 11 00:31:47.397663 containerd[1543]: time="2025-09-11T00:31:47.397532689Z" level=info msg="connecting to shim 
5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5" address="unix:///run/containerd/s/e5cd58cba8d25c56a197ccfffb25f2bd65870db5bc88f42e4946c2ab0ac0cb0c" protocol=ttrpc version=3 Sep 11 00:31:47.425214 systemd[1]: Started cri-containerd-5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5.scope - libcontainer container 5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5. Sep 11 00:31:47.450676 systemd-networkd[1456]: cali3bc02a0ad05: Link UP Sep 11 00:31:47.451406 systemd-networkd[1456]: cali3bc02a0ad05: Gained carrier Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.369 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mn9tv-eth0 csi-node-driver- calico-system d6874fb2-8b46-4762-a169-00ccac62f67f 740 0 2025-09-11 00:31:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mn9tv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3bc02a0ad05 [] [] }} ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.369 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.408 [INFO][4950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" HandleID="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Workload="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.408 [INFO][4950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" HandleID="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Workload="localhost-k8s-csi--node--driver--mn9tv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mn9tv", "timestamp":"2025-09-11 00:31:47.408803574 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.409 [INFO][4950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.409 [INFO][4950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.409 [INFO][4950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.416 [INFO][4950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.422 [INFO][4950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.425 [INFO][4950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.426 [INFO][4950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.430 [INFO][4950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.430 [INFO][4950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.433 [INFO][4950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.437 [INFO][4950] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.443 [INFO][4950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.443 [INFO][4950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" host="localhost" Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.443 [INFO][4950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:31:47.470475 containerd[1543]: 2025-09-11 00:31:47.443 [INFO][4950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" HandleID="k8s-pod-network.ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Workload="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.447 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mn9tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6874fb2-8b46-4762-a169-00ccac62f67f", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mn9tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bc02a0ad05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.447 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.447 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bc02a0ad05 ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.452 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.452 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" 
Namespace="calico-system" Pod="csi-node-driver-mn9tv" WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mn9tv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6874fb2-8b46-4762-a169-00ccac62f67f", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f", Pod:"csi-node-driver-mn9tv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bc02a0ad05", MAC:"66:cd:09:99:77:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:31:47.471109 containerd[1543]: 2025-09-11 00:31:47.463 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" Namespace="calico-system" Pod="csi-node-driver-mn9tv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mn9tv-eth0" Sep 11 00:31:47.622597 containerd[1543]: time="2025-09-11T00:31:47.622500488Z" level=info msg="StartContainer for \"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\" returns successfully" Sep 11 00:31:47.646984 containerd[1543]: time="2025-09-11T00:31:47.646937366Z" level=info msg="connecting to shim ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f" address="unix:///run/containerd/s/dd40fe6d8cd51bd1659b5b549af1d7c8b09ee551a015ad02ed896c5271cdf066" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:31:47.673177 systemd[1]: Started cri-containerd-ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f.scope - libcontainer container ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f. Sep 11 00:31:47.688737 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:31:47.708489 containerd[1543]: time="2025-09-11T00:31:47.708340538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mn9tv,Uid:d6874fb2-8b46-4762-a169-00ccac62f67f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f\"" Sep 11 00:31:48.047236 systemd-networkd[1456]: caliae7fd836105: Gained IPv6LL Sep 11 00:31:48.367258 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Sep 11 00:31:48.521368 kubelet[2700]: I0911 00:31:48.521305 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-j46cl" podStartSLOduration=24.896417 podStartE2EDuration="30.52128936s" podCreationTimestamp="2025-09-11 00:31:18 +0000 UTC" firstStartedPulling="2025-09-11 00:31:41.735648137 +0000 UTC m=+42.503878490" lastFinishedPulling="2025-09-11 00:31:47.360520497 +0000 UTC m=+48.128750850" observedRunningTime="2025-09-11 00:31:48.520941226 +0000 UTC m=+49.289171569" 
watchObservedRunningTime="2025-09-11 00:31:48.52128936 +0000 UTC m=+49.289519713" Sep 11 00:31:48.559664 systemd-networkd[1456]: cali3bc02a0ad05: Gained IPv6LL Sep 11 00:31:48.568798 containerd[1543]: time="2025-09-11T00:31:48.568757721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\" id:\"6f679da1f2f5380dbd2dd2ac54cac02bada76327cf73f443e936b5a2bc1a36bb\" pid:5062 exit_status:1 exited_at:{seconds:1757550708 nanos:568417452}" Sep 11 00:31:49.578200 containerd[1543]: time="2025-09-11T00:31:49.578155323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\" id:\"1692a7f72b80b5d283d3dc275308b6b4a866cc46827b1435b50e9081e7a8a471\" pid:5089 exit_status:1 exited_at:{seconds:1757550709 nanos:577786331}" Sep 11 00:31:50.202776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962021061.mount: Deactivated successfully. Sep 11 00:31:50.669202 containerd[1543]: time="2025-09-11T00:31:50.669096290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:50.669954 containerd[1543]: time="2025-09-11T00:31:50.669925577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:31:50.671241 containerd[1543]: time="2025-09-11T00:31:50.671213634Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:50.674017 containerd[1543]: time="2025-09-11T00:31:50.673964466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 
00:31:50.674644 containerd[1543]: time="2025-09-11T00:31:50.674618964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.313473253s" Sep 11 00:31:50.674688 containerd[1543]: time="2025-09-11T00:31:50.674647838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:31:50.678465 containerd[1543]: time="2025-09-11T00:31:50.678443331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:31:50.682463 containerd[1543]: time="2025-09-11T00:31:50.682428570Z" level=info msg="CreateContainer within sandbox \"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:31:50.690841 containerd[1543]: time="2025-09-11T00:31:50.690790573Z" level=info msg="Container f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:50.701472 containerd[1543]: time="2025-09-11T00:31:50.701426472Z" level=info msg="CreateContainer within sandbox \"a6bbf639422c4474718e559a2acd690436018eaa341066ce86d97f7ff8b25ca4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9\"" Sep 11 00:31:50.702088 containerd[1543]: time="2025-09-11T00:31:50.701944985Z" level=info msg="StartContainer for \"f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9\"" Sep 11 00:31:50.704370 containerd[1543]: time="2025-09-11T00:31:50.704341182Z" level=info msg="connecting to shim 
f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9" address="unix:///run/containerd/s/e25005c68f21bdc57a94c816d4c70e58dac8546843f8a2654b5d8b8a1f699506" protocol=ttrpc version=3 Sep 11 00:31:50.755268 systemd[1]: Started cri-containerd-f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9.scope - libcontainer container f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9. Sep 11 00:31:50.804946 containerd[1543]: time="2025-09-11T00:31:50.804905985Z" level=info msg="StartContainer for \"f1bd015c1368a7c7a2dd8b08d174901f494dc6cb81ee647b696f79c3f6bcccb9\" returns successfully" Sep 11 00:31:51.849630 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:37684.service - OpenSSH per-connection server daemon (10.0.0.1:37684). Sep 11 00:31:51.916450 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 37684 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:51.918409 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:51.923168 systemd-logind[1526]: New session 9 of user core. Sep 11 00:31:51.937186 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:31:52.075090 sshd[5147]: Connection closed by 10.0.0.1 port 37684 Sep 11 00:31:52.075399 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Sep 11 00:31:52.078599 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:37684.service: Deactivated successfully. Sep 11 00:31:52.080663 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:31:52.084315 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:31:52.085999 systemd-logind[1526]: Removed session 9. 
Sep 11 00:31:55.262795 containerd[1543]: time="2025-09-11T00:31:55.262685867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:55.264168 containerd[1543]: time="2025-09-11T00:31:55.264111903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:31:55.265687 containerd[1543]: time="2025-09-11T00:31:55.265628779Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:55.269183 containerd[1543]: time="2025-09-11T00:31:55.269137702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:55.270070 containerd[1543]: time="2025-09-11T00:31:55.269900554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.591426284s" Sep 11 00:31:55.270070 containerd[1543]: time="2025-09-11T00:31:55.269960847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:31:55.273355 containerd[1543]: time="2025-09-11T00:31:55.273315471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:31:55.286059 containerd[1543]: time="2025-09-11T00:31:55.285154012Z" level=info msg="CreateContainer within sandbox 
\"ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:31:55.474314 containerd[1543]: time="2025-09-11T00:31:55.474262473Z" level=info msg="Container 95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:55.591279 containerd[1543]: time="2025-09-11T00:31:55.591156667Z" level=info msg="CreateContainer within sandbox \"ce4c6e7eec3d4b55dc59601a0593179285faf785d98e5e646104d5be8786265f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880\"" Sep 11 00:31:55.591746 containerd[1543]: time="2025-09-11T00:31:55.591699125Z" level=info msg="StartContainer for \"95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880\"" Sep 11 00:31:55.592991 containerd[1543]: time="2025-09-11T00:31:55.592949952Z" level=info msg="connecting to shim 95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880" address="unix:///run/containerd/s/257dd9c70fd872ea358d5de491503690dc3cc54734d5c4c4f74cb3887768f269" protocol=ttrpc version=3 Sep 11 00:31:55.620271 systemd[1]: Started cri-containerd-95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880.scope - libcontainer container 95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880. 
Sep 11 00:31:55.666363 containerd[1543]: time="2025-09-11T00:31:55.666324926Z" level=info msg="StartContainer for \"95dcb0dbd3a62f3128a9c79d9a356d5177ba4a6676ec82cac28b431d682ab880\" returns successfully" Sep 11 00:31:55.715757 containerd[1543]: time="2025-09-11T00:31:55.715696416Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:31:55.759222 containerd[1543]: time="2025-09-11T00:31:55.759162647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:31:55.760907 containerd[1543]: time="2025-09-11T00:31:55.760880189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 487.529202ms" Sep 11 00:31:55.760907 containerd[1543]: time="2025-09-11T00:31:55.760905687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:31:55.761805 containerd[1543]: time="2025-09-11T00:31:55.761771692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:31:55.984732 containerd[1543]: time="2025-09-11T00:31:55.984654680Z" level=info msg="CreateContainer within sandbox \"eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:31:56.011069 containerd[1543]: time="2025-09-11T00:31:56.010642978Z" level=info msg="Container 9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:31:56.022356 
containerd[1543]: time="2025-09-11T00:31:56.022307853Z" level=info msg="CreateContainer within sandbox \"eaa63f70db65625b55eb354d25748ecea002ac4142e17ccc12b9785332effca2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577\"" Sep 11 00:31:56.023058 containerd[1543]: time="2025-09-11T00:31:56.022999120Z" level=info msg="StartContainer for \"9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577\"" Sep 11 00:31:56.024298 containerd[1543]: time="2025-09-11T00:31:56.024257460Z" level=info msg="connecting to shim 9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577" address="unix:///run/containerd/s/b504728f12415821b8c2fed17054f5d3110547c50e46f2e57ffa65d2434c51b0" protocol=ttrpc version=3 Sep 11 00:31:56.043203 systemd[1]: Started cri-containerd-9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577.scope - libcontainer container 9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577. 
Sep 11 00:31:56.099200 containerd[1543]: time="2025-09-11T00:31:56.099144606Z" level=info msg="StartContainer for \"9c724d42bf70ce5c15b66e365b8ea897babbbe6c0a6953d62b86d8b679db9577\" returns successfully" Sep 11 00:31:56.515706 kubelet[2700]: I0911 00:31:56.515619 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7849657554-dxjmd" podStartSLOduration=7.469977846 podStartE2EDuration="16.515603438s" podCreationTimestamp="2025-09-11 00:31:40 +0000 UTC" firstStartedPulling="2025-09-11 00:31:41.632708404 +0000 UTC m=+42.400938757" lastFinishedPulling="2025-09-11 00:31:50.678333996 +0000 UTC m=+51.446564349" observedRunningTime="2025-09-11 00:31:51.494859761 +0000 UTC m=+52.263090114" watchObservedRunningTime="2025-09-11 00:31:56.515603438 +0000 UTC m=+57.283833791" Sep 11 00:31:56.550851 kubelet[2700]: I0911 00:31:56.549107 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bf5776bb-p5zpf" podStartSLOduration=30.013439215 podStartE2EDuration="40.549087177s" podCreationTimestamp="2025-09-11 00:31:16 +0000 UTC" firstStartedPulling="2025-09-11 00:31:44.737494635 +0000 UTC m=+45.505724988" lastFinishedPulling="2025-09-11 00:31:55.273142597 +0000 UTC m=+56.041372950" observedRunningTime="2025-09-11 00:31:56.517772807 +0000 UTC m=+57.286003160" watchObservedRunningTime="2025-09-11 00:31:56.549087177 +0000 UTC m=+57.317317530" Sep 11 00:31:57.090144 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:37700.service - OpenSSH per-connection server daemon (10.0.0.1:37700). Sep 11 00:31:57.196159 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 37700 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:31:57.198648 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:31:57.202622 systemd-logind[1526]: New session 10 of user core. 
Sep 11 00:31:57.212184 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:31:57.367496 sshd[5261]: Connection closed by 10.0.0.1 port 37700
Sep 11 00:31:57.367744 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:57.373571 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:37700.service: Deactivated successfully.
Sep 11 00:31:57.375924 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:31:57.376920 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:31:57.378874 systemd-logind[1526]: Removed session 10.
Sep 11 00:31:57.502637 kubelet[2700]: I0911 00:31:57.502598    2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:31:58.034380 kubelet[2700]: I0911 00:31:58.034311    2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bf5776bb-tp65d" podStartSLOduration=32.631004706 podStartE2EDuration="42.034294889s" podCreationTimestamp="2025-09-11 00:31:16 +0000 UTC" firstStartedPulling="2025-09-11 00:31:46.358335676 +0000 UTC m=+47.126566029" lastFinishedPulling="2025-09-11 00:31:55.761625868 +0000 UTC m=+56.529856212" observedRunningTime="2025-09-11 00:31:56.549607582 +0000 UTC m=+57.317837935" watchObservedRunningTime="2025-09-11 00:31:58.034294889 +0000 UTC m=+58.802525242"
Sep 11 00:31:58.617670 kubelet[2700]: I0911 00:31:58.617438    2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:31:59.881965 containerd[1543]: time="2025-09-11T00:31:59.881906473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:59.883257 containerd[1543]: time="2025-09-11T00:31:59.883222621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 11 00:31:59.885702 containerd[1543]: time="2025-09-11T00:31:59.885005927Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:59.889248 containerd[1543]: time="2025-09-11T00:31:59.889201688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:31:59.892051 containerd[1543]: time="2025-09-11T00:31:59.891604515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.129792038s"
Sep 11 00:31:59.892051 containerd[1543]: time="2025-09-11T00:31:59.891654199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 11 00:31:59.913602 containerd[1543]: time="2025-09-11T00:31:59.913567025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 11 00:31:59.938764 containerd[1543]: time="2025-09-11T00:31:59.938712766Z" level=info msg="CreateContainer within sandbox \"6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 00:31:59.948862 containerd[1543]: time="2025-09-11T00:31:59.947794593Z" level=info msg="Container 421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:31:59.962313 containerd[1543]: time="2025-09-11T00:31:59.962261181Z" level=info msg="CreateContainer within sandbox \"6a7a3d6eca468c0c2a0611d0647943e69fd4db7e2527ed9d7af6fdc89a01d6b6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\""
Sep 11 00:31:59.963088 containerd[1543]: time="2025-09-11T00:31:59.963063576Z" level=info msg="StartContainer for \"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\""
Sep 11 00:31:59.965710 containerd[1543]: time="2025-09-11T00:31:59.965670056Z" level=info msg="connecting to shim 421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16" address="unix:///run/containerd/s/72feb3a770dc65aff54f2558c2822f6d9993f01e8ac43efbde1e3f466f4e55b1" protocol=ttrpc version=3
Sep 11 00:32:00.004229 systemd[1]: Started cri-containerd-421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16.scope - libcontainer container 421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16.
Sep 11 00:32:00.054733 containerd[1543]: time="2025-09-11T00:32:00.054633981Z" level=info msg="StartContainer for \"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\" returns successfully"
Sep 11 00:32:00.925329 kubelet[2700]: I0911 00:32:00.925245    2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66cc5b99d4-bnsq4" podStartSLOduration=28.373810197 podStartE2EDuration="41.925228399s" podCreationTimestamp="2025-09-11 00:31:19 +0000 UTC" firstStartedPulling="2025-09-11 00:31:46.361471292 +0000 UTC m=+47.129701645" lastFinishedPulling="2025-09-11 00:31:59.912889494 +0000 UTC m=+60.681119847" observedRunningTime="2025-09-11 00:32:00.924469946 +0000 UTC m=+61.692700299" watchObservedRunningTime="2025-09-11 00:32:00.925228399 +0000 UTC m=+61.693458752"
Sep 11 00:32:00.950989 containerd[1543]: time="2025-09-11T00:32:00.950947595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\" id:\"e35a9bfc9a7c685ffcf9adc4e8253f3ecc10195204bb2d079adb1fb1da29864f\" pid:5349 exited_at:{seconds:1757550720 nanos:950599952}"
Sep 11 00:32:01.840863 containerd[1543]: time="2025-09-11T00:32:01.840766881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:01.847707 containerd[1543]: time="2025-09-11T00:32:01.847640579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 11 00:32:01.902433 containerd[1543]: time="2025-09-11T00:32:01.902375378Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:01.925495 containerd[1543]: time="2025-09-11T00:32:01.925427539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:01.925881 containerd[1543]: time="2025-09-11T00:32:01.925855796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.012251902s"
Sep 11 00:32:01.925881 containerd[1543]: time="2025-09-11T00:32:01.925879390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 11 00:32:02.093857 containerd[1543]: time="2025-09-11T00:32:02.093665254Z" level=info msg="CreateContainer within sandbox \"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 11 00:32:02.388868 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:33542.service - OpenSSH per-connection server daemon (10.0.0.1:33542).
Sep 11 00:32:02.403983 containerd[1543]: time="2025-09-11T00:32:02.403938954Z" level=info msg="Container 92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:32:02.418521 containerd[1543]: time="2025-09-11T00:32:02.418481927Z" level=info msg="CreateContainer within sandbox \"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834\""
Sep 11 00:32:02.419056 containerd[1543]: time="2025-09-11T00:32:02.419015269Z" level=info msg="StartContainer for \"92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834\""
Sep 11 00:32:02.421648 containerd[1543]: time="2025-09-11T00:32:02.421623031Z" level=info msg="connecting to shim 92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834" address="unix:///run/containerd/s/dd40fe6d8cd51bd1659b5b549af1d7c8b09ee551a015ad02ed896c5271cdf066" protocol=ttrpc version=3
Sep 11 00:32:02.446232 systemd[1]: Started cri-containerd-92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834.scope - libcontainer container 92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834.
Sep 11 00:32:02.461555 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 33542 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:02.463371 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:02.468743 systemd-logind[1526]: New session 11 of user core.
Sep 11 00:32:02.477171 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:32:02.501194 containerd[1543]: time="2025-09-11T00:32:02.501109170Z" level=info msg="StartContainer for \"92cd88b55c566337e593d359d00793f9e2c87bfcbe1f65c89581426f474d8834\" returns successfully"
Sep 11 00:32:02.502353 containerd[1543]: time="2025-09-11T00:32:02.502326965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 11 00:32:02.665571 sshd[5387]: Connection closed by 10.0.0.1 port 33542
Sep 11 00:32:02.665772 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:02.677545 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:33542.service: Deactivated successfully.
Sep 11 00:32:02.679221 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:32:02.679968 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:32:02.682843 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:33552.service - OpenSSH per-connection server daemon (10.0.0.1:33552).
Sep 11 00:32:02.683644 systemd-logind[1526]: Removed session 11.
Sep 11 00:32:02.729400 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 33552 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:02.730710 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:02.735273 systemd-logind[1526]: New session 12 of user core.
Sep 11 00:32:02.744177 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:32:02.900757 sshd[5416]: Connection closed by 10.0.0.1 port 33552
Sep 11 00:32:02.903737 sshd-session[5414]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:02.913515 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:33552.service: Deactivated successfully.
Sep 11 00:32:02.917580 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:32:02.919352 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:32:02.924240 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:33560.service - OpenSSH per-connection server daemon (10.0.0.1:33560).
Sep 11 00:32:02.925213 systemd-logind[1526]: Removed session 12.
Sep 11 00:32:02.970151 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:02.971741 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:02.976328 systemd-logind[1526]: New session 13 of user core.
Sep 11 00:32:02.990160 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:32:03.115995 sshd[5429]: Connection closed by 10.0.0.1 port 33560
Sep 11 00:32:03.116280 sshd-session[5427]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:03.119857 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:33560.service: Deactivated successfully.
Sep 11 00:32:03.121650 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:32:03.122305 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:32:03.123786 systemd-logind[1526]: Removed session 13.
Sep 11 00:32:04.843285 containerd[1543]: time="2025-09-11T00:32:04.843222395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:04.844496 containerd[1543]: time="2025-09-11T00:32:04.844464533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 11 00:32:04.846087 containerd[1543]: time="2025-09-11T00:32:04.846025336Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:04.847887 containerd[1543]: time="2025-09-11T00:32:04.847845129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:32:04.848407 containerd[1543]: time="2025-09-11T00:32:04.848375303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.34601865s"
Sep 11 00:32:04.848407 containerd[1543]: time="2025-09-11T00:32:04.848405562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 11 00:32:04.853191 containerd[1543]: time="2025-09-11T00:32:04.853155351Z" level=info msg="CreateContainer within sandbox \"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:32:04.862506 containerd[1543]: time="2025-09-11T00:32:04.862469432Z" level=info msg="Container a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:32:04.872596 containerd[1543]: time="2025-09-11T00:32:04.872550936Z" level=info msg="CreateContainer within sandbox \"ec0f9383e094de873a4ccc0b6020a29c25336e7104d6ff45144f7101039c2b1f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1\""
Sep 11 00:32:04.875067 containerd[1543]: time="2025-09-11T00:32:04.873282317Z" level=info msg="StartContainer for \"a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1\""
Sep 11 00:32:04.875067 containerd[1543]: time="2025-09-11T00:32:04.874732557Z" level=info msg="connecting to shim a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1" address="unix:///run/containerd/s/dd40fe6d8cd51bd1659b5b549af1d7c8b09ee551a015ad02ed896c5271cdf066" protocol=ttrpc version=3
Sep 11 00:32:04.899191 systemd[1]: Started cri-containerd-a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1.scope - libcontainer container a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1.
Sep 11 00:32:05.238019 containerd[1543]: time="2025-09-11T00:32:05.237893660Z" level=info msg="StartContainer for \"a497d4260b1de86d38481d3a384eb89542d609e5ed8f0067e83c5cca74d958a1\" returns successfully"
Sep 11 00:32:05.895758 kubelet[2700]: I0911 00:32:05.895722    2700 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:32:05.926353 kubelet[2700]: I0911 00:32:05.926305    2700 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:32:06.162242 kubelet[2700]: I0911 00:32:06.161707    2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mn9tv" podStartSLOduration=30.022436328 podStartE2EDuration="47.161690804s" podCreationTimestamp="2025-09-11 00:31:19 +0000 UTC" firstStartedPulling="2025-09-11 00:31:47.709763158 +0000 UTC m=+48.477993501" lastFinishedPulling="2025-09-11 00:32:04.849017623 +0000 UTC m=+65.617247977" observedRunningTime="2025-09-11 00:32:06.161546356 +0000 UTC m=+66.929776709" watchObservedRunningTime="2025-09-11 00:32:06.161690804 +0000 UTC m=+66.929921157"
Sep 11 00:32:08.133270 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:33572.service - OpenSSH per-connection server daemon (10.0.0.1:33572).
Sep 11 00:32:08.193800 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 33572 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:08.195736 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:08.200176 systemd-logind[1526]: New session 14 of user core.
Sep 11 00:32:08.207171 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:32:08.549547 sshd[5490]: Connection closed by 10.0.0.1 port 33572
Sep 11 00:32:08.549908 sshd-session[5488]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:08.554347 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:32:08.554608 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:33572.service: Deactivated successfully.
Sep 11 00:32:08.556384 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:32:08.557790 systemd-logind[1526]: Removed session 14.
Sep 11 00:32:10.516210 containerd[1543]: time="2025-09-11T00:32:10.516146727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\" id:\"4b88c8cd49a115fb9c47399f8e399ead3aa6fe9316d60cf7ffeb2b6fbba461b1\" pid:5519 exit_status:1 exited_at:{seconds:1757550730 nanos:515780152}"
Sep 11 00:32:13.568226 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:34258.service - OpenSSH per-connection server daemon (10.0.0.1:34258).
Sep 11 00:32:13.629691 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 34258 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:13.631444 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:13.637781 systemd-logind[1526]: New session 15 of user core.
Sep 11 00:32:13.647196 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:32:13.776618 sshd[5535]: Connection closed by 10.0.0.1 port 34258
Sep 11 00:32:13.776881 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:13.780704 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:34258.service: Deactivated successfully.
Sep 11 00:32:13.782673 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:32:13.783572 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:32:13.784826 systemd-logind[1526]: Removed session 15.
Sep 11 00:32:18.788885 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264).
Sep 11 00:32:18.840660 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:18.842261 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:18.847089 systemd-logind[1526]: New session 16 of user core.
Sep 11 00:32:18.861196 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:32:18.975896 sshd[5551]: Connection closed by 10.0.0.1 port 34264
Sep 11 00:32:18.976463 sshd-session[5549]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:18.980983 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:34264.service: Deactivated successfully.
Sep 11 00:32:18.982896 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:32:18.983724 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:32:18.984919 systemd-logind[1526]: Removed session 16.
Sep 11 00:32:19.607474 containerd[1543]: time="2025-09-11T00:32:19.607411829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\" id:\"9d4420dc0266ef644487ec3b778b8b5a7d7eeee04b9d7d3d60d01e615e1aec31\" pid:5575 exited_at:{seconds:1757550739 nanos:606400054}"
Sep 11 00:32:23.993766 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:53390.service - OpenSSH per-connection server daemon (10.0.0.1:53390).
Sep 11 00:32:24.066668 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 53390 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:24.068658 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:24.075261 systemd-logind[1526]: New session 17 of user core.
Sep 11 00:32:24.096272 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:32:24.311236 sshd[5591]: Connection closed by 10.0.0.1 port 53390
Sep 11 00:32:24.312284 sshd-session[5589]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:24.321231 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:53390.service: Deactivated successfully.
Sep 11 00:32:24.323427 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:32:24.324474 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:32:24.327782 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:53402.service - OpenSSH per-connection server daemon (10.0.0.1:53402).
Sep 11 00:32:24.328623 systemd-logind[1526]: Removed session 17.
Sep 11 00:32:24.366719 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 53402 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:24.368067 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:24.372619 systemd-logind[1526]: New session 18 of user core.
Sep 11 00:32:24.383330 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:32:24.679274 sshd[5607]: Connection closed by 10.0.0.1 port 53402
Sep 11 00:32:24.679590 sshd-session[5605]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:24.689159 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:53402.service: Deactivated successfully.
Sep 11 00:32:24.691246 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:32:24.692007 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:32:24.695027 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:53410.service - OpenSSH per-connection server daemon (10.0.0.1:53410).
Sep 11 00:32:24.697449 systemd-logind[1526]: Removed session 18.
Sep 11 00:32:24.748997 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:24.750477 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:24.755002 systemd-logind[1526]: New session 19 of user core.
Sep 11 00:32:24.763158 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:32:25.435503 sshd[5620]: Connection closed by 10.0.0.1 port 53410
Sep 11 00:32:25.435812 sshd-session[5618]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:25.449465 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:53410.service: Deactivated successfully.
Sep 11 00:32:25.452332 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:32:25.454066 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:32:25.463400 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:53420.service - OpenSSH per-connection server daemon (10.0.0.1:53420).
Sep 11 00:32:25.464341 systemd-logind[1526]: Removed session 19.
Sep 11 00:32:25.507378 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 53420 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:25.509277 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:25.514482 systemd-logind[1526]: New session 20 of user core.
Sep 11 00:32:25.519177 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:32:25.911775 sshd[5641]: Connection closed by 10.0.0.1 port 53420
Sep 11 00:32:25.912480 sshd-session[5639]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:25.923919 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:53420.service: Deactivated successfully.
Sep 11 00:32:25.926071 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:32:25.927755 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:32:25.930444 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:53426.service - OpenSSH per-connection server daemon (10.0.0.1:53426).
Sep 11 00:32:25.932302 systemd-logind[1526]: Removed session 20.
Sep 11 00:32:25.996431 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 53426 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:25.998446 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:26.003342 systemd-logind[1526]: New session 21 of user core.
Sep 11 00:32:26.014189 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:32:26.160066 sshd[5654]: Connection closed by 10.0.0.1 port 53426
Sep 11 00:32:26.160416 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:26.164365 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:53426.service: Deactivated successfully.
Sep 11 00:32:26.167267 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:32:26.169449 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:32:26.170967 systemd-logind[1526]: Removed session 21.
Sep 11 00:32:30.952907 containerd[1543]: time="2025-09-11T00:32:30.952862610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\" id:\"e4acea20a63e3fdcdcf0633a07e4836a04bd8e12d495b0f9d531c0a24663377b\" pid:5685 exited_at:{seconds:1757550750 nanos:952618565}"
Sep 11 00:32:31.178898 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:55904.service - OpenSSH per-connection server daemon (10.0.0.1:55904).
Sep 11 00:32:31.221898 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 55904 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:31.223505 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:31.227585 systemd-logind[1526]: New session 22 of user core.
Sep 11 00:32:31.234160 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:32:31.361957 sshd[5698]: Connection closed by 10.0.0.1 port 55904
Sep 11 00:32:31.362645 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:31.366906 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:55904.service: Deactivated successfully.
Sep 11 00:32:31.369164 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:32:31.369986 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:32:31.371400 systemd-logind[1526]: Removed session 22.
Sep 11 00:32:34.921784 containerd[1543]: time="2025-09-11T00:32:34.921740322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"421626ae77517a44a9a626f695f5cbb634ab9a7568be14bc369b9ce65b416e16\" id:\"4bccf1d7c8913a996294e4feb8cec4107668dcdeede8d70d4e0cb81b27c46284\" pid:5726 exited_at:{seconds:1757550754 nanos:921460169}"
Sep 11 00:32:36.374879 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:55920.service - OpenSSH per-connection server daemon (10.0.0.1:55920).
Sep 11 00:32:36.446708 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 55920 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:36.448351 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:36.452680 systemd-logind[1526]: New session 23 of user core.
Sep 11 00:32:36.463180 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:32:36.805892 sshd[5739]: Connection closed by 10.0.0.1 port 55920
Sep 11 00:32:36.806223 sshd-session[5737]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:36.810562 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:55920.service: Deactivated successfully.
Sep 11 00:32:36.812740 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:32:36.813471 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:32:36.814847 systemd-logind[1526]: Removed session 23.
Sep 11 00:32:38.264991 containerd[1543]: time="2025-09-11T00:32:38.264940258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a54fe5d236f4a0c81955e7cde9a6f829337f2c2214a25890eb42ea519ced6f5\" id:\"48867239996ceb3766a5502191eabbe3c4e07644f15285f15708204dcef18139\" pid:5765 exited_at:{seconds:1757550758 nanos:264602799}"
Sep 11 00:32:40.520014 containerd[1543]: time="2025-09-11T00:32:40.519965487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06063b502a3ee9e805c90900484d24c9a377a51348d2a46b0e7816b256bcbfaf\" id:\"33c7ec1e4e594712359cfbaa655d566a1a1128bdfc76f0af28855e8196b9d707\" pid:5790 exited_at:{seconds:1757550760 nanos:519233298}"
Sep 11 00:32:41.819490 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158).
Sep 11 00:32:41.870794 sshd[5804]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:32:41.872576 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:41.876896 systemd-logind[1526]: New session 24 of user core.
Sep 11 00:32:41.882165 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:32:42.064132 sshd[5806]: Connection closed by 10.0.0.1 port 59158
Sep 11 00:32:42.064440 sshd-session[5804]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:42.068523 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:59158.service: Deactivated successfully.
Sep 11 00:32:42.070657 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:32:42.071397 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:32:42.072506 systemd-logind[1526]: Removed session 24.